Patents by Inventor Akshaya RAMASWAMY

Akshaya RAMASWAMY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230215176
    Abstract: This disclosure relates generally to a method and a system for spatio-temporal polarization video analysis. The spatio-temporal polarization data is analyzed for computer vision applications such as object detection, image classification, image captioning, image reconstruction or inpainting, face recognition, and action recognition. Numerous classical and deep learning methods have been applied to polarimetric data for polarimetric imaging analysis; however, available pre-trained models may not be directly suitable for polarization data, as polarimetric data is more complex. Further, compared to analysis of polarimetric images, a significantly larger set of actions can be detected from polarimetric videos, making video-level analysis more effective. The disclosure presents a spatio-temporal analysis of polarization video.
    Type: Application
    Filed: December 21, 2022
    Publication date: July 6, 2023
    Applicant: Tata Consultancy Services Limited
    Inventors: Rokkam Krishna KANTH, Akshaya Ramaswamy, Achanna Anil Kumar, Jayavardhana Rama Gubbi Lakshminarasimha, Balamuralidhar Purushothaman
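As background for the entry above: polarization video frames are commonly converted into Stokes parameters before any spatio-temporal analysis. The sketch below is an illustrative NumPy computation of the linear Stokes parameters and the degree/angle of linear polarization from four polarizer-angle channels per frame; it is a standard textbook formulation, not the patented method, and all array names are hypothetical.

```python
import numpy as np

def stokes_from_polarizer_frames(i0, i45, i90, i135):
    """Compute linear Stokes parameters and derived polarization
    quantities from intensity frames captured behind polarizers
    at 0, 45, 90 and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization
    return s0, s1, s2, dolp, aolp

# A spatio-temporal stack: T frames of H x W intensities per polarizer angle.
T, H, W = 4, 8, 8
rng = np.random.default_rng(0)
i0, i45, i90, i135 = (rng.uniform(0.1, 1.0, (T, H, W)) for _ in range(4))
s0, s1, s2, dolp, aolp = stokes_from_polarizer_frames(i0, i45, i90, i135)
print(dolp.shape)  # one DoLP map per frame -> (4, 8, 8)
```

Stacking the per-frame DoLP/AoLP maps along the time axis yields the kind of spatio-temporal polarization volume that video models can then consume.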
  • Patent number: 11657590
    Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to extract spatial and temporal information, which in turn affects interpretation of the video data. The disclosure herein generally relates to video analysis and, more particularly, to a method and system for extracting spatio-temporal information from a video being analyzed. The system uses a multi-layer neural network architecture to extract spatial and temporal information from the video. A method of training the neural network is presented in which micro-scale information is extracted from a latent representation of the video. This representation is generated using an attention network and is then used to extract spatio-temporal information corresponding to the collected video, which supports multiple video analysis applications such as searching actions in videos, action detection, and localization.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: May 23, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Karthik Seemakurthy, Balamuralidhar Purushothaman
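The abstract above describes an attention network operating over a latent video representation. As a rough illustration of the generic building block involved, the following sketch applies scaled dot-product attention across the time axis of a latent sequence; the shapes, weights, and function names are assumptions for demonstration, not the patented architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(latent, Wq, Wk, Wv):
    """Scaled dot-product attention over the time axis of a latent
    video representation of shape (T, D)."""
    q, k, v = latent @ Wq, latent @ Wk, latent @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)   # (T, T) temporal attention map
    return weights @ v, weights

rng = np.random.default_rng(1)
T, D = 6, 16                              # 6 time steps, 16-dim latent
latent = rng.standard_normal((T, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
attended, weights = temporal_attention(latent, Wq, Wk, Wv)
print(attended.shape)  # -> (6, 16)
```

Each output time step is a weighted mixture of all frames' latent features, which is one way fine-grained (micro-scale) temporal cues can be surfaced from a latent representation.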
  • Patent number: 11631247
    Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to capture spatio-temporal representations, which in turn affects interpretation of video data. The disclosure herein generally relates to video analysis and, more particularly, to a method and system for capturing spatio-temporal representations for video reconstruction and analysis. The method presents different architecture variations built from three main deep network components: 2D convolution units, 3D convolution units, and long short-term memory (LSTM) units for video reconstruction and analysis. These variations are trained to learn the spatio-temporal representation of videos in order to generate a pre-trained video analysis module. By weighing the advantages and disadvantages of the different architectural configurations, a novel architecture is designed for video reconstruction.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: April 18, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Aparna Kanakatte Gurumurthy, Avik Ghose
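The entry above combines 2D convolution units (spatial) with LSTM units (temporal). To make the division of labor concrete, here is a minimal NumPy sketch of that generic pattern: a naive 2D convolution extracts per-frame features and a single LSTM cell aggregates them across frames. Every kernel, size, and initialization here is a hypothetical stand-in, not the disclosed architecture.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D convolution: the spatial feature-extraction unit."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell: the temporal aggregation unit."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)          # input, forget, output, candidate gates
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

rng = np.random.default_rng(2)
video = rng.standard_normal((5, 10, 10))       # T x H x W toy clip
kernel = rng.standard_normal((3, 3))
D, X = 8, 64                                   # hidden size, flattened feature size (8x8)
W = rng.standard_normal((4 * D, X)) * 0.1
U = rng.standard_normal((4 * D, D)) * 0.1
b = np.zeros(4 * D)
h, c = np.zeros(D), np.zeros(D)
for frame in video:                            # 2D conv per frame, LSTM across frames
    feat = conv2d_valid(frame, kernel).ravel()
    h, c = lstm_step(feat, h, c, W, U, b)
print(h.shape)  # final spatio-temporal summary -> (8,)
```

A 3D convolution unit would instead slide the kernel jointly over space and time; the patented variations explore trade-offs among these three component types.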
  • Publication number: 20220019804
    Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to capture spatio-temporal representations, which in turn affects interpretation of video data. The disclosure herein generally relates to video analysis and, more particularly, to a method and system for capturing spatio-temporal representations for video reconstruction and analysis. The method presents different architecture variations built from three main deep network components: 2D convolution units, 3D convolution units, and long short-term memory (LSTM) units for video reconstruction and analysis. These variations are trained to learn the spatio-temporal representation of videos in order to generate a pre-trained video analysis module. By weighing the advantages and disadvantages of the different architectural configurations, a novel architecture is designed for video reconstruction.
    Type: Application
    Filed: March 10, 2021
    Publication date: January 20, 2022
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Aparna Kanakatte Gurumurthy, Avik Ghose
  • Publication number: 20210390313
    Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to extract spatial and temporal information, which in turn affects interpretation of the video data. The disclosure herein generally relates to video analysis and, more particularly, to a method and system for extracting spatio-temporal information from a video being analyzed. The system uses a multi-layer neural network architecture to extract spatial and temporal information from the video. A method of training the neural network is presented in which micro-scale information is extracted from a latent representation of the video. This representation is generated using an attention network and is then used to extract spatio-temporal information corresponding to the collected video, which supports multiple video analysis applications such as searching actions in videos, action detection, and localization.
    Type: Application
    Filed: March 1, 2021
    Publication date: December 16, 2021
    Applicant: Tata Consultancy Services Limited
    Inventors: JAYAVARDHANA RAMA GUBBI LAKSHMINARASIMHA, Akshaya RAMASWAMY, Karthik SEEMAKURTHY, Balamuralidhar PURUSHOTHAMAN
  • Patent number: 11200657
    Abstract: State-of-the-art image processing techniques, such as background subtraction and Convolutional Neural Network based approaches, fail to support certain datasets when used for change detection. The disclosure herein generally relates to semantic change detection and, more particularly, to a method and system for semantic change detection using a deep neural network feature correlation approach. The system uses an adaptive correlation layer that determines the extent of computation required at pixel level, based on the amount of information at each pixel, and uses this information in further computation for semantic change detection. The determined extent of computation is then used to extract semantic features, which are used to compute one or more correlation maps between at least one feature map of a test image and the corresponding reference image. The semantic changes are then determined from the one or more correlation maps.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: December 14, 2021
    Assignee: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Ram Prabhakar Kathirvel, Venkatesh Babu Radhakrishnan
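The abstract above turns on correlation maps between deep features of a test image and a reference image. A minimal sketch of the generic idea, assuming per-pixel cosine correlation over a channel axis (the disclosed adaptive correlation layer is more involved), with synthetic feature maps standing in for real network activations:

```python
import numpy as np

def correlation_map(feat_test, feat_ref, eps=1e-8):
    """Per-pixel cosine correlation between two feature maps of shape
    (C, H, W); low correlation suggests a semantic change at that pixel."""
    num = np.sum(feat_test * feat_ref, axis=0)
    den = np.linalg.norm(feat_test, axis=0) * np.linalg.norm(feat_ref, axis=0)
    return num / np.maximum(den, eps)

rng = np.random.default_rng(3)
C, H, W = 16, 6, 6
ref = rng.standard_normal((C, H, W))
test = ref.copy()
test[:, 2:4, 2:4] = rng.standard_normal((C, 2, 2))  # simulate a changed region
cmap = correlation_map(test, ref)
changed = cmap < 0.5                                # threshold the correlation map
print(cmap.shape)  # -> (6, 6)
```

Unchanged pixels correlate perfectly (value 1.0), while the perturbed region drops toward zero, so thresholding the map localizes candidate changes.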
  • Publication number: 20210065354
    Abstract: State-of-the-art image processing techniques, such as background subtraction and Convolutional Neural Network based approaches, fail to support certain datasets when used for change detection. The disclosure herein generally relates to semantic change detection and, more particularly, to a method and system for semantic change detection using a deep neural network feature correlation approach. The system uses an adaptive correlation layer that determines the extent of computation required at pixel level, based on the amount of information at each pixel, and uses this information in further computation for semantic change detection. The determined extent of computation is then used to extract semantic features, which are used to compute one or more correlation maps between at least one feature map of a test image and the corresponding reference image. The semantic changes are then determined from the one or more correlation maps.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Applicant: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Akshaya RAMASWAMY, Balamuralidhar PURUSHOTHAMAN, Ram Prabhakar KATHIRVEL, Venkatesh Babu RADHAKRISHNAN
  • Patent number: 10803551
    Abstract: This disclosure relates generally to a method and a system for frame-stitching-based image construction for an indoor environment. The method constructs an image of a scene by stitching a plurality of key frames identified from a plurality of image frames captured by a mobile imaging device, overcoming multiple challenges posed by the indoor environment to provide clean stitching of the key frames. The image stitching approach combines visual data from the mobile imaging device with inertial data from an Inertial Measurement Unit (IMU) mounted on the device, with a feedback loop for error correction, to generate robust stitching outputs in indoor scenarios.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: October 13, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Rishin Raj, Balamuralidhar Purushothaman
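The stitching entry above fuses visual alignment with an IMU-based prediction and a feedback check. As a toy illustration of that idea (not the patented pipeline), the sketch below estimates inter-frame translation by FFT phase correlation and falls back to a hypothetical IMU-predicted shift when the visual estimate disagrees with it; the gate threshold is an assumption.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation between two frames
    via FFT phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.maximum(np.abs(F), 1e-12)        # normalized cross-power spectrum
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                 # map wrap-around peaks to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

def fuse_with_imu(visual, imu, gate=3):
    """Feedback check: fall back to the IMU-predicted shift when the
    visual estimate disagrees with it by more than `gate` pixels."""
    ok = abs(visual[0] - imu[0]) <= gate and abs(visual[1] - imu[1]) <= gate
    return visual if ok else imu

rng = np.random.default_rng(4)
frame = rng.standard_normal((32, 32))
shifted = np.roll(frame, (2, 5), axis=(0, 1))  # simulate camera motion
est = phase_correlation_shift(shifted, frame)
print(est)  # -> (2, 5)
```

Accumulating such per-frame shifts (with the IMU prior vetoing outliers) is one simple way to place key frames on a common canvas before blending.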
  • Patent number: 10679098
    Abstract: The disclosure herein generally relates to scene change detection and, more particularly, to the use of an Unmanned Vehicle (UV) to inspect a scene and perform scene change detection. When a UV performs visual inspection of an area or an object, various factors, such as environmental conditions and movement of the object and/or the UV, cause images of the area/object captured by the UV to lack clarity, which in turn makes detection of any object a difficult task. The UV disclosed herein uses a multi-scale super-pixel technique for visual change detection to solve these issues. In an embodiment, the UV captures an image, identifies a reference image that matches the captured image, and generates a change map. Multi-scale super-pixel analysis is then performed on this change map to detect changes between the captured image and the reference image.
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: June 9, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Sandeep Nellyam Kunnath, Ashley Varghese, Balamuralidhar Purushothaman
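To give a feel for the multi-scale analysis described above, here is a deliberately simplified NumPy sketch that replaces true super-pixels with regular grid blocks at several scales, scores the change map per block, and fuses the scales by averaging; block sizes and the fusion rule are assumptions for illustration only.

```python
import numpy as np

def block_change_scores(change_map, block):
    """Mean absolute change within non-overlapping blocks of one scale
    (a regular grid stand-in for super-pixels at that scale)."""
    H, W = change_map.shape
    h, w = H // block, W // block
    view = change_map[:h * block, :w * block].reshape(h, block, w, block)
    return np.abs(view).mean(axis=(1, 3))

def multiscale_change(captured, reference, scales=(4, 8, 16)):
    """Fuse per-scale block scores by upsampling each score grid back
    to full resolution and averaging across scales."""
    change = captured.astype(float) - reference.astype(float)
    fused = np.zeros_like(change)
    for s in scales:
        scores = block_change_scores(change, s)
        fused += np.kron(scores, np.ones((s, s)))  # nearest-neighbor upsample
    return fused / len(scales)

rng = np.random.default_rng(5)
reference = rng.uniform(size=(32, 32))
captured = reference.copy()
captured[8:16, 8:16] += 0.8                        # introduce a change
fused = multiscale_change(captured, reference)
print(fused.shape)  # -> (32, 32)
```

Coarse scales suppress pixel-level noise while fine scales keep the boundary tight; the fused score is high inside the altered region and near zero elsewhere.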
  • Publication number: 20190333187
    Abstract: This disclosure relates generally to a method and a system for frame-stitching-based image construction for an indoor environment. The method constructs an image of a scene by stitching a plurality of key frames identified from a plurality of image frames captured by a mobile imaging device, overcoming multiple challenges posed by the indoor environment to provide clean stitching of the key frames. The image stitching approach combines visual data from the mobile imaging device with inertial data from an Inertial Measurement Unit (IMU) mounted on the device, with a feedback loop for error correction, to generate robust stitching outputs in indoor scenarios.
    Type: Application
    Filed: February 4, 2019
    Publication date: October 31, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Akshaya RAMASWAMY, Rishin RAJ, Balamuralidhar PURUSHOTHAMAN
  • Publication number: 20190164009
    Abstract: The disclosure herein generally relates to scene change detection and, more particularly, to the use of an Unmanned Vehicle (UV) to inspect a scene and perform scene change detection. When a UV performs visual inspection of an area or an object, various factors, such as environmental conditions and movement of the object and/or the UV, cause images of the area/object captured by the UV to lack clarity, which in turn makes detection of any object a difficult task. The UV disclosed herein uses a multi-scale super-pixel technique for visual change detection to solve these issues. In an embodiment, the UV captures an image, identifies a reference image that matches the captured image, and generates a change map. Multi-scale super-pixel analysis is then performed on this change map to detect changes between the captured image and the reference image.
    Type: Application
    Filed: February 22, 2018
    Publication date: May 30, 2019
    Applicant: Tata Consultancy Services Limited
    Inventors: Jayavardhana Rama GUBBI LAKSHMINARASIMHA, Akshaya RAMASWAMY, Sandeep Nellyam KUNNATH, Ashley VARGHESE, Balamuralidhar PURUSHOTHAMAN