Patents by Inventor Akshaya Ramaswamy
Akshaya RAMASWAMY has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230215176
Abstract: This relates generally to a method and a system for spatio-temporal polarization video analysis. Spatio-temporal polarization data is analyzed for computer vision applications such as object detection, image classification, image captioning, image reconstruction or inpainting, face recognition, and action recognition. Numerous classical and deep learning methods have been applied to polarimetric data for polarimetric imaging analysis; however, available pre-trained models may not be directly suitable for polarization data, as polarimetric data is more complex. Further, compared to analysis of polarimetric images, a significantly larger number of actions can be detected from polarimetric videos, making video-level analysis more effective. The disclosure presents a spatio-temporal analysis of polarization video.
Type: Application
Filed: December 21, 2022
Publication date: July 6, 2023
Applicant: Tata Consultancy Services Limited
Inventors: Rokkam Krishna Kanth, Akshaya Ramaswamy, Achanna Anil Kumar, Jayavardhana Rama Gubbi Lakshminarasimha, Balamuralidhar Purushothaman
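For context on what "polarization data" means here, the standard preprocessing step is to turn per-frame intensity images captured behind polarizers at 0°, 45°, 90°, and 135° into Stokes parameters. The sketch below is a minimal numpy illustration of that textbook conversion, not code from the patent; the function name and the averaging convention for S0 are my own choices.

```python
import numpy as np

def stokes_from_polarizer_stack(i0, i45, i90, i135):
    """Compute linear Stokes parameters, degree of linear polarization
    (DoLP) and angle of linear polarization (AoLP) from intensity images
    captured at polarizer angles 0/45/90/135 degrees."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=float) for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaged convention)
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)  # 0 = unpolarized, 1 = fully polarized
    aolp = 0.5 * np.arctan2(s2, s1)      # polarization angle in radians
    return s0, s1, s2, dolp, aolp
```

Applied per frame, this yields the multi-channel (S0, DoLP, AoLP) video stack that spatio-temporal models would consume instead of plain RGB.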
-
Patent number: 11657590
Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to extract spatial and temporal information, which in turn affects interpretation of the video data. The disclosure herein generally relates to video analysis, and, more particularly, to a method and system for extracting spatio-temporal information from a video being analyzed. The system uses a neural network architecture with multiple layers to extract spatial and temporal information from the video. A method of training the neural network is presented in which micro-scale information is extracted from a latent representation of the video. This representation is generated using an attention network and is then used to extract spatio-temporal information corresponding to the collected video, which serves multiple video analysis applications such as searching actions in videos, action detection, and localization.
Type: Grant
Filed: March 1, 2021
Date of Patent: May 23, 2023
Assignee: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Karthik Seemakurthy, Balamuralidhar Purushothaman
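The "attention network over a latent representation" idea can be illustrated with plain scaled dot-product attention across the time axis: each per-frame latent vector attends to every other frame, producing frame-to-frame affinities that highlight where an action unfolds. This is a generic numpy sketch of that mechanism under assumed shapes, not the patented architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(latent, wq, wk, wv):
    """Scaled dot-product attention across the time axis of a video's
    latent representation. latent: (T, D) — one D-dim vector per frame.
    wq/wk/wv: (D, D) learned projection matrices (random here)."""
    q, k, v = latent @ wq, latent @ wk, latent @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (T, T) frame-to-frame affinities
    weights = softmax(scores, axis=-1)       # each row is a distribution over frames
    return weights @ v, weights              # attended features, attention map
```

The (T, T) attention map is what downstream tasks like action search or localization could read off to find temporally salient frames.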
-
Patent number: 11631247
Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to capture spatio-temporal representation, which in turn affects interpretation of video data. The disclosure herein generally relates to video analysis, and, more particularly, to a method and system for capturing spatio-temporal representation for video reconstruction and analysis. The method presents different architecture variations built from three main deep network components: 2D convolution units, 3D convolution units, and long short-term memory (LSTM) units. These variations are trained to learn the spatio-temporal representation of the videos in order to generate a pre-trained video analysis module. By weighing the advantages and disadvantages of the different architectural configurations, a novel architecture is designed for video reconstruction.
Type: Grant
Filed: March 10, 2021
Date of Patent: April 18, 2023
Assignee: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Aparna Kanakatte Gurumurthy, Avik Ghose
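The key distinction among the three building blocks is the receptive field: a 2D convolution sees one frame, a 3D convolution spans a short temporal window, and an LSTM carries state across arbitrarily long sequences. A minimal numpy sketch of a single-channel "valid" 3D convolution (written as cross-correlation, as deep learning frameworks do) makes the spatio-temporal window concrete; this is illustrative only, not the patent's architecture.

```python
import numpy as np

def conv3d_valid(video, kernel):
    """'Valid' 3D convolution over a (T, H, W) clip with a (kt, kh, kw)
    kernel: each output voxel aggregates a spatio-temporal neighborhood,
    unlike a 2D convolution, which only sees spatial context within one frame."""
    T, H, W = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[t, y, x] = np.sum(video[t:t+kt, y:y+kh, x:x+kw] * kernel)
    return out
```

A 2x3x3 kernel, for example, mixes two consecutive frames per output voxel, which is exactly the short-range temporal coupling that per-frame 2D units lack.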
-
Publication number: 20220019804
Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to capture spatio-temporal representation, which in turn affects interpretation of video data. The disclosure herein generally relates to video analysis, and, more particularly, to a method and system for capturing spatio-temporal representation for video reconstruction and analysis. The method presents different architecture variations built from three main deep network components: 2D convolution units, 3D convolution units, and long short-term memory (LSTM) units. These variations are trained to learn the spatio-temporal representation of the videos in order to generate a pre-trained video analysis module. By weighing the advantages and disadvantages of the different architectural configurations, a novel architecture is designed for video reconstruction.
Type: Application
Filed: March 10, 2021
Publication date: January 20, 2022
Applicant: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Aparna Kanakatte Gurumurthy, Avik Ghose
-
Publication number: 20210390313
Abstract: State-of-the-art techniques in the domain of video analysis are limited in their capability to extract spatial and temporal information, which in turn affects interpretation of the video data. The disclosure herein generally relates to video analysis, and, more particularly, to a method and system for extracting spatio-temporal information from a video being analyzed. The system uses a neural network architecture with multiple layers to extract spatial and temporal information from the video. A method of training the neural network is presented in which micro-scale information is extracted from a latent representation of the video. This representation is generated using an attention network and is then used to extract spatio-temporal information corresponding to the collected video, which serves multiple video analysis applications such as searching actions in videos, action detection, and localization.
Type: Application
Filed: March 1, 2021
Publication date: December 16, 2021
Applicant: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Karthik Seemakurthy, Balamuralidhar Purushothaman
-
Patent number: 11200657
Abstract: State-of-the-art image processing techniques such as background subtraction and Convolutional Neural Network based approaches, when used for change detection, fail on certain datasets. The disclosure herein generally relates to semantic change detection, and, more particularly, to a method and system for semantic change detection using a deep neural network feature correlation approach. The system uses an adaptive correlation layer, which determines the extent of computation required at pixel level based on the amount of information at each pixel, and uses this in the subsequent computation for semantic change detection. The determined extent of computation is then used to extract semantic features, from which one or more correlation maps between at least one feature map of a test image and the corresponding reference image are computed. The semantic changes are then determined from the one or more correlation maps.
Type: Grant
Filed: August 28, 2020
Date of Patent: December 14, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Ram Prabhakar Kathirvel, Venkatesh Babu Radhakrishnan
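The core "correlation map between feature maps" operation can be sketched as per-pixel cosine similarity between the channel vectors of a test and a reference feature map: locations where the deep features disagree score low and are candidate semantic changes. This is a generic numpy illustration of feature correlation, not the patent's adaptive correlation layer.

```python
import numpy as np

def correlation_map(feat_test, feat_ref, eps=1e-8):
    """Per-pixel cosine similarity between two (C, H, W) feature maps.
    Returns an (H, W) map in [-1, 1]; low values suggest a semantic
    change at that location."""
    num = np.sum(feat_test * feat_ref, axis=0)                 # channel-wise dot product
    denom = (np.linalg.norm(feat_test, axis=0)
             * np.linalg.norm(feat_ref, axis=0))               # product of channel norms
    return num / np.maximum(denom, eps)
```

Thresholding the resulting map (or feeding several such maps from different layers into a decoder) would then yield the change mask.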
-
Publication number: 20210065354
Abstract: State-of-the-art image processing techniques such as background subtraction and Convolutional Neural Network based approaches, when used for change detection, fail on certain datasets. The disclosure herein generally relates to semantic change detection, and, more particularly, to a method and system for semantic change detection using a deep neural network feature correlation approach. The system uses an adaptive correlation layer, which determines the extent of computation required at pixel level based on the amount of information at each pixel, and uses this in the subsequent computation for semantic change detection. The determined extent of computation is then used to extract semantic features, from which one or more correlation maps between at least one feature map of a test image and the corresponding reference image are computed. The semantic changes are then determined from the one or more correlation maps.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Applicant: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Balamuralidhar Purushothaman, Ram Prabhakar Kathirvel, Venkatesh Babu Radhakrishnan
-
Patent number: 10803551
Abstract: This disclosure relates generally to a method and a system for frame-stitching-based image construction for an indoor environment. The method constructs an image of a scene by stitching a plurality of key frames identified from a plurality of image frames captured by a mobile imaging device. It overcomes multiple challenges posed by the indoor environment, effectively providing clean stitching of the key frames to construct the image of the scene. The image stitching approach combines visual data from the mobile imaging device with readings from an inertial sensor of an Inertial Measurement Unit (IMU) mounted on the device, and uses a feedback loop for error correction to generate robust stitching outputs in indoor scenarios.
Type: Grant
Filed: February 4, 2019
Date of Patent: October 13, 2020
Assignee: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Rishin Raj, Balamuralidhar Purushothaman
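One simple way to picture the "visual estimate plus IMU feedback" combination is: estimate inter-frame translation visually (here via classical phase correlation), then gate it against the IMU-predicted motion and fall back to the IMU value when the visual estimate is an outlier. This is my own minimal numpy sketch of that fusion pattern, with hypothetical function names; the patent's actual stitching pipeline is not disclosed at this level.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation of frame_b relative to
    frame_a from the peak of the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = fb * np.conj(fa)
    cross /= np.maximum(np.abs(cross), 1e-12)     # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, frame_a.shape))

def fuse_with_imu(visual_shift, imu_shift, gate=5):
    """Feedback step: reject a visual estimate that disagrees with the
    IMU prediction by more than `gate` pixels, falling back to the IMU
    value; otherwise trust the (usually more precise) visual estimate."""
    return tuple(v if abs(v - i) <= gate else i
                 for v, i in zip(visual_shift, imu_shift))
```

Accumulating the fused shifts frame to frame gives the placement of each key frame on the stitched canvas.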
-
Patent number: 10679098
Abstract: The disclosure herein generally relates to scene change detection, and, more particularly, to the use of an Unmanned Vehicle (UV) to inspect a scene and perform scene change detection. When a UV performs visual inspection of an area or an object, various factors, such as environmental conditions and movement of the object and/or the UV, cause the captured image to lack clarity, which in turn makes detecting any object difficult. The UV disclosed herein uses a multi-scale superpixel technique for visual change detection to solve these issues. In an embodiment, the UV captures an image, identifies a reference image that matches the captured image, and generates a change map. Multi-scale superpixel analysis is then performed on this change map to detect changes between the captured image and the reference image.
Type: Grant
Filed: February 22, 2018
Date of Patent: June 9, 2020
Assignee: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Sandeep Nellyam Kunnath, Ashley Varghese, Balamuralidhar Purushothaman
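The multi-scale idea — pooling a noisy per-pixel change map over regions at several scales and letting the scales vote — can be sketched without a superpixel library by using regular square blocks as a stand-in for superpixels. This simplification is mine, chosen to keep the sketch dependency-free; the patent uses actual superpixels, whose boundaries follow image content.

```python
import numpy as np

def multiscale_change_mask(change_map, scales=(4, 8, 16), threshold=0.3):
    """Aggregate a per-pixel (H, W) change map over square blocks at
    several scales and vote: a pixel is flagged only if its block's mean
    change exceeds `threshold` at a majority of scales. Pooling over
    regions suppresses isolated noisy pixels that a per-pixel threshold
    would flag."""
    H, W = change_map.shape
    votes = np.zeros((H, W), dtype=int)
    for s in scales:
        block_mean = np.zeros_like(change_map)
        for y in range(0, H, s):
            for x in range(0, W, s):
                block_mean[y:y+s, x:x+s] = change_map[y:y+s, x:x+s].mean()
        votes += (block_mean > threshold).astype(int)
    return votes > len(scales) // 2
```

Replacing the square blocks with SLIC-style superpixels at each scale would recover a content-aware version of the same voting scheme.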
-
Publication number: 20190333187
Abstract: This disclosure relates generally to a method and a system for frame-stitching-based image construction for an indoor environment. The method constructs an image of a scene by stitching a plurality of key frames identified from a plurality of image frames captured by a mobile imaging device. It overcomes multiple challenges posed by the indoor environment, effectively providing clean stitching of the key frames to construct the image of the scene. The image stitching approach combines visual data from the mobile imaging device with readings from an inertial sensor of an Inertial Measurement Unit (IMU) mounted on the device, and uses a feedback loop for error correction to generate robust stitching outputs in indoor scenarios.
Type: Application
Filed: February 4, 2019
Publication date: October 31, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Rishin Raj, Balamuralidhar Purushothaman
-
Publication number: 20190164009
Abstract: The disclosure herein generally relates to scene change detection, and, more particularly, to the use of an Unmanned Vehicle (UV) to inspect a scene and perform scene change detection. When a UV performs visual inspection of an area or an object, various factors, such as environmental conditions and movement of the object and/or the UV, cause the captured image to lack clarity, which in turn makes detecting any object difficult. The UV disclosed herein uses a multi-scale superpixel technique for visual change detection to solve these issues. In an embodiment, the UV captures an image, identifies a reference image that matches the captured image, and generates a change map. Multi-scale superpixel analysis is then performed on this change map to detect changes between the captured image and the reference image.
Type: Application
Filed: February 22, 2018
Publication date: May 30, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Akshaya Ramaswamy, Sandeep Nellyam Kunnath, Ashley Varghese, Balamuralidhar Purushothaman