Patents by Inventor Mithun ULIYAR
Mithun ULIYAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240005157
Abstract: Embodiments provide methods and systems for unstructured pruning of a neural network. The method, performed by a neural network pruning system, includes accessing a trained neural network to be pruned. The trained neural network includes one or more neural layers. The method includes computing values of layer parameters for a filter associated with a neural layer based, at least in part, on a pruning criterion. The method further includes computing a tag identifier associated with the filter of the trained neural network based, at least in part, on the corresponding values of the layer parameters of the filter. The method further includes storing the tag identifier and the values of the layer parameters for the filter of the trained neural network in a database.
Type: Application
Filed: September 15, 2021
Publication date: January 4, 2024
Inventors: Krishna A G, Mithun ULIYAR, Ravi SHENOY
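A minimal sketch of how such per-filter statistics and tag identifiers might be computed and stored. The pruning criterion (L1 norm) and the tag scheme (a hash of the filter's statistics) are illustrative assumptions; the filing does not fix either.

```python
import hashlib
import numpy as np

def filter_records(conv_weights: np.ndarray, layer_name: str) -> list[dict]:
    """conv_weights: array of shape (out_channels, in_channels, kH, kW)."""
    records = []
    for idx, filt in enumerate(conv_weights):
        l1_norm = float(np.abs(filt).sum())                  # pruning criterion (assumed: L1 norm)
        payload = f"{layer_name}:{idx}:{l1_norm:.6f}".encode()
        tag_id = hashlib.sha1(payload).hexdigest()[:12]      # tag identifier (assumed scheme)
        records.append({"layer": layer_name, "filter": idx,
                        "l1_norm": l1_norm, "tag_id": tag_id})
    return records

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 3, 3, 3))                  # a toy conv layer
    db = filter_records(weights, "conv1")                    # "database" stand-in: a list of dicts
    print(db[0])
```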
-
Patent number: 11074770
Abstract: The invention provides a system and method of monitoring a vehicle during a commute. The system comprises one or more camera modules in communication with one or more computation devices. A camera module is configured to collect multiple types of data during the commute, including video frames of one or more views and the location, position, direction, orientation, velocity of the vehicle, or a combination thereof. The collected data is sent to a computation device, where it is analyzed to identify events, and the identified events are notified to one or more user devices. In one embodiment, the communication between the computation device and the camera module may be carried out wirelessly via Wi-Fi, where the camera module is configured to act in one or more modes of communication such as an access point (AP) mode, a station (STA) mode, Wi-Fi Direct, or a combination thereof.
Type: Grant
Filed: May 10, 2018
Date of Patent: July 27, 2021
Assignee: Lightmetrics Technologies Pvt. Ltd.
Inventors: Mithun Uliyar, Ravi Shenoy, Soumik Ukil, Krishna A G, Gururaj Putraya, Pushkar Patwardhan
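An illustrative sketch of the kind of payload a camera module might buffer for the computation device, and of the Wi-Fi operating modes the abstract mentions. All field names and the WifiMode enum are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class WifiMode(Enum):
    ACCESS_POINT = "AP"
    STATION = "STA"
    WIFI_DIRECT = "P2P"

@dataclass
class CommuteSample:
    timestamp_ms: int
    frame_id: int                     # index of the captured video frame
    latitude: float
    longitude: float
    heading_deg: float                # direction/orientation of the vehicle
    speed_mps: float

@dataclass
class CameraModule:
    mode: WifiMode = WifiMode.STATION
    buffer: List[CommuteSample] = field(default_factory=list)

    def capture(self, sample: CommuteSample) -> None:
        self.buffer.append(sample)    # later sent to the computation device for analysis

cam = CameraModule(mode=WifiMode.ACCESS_POINT)
cam.capture(CommuteSample(0, 0, 12.97, 77.59, 45.0, 8.3))
print(cam.mode.value, len(cam.buffer))
```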
-
Patent number: 10974728
Abstract: Methods and systems for facilitating driver behavior monitoring and evaluation of driver performance during a trip are provided. The method includes facilitating, by a processing system of an on-board detection device positioned in a vehicle, recording of media data using cameras mounted on the vehicle and multisensory data using sensors positioned in the vehicle. The media data includes a plurality of image frames. The method includes generating metadata based on at least one image frame of the plurality of image frames and the multisensory data, detecting an occurrence of an event based at least on the metadata and a set of configuration parameters, uploading the metadata and the media data of the event to a cloud server based on event upload rules, and facilitating a media status update flag corresponding to the event, the media status update flag representing a completion of uploading of the media data of the event.
Type: Grant
Filed: September 10, 2019
Date of Patent: April 13, 2021
Assignee: Lightmetrics Technologies Pvt. Ltd.
Inventors: Krishna A G, Pushkar Patwardhan, Gururaj Putraya, Ravi Shenoy, Soumik Ukil, Mithun Uliyar
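A hedged sketch of the pipeline described above: metadata derived from frames and sensor data, an event fired when configured thresholds are crossed, and a media status flag set once the upload completes. The threshold names, values, and the upload rule are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Config:
    hard_brake_mps2: float = 3.5      # assumed configuration parameter
    upload_on_event: bool = True      # assumed "event upload rule"

@dataclass
class Event:
    kind: str
    timestamp_ms: int
    media_uploaded: bool = False      # media status update flag

def detect_event(metadata: dict, cfg: Config) -> Optional[Event]:
    # metadata carries values computed from an image frame plus multisensory data
    if metadata.get("decel_mps2", 0.0) >= cfg.hard_brake_mps2:
        return Event(kind="hard_brake", timestamp_ms=metadata["timestamp_ms"])
    return None

def upload(event: Event, metadata: dict, media_path: str, cfg: Config) -> None:
    if cfg.upload_on_event:
        # ... push metadata and media to the cloud server here ...
        event.media_uploaded = True   # flag records completion of the media upload

cfg = Config()
meta = {"timestamp_ms": 1_000, "decel_mps2": 4.2}
ev = detect_event(meta, cfg)
if ev:
    upload(ev, meta, "clip_0001.mp4", cfg)
    print(ev)
```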
-
Publication number: 20200168014
Abstract: The invention provides a system and method of monitoring a vehicle during a commute. The system comprises one or more camera modules in communication with one or more computation devices. A camera module is configured to collect multiple types of data during the commute, including video frames of one or more views and the location, position, direction, orientation, velocity of the vehicle, or a combination thereof. The collected data is sent to a computation device, where it is analyzed to identify events, and the identified events are notified to one or more user devices. In one embodiment, the communication between the computation device and the camera module may be carried out wirelessly via Wi-Fi, where the camera module is configured to act in one or more modes of communication such as an access point (AP) mode, a station (STA) mode, Wi-Fi Direct, or a combination thereof.
Type: Application
Filed: May 10, 2018
Publication date: May 28, 2020
Inventors: Mithun ULIYAR, Ravi SHENOY, Soumik UKIL, Krishna A G, Gururaj PUTRAYA, Pushkar PATWARDHAN
-
Publication number: 20200079387
Abstract: Methods and systems for facilitating driver behavior monitoring and evaluation of driver performance during a trip are provided. The method includes facilitating, by a processing system of an on-board detection device positioned in a vehicle, recording of media data using cameras mounted on the vehicle and multisensory data using sensors positioned in the vehicle. The media data includes a plurality of image frames. The method includes generating metadata based on at least one image frame of the plurality of image frames and the multisensory data, detecting an occurrence of an event based at least on the metadata and a set of configuration parameters, uploading the metadata and the media data of the event to a cloud server based on event upload rules, and facilitating a media status update flag corresponding to the event, the media status update flag representing a completion of uploading of the media data of the event.
Type: Application
Filed: September 10, 2019
Publication date: March 12, 2020
Inventors: Krishna A G, Pushkar PATWARDHAN, Gururaj PUTRAYA, Ravi SHENOY, Soumik UKIL, Mithun ULIYAR
-
Patent number: 10529083
Abstract: A method for estimating the distance of an object from a moving vehicle is provided. The method includes detecting, by a camera module in one or more image frames, an object on a road on which the vehicle is moving. The method includes electronically determining a pair of lane markings associated with the road. The method further includes electronically determining a lane width between the pair of lane markings in an image coordinate of the one or more image frames. The lane width is determined at a location of the object on the road. The method includes electronically determining a real-world distance of the object from the vehicle based at least on the number of pixels corresponding to the lane width in the image coordinate, a pre-defined lane width associated with the road, and at least one camera parameter of the camera module.
Type: Grant
Filed: December 6, 2017
Date of Patent: January 7, 2020
Assignee: Lightmetrics Technologies Pvt. Ltd.
Inventors: Mithun Uliyar, Ravi Shenoy, Soumik Ukil, Krishna A G, Gururaj Putraya, Pushkar Patwardhan
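A minimal sketch of the relation the abstract implies: under a pinhole model, a span of real width W metres at range Z metres covers w_px = f_px * W / Z pixels, so Z = f_px * W / w_px. The 3.5 m lane width and the focal length below are illustrative assumptions, not values from the patent.

```python
def distance_from_lane_width(lane_width_px: float,
                             lane_width_m: float = 3.5,
                             focal_length_px: float = 1200.0) -> float:
    """Estimate range to the road location where the lane width was measured."""
    if lane_width_px <= 0:
        raise ValueError("lane width in pixels must be positive")
    return focal_length_px * lane_width_m / lane_width_px

# Example: the lane spans 60 pixels at the image row of a detected vehicle.
print(f"{distance_from_lane_width(60.0):.1f} m")   # -> 70.0 m
```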
-
Patent number: 10491810
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes accessing image capture parameters of a plurality of component cameras at a first time instant, where the image capture parameters for a respective component camera of the component cameras are determined based on a scene appearing in a field of view (FOV) of the respective component camera. At a second time instant, a change in appearance of one or more objects of the scene from a FOV of a first component camera to a FOV of a second component camera is determined. Upon determining the change of the appearance of the one or more objects at the second time instant, image capture parameters of the second component camera are set based on image capture parameters of the first component camera accessed at the first time instant.
Type: Grant
Filed: February 28, 2017
Date of Patent: November 26, 2019
Assignee: Nokia Technologies Oy
Inventors: Krishna Govindarao, Mithun Uliyar, Ravi Shenoy
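A sketch of the parameter hand-off described above: when a tracked object moves from camera A's field of view to camera B's, B inherits the capture parameters A was using at the earlier time instant. The concrete parameter set (exposure time, ISO, white balance) is an assumption for illustration.

```python
from dataclasses import dataclass, replace

@dataclass
class CaptureParams:
    exposure_ms: float
    iso: int
    white_balance_k: int

class ComponentCamera:
    def __init__(self, name: str, params: CaptureParams):
        self.name = name
        self.params = params

def handle_object_transition(src: ComponentCamera, dst: ComponentCamera) -> None:
    """Called when an object leaves src's FOV and appears in dst's FOV."""
    dst.params = replace(src.params)   # copy the parameters accessed at the first time instant

cam_a = ComponentCamera("front", CaptureParams(exposure_ms=8.0, iso=200, white_balance_k=5200))
cam_b = ComponentCamera("side", CaptureParams(exposure_ms=16.0, iso=800, white_balance_k=3200))
handle_object_transition(cam_a, cam_b)
print(cam_b.params)    # the side camera now uses the front camera's settings
```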
-
Patent number: 10275669
Abstract: Advanced driver assistance systems (ADAS) and methods for detecting objects such as traffic lights and speed signs in an automotive environment are disclosed. In an embodiment, the ADAS includes a camera system for capturing image frames of at least a part of the surroundings of a vehicle, a memory comprising image processing instructions, and a processing system for detecting one or more objects in a coarse detection followed by a fine detection. The coarse detection includes detecting the presence of the one or more objects in non-consecutive image frames, where the non-consecutive image frames are determined by skipping one or more of the image frames. Upon detection of the presence of the one or more objects in the coarse detection, fine detection of the one or more objects is performed in a predetermined number of neighboring image frames of a frame in which the presence of the objects is detected in the coarse detection.
Type: Grant
Filed: September 8, 2016
Date of Patent: April 30, 2019
Assignee: Lightmetrics Technologies Pvt. Ltd.
Inventors: Mithun Uliyar, Gururaj Putraya, Soumik Ukil, Krishna A G, Ravi Shenoy, Pushkar Patwardhan
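A sketch of the coarse-then-fine schedule: run the detector only on every Nth frame, and once something is found, re-run it on a window of neighbouring frames around the hit. The detector is a stand-in callable, and the skip of 5 and window of 2 are assumed values.

```python
from typing import Callable, List, Sequence

def coarse_fine_detect(frames: Sequence, detect: Callable[[object], bool],
                       skip: int = 5, window: int = 2) -> List[int]:
    hits: List[int] = []
    for i in range(0, len(frames), skip):          # coarse pass on non-consecutive frames
        if detect(frames[i]):
            lo, hi = max(0, i - window), min(len(frames), i + window + 1)
            for j in range(lo, hi):                # fine pass on neighbouring frames
                if detect(frames[j]):
                    hits.append(j)
    return sorted(set(hits))

# Toy usage: "frames" are integers and the detector fires on multiples of 7.
frames = list(range(40))
print(coarse_fine_detect(frames, lambda f: f % 7 == 0))
```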
-
Patent number: 10257395
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating receipt of a plurality of VR contents and a plurality of video contents associated with an event captured by a plurality of VR cameras and a plurality of user camera devices, respectively. Each of the plurality of VR cameras comprises a plurality of camera modules with respective fields of view (FOVs) associated with the event. The FOVs of the plurality of user camera devices are linked with respective FOVs of camera modules of the plurality of VR cameras based on at least a threshold degree of similarity between the FOV of the user camera device and the FOV of the camera module. The processor creates an event VR content by combining the plurality of VR contents and the plurality of video contents based on the linking of the FOVs.
Type: Grant
Filed: October 23, 2017
Date of Patent: April 9, 2019
Assignee: Nokia Technologies Oy
Inventors: Mithun Uliyar, Soumik Ukil
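A sketch of the FOV-linking step: each user camera device is linked to the VR camera module whose field of view is most similar, provided the similarity clears a threshold. The similarity measure (overlap of viewing-direction ranges) and the 0.6 threshold are assumptions for illustration.

```python
from typing import Dict, Optional, Tuple

FOV = Tuple[float, float]   # (start_deg, end_deg) viewing-direction range

def overlap_ratio(a: FOV, b: FOV) -> float:
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def link_fovs(user_fovs: Dict[str, FOV], vr_module_fovs: Dict[str, FOV],
              threshold: float = 0.6) -> Dict[str, Optional[str]]:
    links: Dict[str, Optional[str]] = {}
    for user, ufov in user_fovs.items():
        best = max(vr_module_fovs, key=lambda m: overlap_ratio(ufov, vr_module_fovs[m]))
        links[user] = best if overlap_ratio(ufov, vr_module_fovs[best]) >= threshold else None
    return links

print(link_fovs({"phone_1": (10.0, 80.0)},
                {"module_A": (0.0, 90.0), "module_B": (90.0, 180.0)}))
```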
-
Publication number: 20190052800
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes accessing image capture parameters of a plurality of component cameras at a first time instant, where the image capture parameters for a respective component camera of the component cameras are determined based on a scene appearing in a field of view (FOV) of the respective component camera. At a second time instant, a change in appearance of one or more objects of the scene from a FOV of a first component camera to a FOV of a second component camera is determined. Upon determining the change of the appearance of the one or more objects at the second time instant, image capture parameters of the second component camera are set based on image capture parameters of the first component camera accessed at the first time instant.
Type: Application
Filed: February 28, 2017
Publication date: February 14, 2019
Inventors: Krishna Govindarao, Mithun Uliyar, Ravi Shenoy
-
Patent number: 10176558
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes determining the presence of at least one moving object in a scene based on two or more burst images corresponding to the scene captured by a first camera. One or more portions of the scene associated with the at least one moving object are identified, and information related to the one or more portions is provided to a second camera. An image of the scene captured by the second camera is received, where a pixel-level shutter disposed in front of an image sensor of the second camera is programmed to periodically open and close, throughout a duration of said image capture, for pixels of the image sensor corresponding to the one or more portions of the scene. A deblurred image corresponding to the scene is generated based on the image.
Type: Grant
Filed: December 1, 2015
Date of Patent: January 8, 2019
Assignee: Nokia Technologies Oy
Inventors: Mithun Uliyar, Krishna Govindarao, Gururaj Putraya, Basavaraja S V
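A sketch of only the first step described above: locating moving regions from a burst of frames with simple frame differencing, which yields the per-pixel mask that would drive the coded (periodically opening and closing) shutter on the second camera. The differencing threshold is an assumed value, and the deblurring step itself is not shown.

```python
import numpy as np

def moving_region_mask(burst: np.ndarray, threshold: float = 25.0) -> np.ndarray:
    """burst: (num_frames, H, W) grayscale frames. Returns a boolean (H, W) mask."""
    diffs = np.abs(np.diff(burst.astype(np.float32), axis=0))   # frame-to-frame differences
    return diffs.max(axis=0) > threshold                        # pixels that moved in any pair

rng = np.random.default_rng(1)
frames = np.tile(rng.integers(0, 255, (1, 64, 64)), (4, 1, 1)).astype(np.float32)
frames[2, 20:30, 20:30] += 80.0      # a small patch "moves" in frame 2
mask = moving_region_mask(frames)
print(mask.sum(), "pixels flagged for coded exposure")
```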
-
Patent number: 10065652
Abstract: A method and system for driver monitoring by fusing contextual data with event data to determine the context as the cause of an event is provided. The method includes detecting an event from event data received from one or more inertial sensors associated with a vehicle of a driver or with the driver. The method also includes determining a context from an audio or video feed received from one or more devices associated with the vehicle or the driver. Further, the method includes fusing the event with the context to determine the context as a cause of the event. In addition, the method includes assigning a score to a driver performance metric based at least on the context and the event.
Type: Grant
Filed: March 26, 2016
Date of Patent: September 4, 2018
Assignee: Lightmetrics Technologies Pvt. Ltd.
Inventors: Ravi Shenoy, Krishna A G, Gururaj Putraya, Soumik Ukil, Mithun Uliyar, Pushkar Patwardhan
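A sketch of the fuse-and-score idea: an inertial event (for example, hard braking) is paired with the context inferred from the audio or video feed around the same time, and the driver-performance penalty is reduced when the context explains the event. The event types, contexts, and scoring numbers are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class InertialEvent:
    kind: str            # e.g. "hard_brake"
    timestamp_ms: int

def fuse(event: InertialEvent, context: str) -> float:
    """Return a penalty applied to the driver performance metric (lower is better)."""
    base_penalty = {"hard_brake": 5.0, "sharp_turn": 3.0}.get(event.kind, 1.0)
    # If the video/audio context shows the event was forced (cut-in, pedestrian),
    # treat the context as the cause and soften the penalty.
    exonerating = {"vehicle_cut_in", "pedestrian_crossing", "traffic_light_change"}
    return base_penalty * (0.2 if context in exonerating else 1.0)

print(fuse(InertialEvent("hard_brake", 42_000), "vehicle_cut_in"))   # -> 1.0
print(fuse(InertialEvent("hard_brake", 43_000), "clear_road"))       # -> 5.0
```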
-
Patent number: 10003743
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating receipt of a plurality of light-field images of a scene captured in a burst capture by a light-field camera. The method includes determining shifts between images of the plurality of light-field images, where the shifts between the images of the plurality of light-field images are associated with shake of the light-field camera while capturing the plurality of light-field images. The method includes generating a plurality of depth maps for the plurality of light-field images, and generating a set of view images of the scene based on the plurality of light-field images and the plurality of depth maps. The method includes generating a refocus image by combining the set of view images based at least on the shifts between the images of the plurality of light-field images.
Type: Grant
Filed: December 16, 2014
Date of Patent: June 19, 2018
Assignee: Nokia Technologies Oy
Inventors: Basavaraja S V, Mithun Uliyar, Gururaj Gopal Putraya
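A very small sketch of the final combining step: view images are shifted to compensate the estimated camera shake and then averaged to synthesise the refocused image. Integer-pixel shifts via np.roll stand in for proper registration; real light-field refocusing would also apply depth-dependent, sub-pixel shifts from the depth maps, which is omitted here.

```python
import numpy as np

def refocus(view_images: list, shake_shifts: list) -> np.ndarray:
    aligned = [np.roll(img, (-dy, -dx), axis=(0, 1))          # undo the estimated shake
               for img, (dy, dx) in zip(view_images, shake_shifts)]
    return np.mean(aligned, axis=0)                            # combine the aligned views

rng = np.random.default_rng(2)
base = rng.random((32, 32))
views = [np.roll(base, (dy, dx), axis=(0, 1)) for dy, dx in [(0, 0), (1, 2), (2, 1)]]
out = refocus(views, [(0, 0), (1, 2), (2, 1)])
print(np.allclose(out, base))   # True: shake removed, views combine cleanly
```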
-
Publication number: 20180165822
Abstract: A method for estimating the distance of an object from a moving vehicle is provided. The method includes detecting, by a camera module in one or more image frames, an object on a road on which the vehicle is moving. The method includes electronically determining a pair of lane markings associated with the road. The method further includes electronically determining a lane width between the pair of lane markings in an image coordinate of the one or more image frames. The lane width is determined at a location of the object on the road. The method includes electronically determining a real-world distance of the object from the vehicle based at least on the number of pixels corresponding to the lane width in the image coordinate, a pre-defined lane width associated with the road, and at least one camera parameter of the camera module.
Type: Application
Filed: December 6, 2017
Publication date: June 14, 2018
Inventors: Mithun ULIYAR, Ravi SHENOY, Soumik UKIL, Krishna A G, Gururaj PUTRAYA, Pushkar PATWARDHAN
-
Publication number: 20180124291
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating receipt of a plurality of VR contents and a plurality of video contents associated with an event captured by a plurality of VR cameras and a plurality of user camera devices, respectively. Each of the plurality of VR cameras comprises a plurality of camera modules with respective fields of view (FOVs) associated with the event. The FOVs of the plurality of user camera devices are linked with respective FOVs of camera modules of the plurality of VR cameras based on at least a threshold degree of similarity between the FOV of the user camera device and the FOV of the camera module. The processor creates an event VR content by combining the plurality of VR contents and the plurality of video contents based on the linking of the FOVs.
Type: Application
Filed: October 23, 2017
Publication date: May 3, 2018
Inventors: Mithun Uliyar, Soumik Ukil
-
Patent number: 9936131
Abstract: A method, apparatus, and computer program product for generating a seamless and error-free panorama image using a selective set of views from the multi-view image of each of several captured light field (LF) images. The method identifies, for each captured LF image, the view and corresponding image that has the same or a closely located center of projection with the views of neighboring LF images. Image registration and warping techniques are applied across the images, and the parallax error, which indicates the closeness of their centers of projection, is calculated. The view from each captured LF image with minimal parallax error is selected and stitched together with the other views identified as having minimal parallax error.
Type: Grant
Filed: July 16, 2013
Date of Patent: April 3, 2018
Assignee: Nokia Technologies Oy
Inventors: Gururaj Gopal Putraya, Basavaraja S V, Mithun Uliyar
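A sketch of the view-selection step only: for each captured light-field image, pick the view whose comparison against the neighbouring image leaves the smallest residual, then stitch the chosen views. The error measure here (mean absolute difference) is an assumption standing in for the patent's registration and warping pipeline.

```python
import numpy as np

def select_min_parallax_view(views: list, neighbour: np.ndarray) -> int:
    """Return the index of the view closest to the neighbouring image."""
    errors = [float(np.mean(np.abs(v - neighbour))) for v in views]
    return int(np.argmin(errors))

rng = np.random.default_rng(3)
neighbour = rng.random((16, 16))
views = [neighbour + rng.normal(0, s, (16, 16)) for s in (0.30, 0.05, 0.20)]
print(select_min_parallax_view(views, neighbour))   # -> 1, the least-displaced view
```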
-
Patent number: 9888364
Abstract: A method and apparatus for localizing a smartphone in a moving vehicle is disclosed. A plurality of lateral position samples of the smartphone, based at least in part on a plurality of location coordinates, lane marking information, and accuracy factors of the plurality of location coordinates, are collected at a plurality of time instances. The lateral position of the smartphone in the moving vehicle is determined based at least on the plurality of lateral position samples collected at the plurality of time instances. One or more correlations between a vertical acceleration pattern associated with at least one front wheel and a vertical acceleration pattern associated with at least one rear wheel of the moving vehicle passing over one or more road discontinuities, as computed from an accelerometer sensor present in the smartphone, are determined. The longitudinal position of the smartphone in the moving vehicle is determined based at least on the one or more correlations.
Type: Grant
Filed: June 14, 2017
Date of Patent: February 6, 2018
Assignee: Lightmetrics Technologies Pvt. Ltd.
Inventors: Mithun Uliyar, Ravi Shenoy, Soumik Ukil, Krishna A G, Gururaj Putraya, Pushkar Patwardhan
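A sketch of the longitudinal cue: the same road discontinuity excites the front and then the rear wheels with a delay of roughly wheelbase / speed, and the phone's vertical acceleration carries both responses. Cross-correlating the two (here, pre-separated) patterns recovers that delay. How the patterns are separated from a single accelerometer stream, and how the lag maps to a position inside the cabin, are not shown; the signal shapes and numbers are illustrative.

```python
import numpy as np

def estimate_lag(front: np.ndarray, rear: np.ndarray) -> int:
    """Lag (in samples) at which the rear-wheel pattern best matches the front one."""
    corr = np.correlate(rear - rear.mean(), front - front.mean(), mode="full")
    return int(np.argmax(corr) - (len(front) - 1))

fs = 100                                          # 100 Hz accelerometer (assumed)
t = np.arange(0, 2.0, 1 / fs)
bump = np.exp(-((t - 0.5) ** 2) / 0.001)          # front wheels hit the bump at 0.5 s
front = bump
rear = np.roll(bump, 25)                          # rear wheels hit it 0.25 s later
lag_s = estimate_lag(front, rear) / fs
speed = 12.0                                      # m/s, assumed known from GPS
print(f"lag = {lag_s:.2f} s, implied wheelbase ~ {lag_s * speed:.1f} m")
```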
-
Publication number: 20180001899
Abstract: A method and system for driver monitoring by fusing contextual data with event data to determine the context as the cause of an event is provided. The method includes detecting an event from event data received from one or more inertial sensors associated with a vehicle of a driver or with the driver. The method also includes determining a context from an audio or video feed received from one or more devices associated with the vehicle or the driver. Further, the method includes fusing the event with the context to determine the context as a cause of the event. In addition, the method includes assigning a score to a driver performance metric based at least on the context and the event.
Type: Application
Filed: March 26, 2016
Publication date: January 4, 2018
Inventors: Ravi SHENOY, Krishna A G, Gururaj PUTRAYA, Soumik UKIL, Mithun ULIYAR, Pushkar PATWARDHAN
-
Publication number: 20170366945
Abstract: A method and apparatus for localizing a smartphone in a moving vehicle is disclosed. A plurality of lateral position samples of the smartphone, based at least in part on a plurality of location coordinates, lane marking information, and accuracy factors of the plurality of location coordinates, are collected at a plurality of time instances. The lateral position of the smartphone in the moving vehicle is determined based at least on the plurality of lateral position samples collected at the plurality of time instances. One or more correlations between a vertical acceleration pattern associated with at least one front wheel and a vertical acceleration pattern associated with at least one rear wheel of the moving vehicle passing over one or more road discontinuities, as computed from an accelerometer sensor present in the smartphone, are determined. The longitudinal position of the smartphone in the moving vehicle is determined based at least on the one or more correlations.
Type: Application
Filed: June 14, 2017
Publication date: December 21, 2017
Inventors: Mithun ULIYAR, Ravi SHENOY, Soumik UKIL, Krishna A G, Gururaj PUTRAYA, Pushkar PATWARDHAN
-
Publication number: 20170351932
Abstract: In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating simultaneous capture of a first image by a first camera and a second image by a second camera associated with a device. One or more distortion parameters associated with a distortion in the second image may be determined based on a comparison of the second image with at least one template image associated with the second image. A distortion-free first image is generated based on the one or more distortion parameters associated with the second image by performing one of: applying the one or more distortion parameters to the first image; or estimating one or more distortion parameters associated with the first image based on the one or more distortion parameters associated with the second image, and applying the one or more distortion parameters associated with the first image to the first image.
Type: Application
Filed: November 23, 2015
Publication date: December 7, 2017
Inventors: Mithun Uliyar, Gururaj Putraya, Basavaraja S V
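A sketch of the distortion-parameter step: an affine transform is fitted by least squares between points of the template and the matching points detected in the second image, and the same parameters are then applied to points of the first image. Modelling the distortion as affine and working on point correspondences rather than full image warps are simplifications, not the patent's stated method.

```python
import numpy as np

def fit_affine(template_pts: np.ndarray, image_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine A such that image_pts ~= [template_pts, 1] @ A.T"""
    ones = np.ones((len(template_pts), 1))
    X = np.hstack([template_pts, ones])                  # (N, 3)
    A, *_ = np.linalg.lstsq(X, image_pts, rcond=None)    # (3, 2)
    return A.T                                            # (2, 3) distortion parameters

def apply_affine(A: np.ndarray, pts: np.ndarray) -> np.ndarray:
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ A.T

template = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
second = template * 1.05 + np.array([3.0, -2.0])         # second image: scaled + shifted template
A = fit_affine(template, second)                         # parameters of the second image's distortion
first_pts = np.array([[10.0, 20.0], [50.0, 50.0]])
print(apply_affine(A, first_pts))                        # the same parameters applied to the first image
```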