Patents by Inventor Karthikeyan Shanmugavadivelu
Karthikeyan Shanmugavadivelu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10728529
Abstract: Systems and methods for synchronizing frame captures for cameras with different fields of capture are described. An example device includes a first camera and second camera. The first camera includes a first camera sensor and a first rolling shutter. The first camera is configured to prevent scanning pixels from a first row to a row n of the first camera sensor and configured to begin sequentially scanning pixels of the row n of the first camera sensor. The second camera includes a second camera sensor and a second rolling shutter. The second camera is configured to begin sequentially scanning pixels of a first row of the second camera sensor concurrently with beginning to sequentially scan pixels of the row n of the first camera sensor. The first row of the second camera sensor corresponds to a row within a predefined number of rows after the first camera sensor's row n.
Type: Grant
Filed: September 6, 2018
Date of Patent: July 28, 2020
Assignee: QUALCOMM Incorporated
Inventors: Narayana Karthik Ravirala, Jiafu Luo, Shizhong Liu, Karthikeyan Shanmugavadivelu
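The claimed synchronization can be sketched as a row-scan schedule: the first sensor skips rows 0 through n-1, then both sensors begin scanning concurrently, one row per tick. A minimal sketch in Python; the function name and the simplified one-row-per-tick timing model are assumptions, not details from the patent:

```python
def schedule_scans(n, rows_first, rows_second):
    """Pair concurrently scanned rows: the first sensor skips rows 0..n-1
    and starts at row n; the second sensor starts at its row 0 at the same
    instant. Each sensor scans one row per tick (simplified timing)."""
    events = []
    for tick, row_first in enumerate(range(n, rows_first)):
        # The second sensor may run out of rows before the first finishes.
        row_second = tick if tick < rows_second else None
        events.append((row_first, row_second))
    return events
```

For example, `schedule_scans(2, 5, 3)` pairs row 2 of the first sensor with row 0 of the second, row 3 with row 1, and row 4 with row 2, so the two fields of capture are scanned over the same interval.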
-
Publication number: 20200084432
Abstract: Systems and methods for synchronizing frame captures for cameras with different fields of capture are described. An example device includes a first camera and second camera. The first camera includes a first camera sensor and a first rolling shutter. The first camera is configured to prevent scanning pixels from a first row to a row n of the first camera sensor and configured to begin sequentially scanning pixels of the row n of the first camera sensor. The second camera includes a second camera sensor and a second rolling shutter. The second camera is configured to begin sequentially scanning pixels of a first row of the second camera sensor concurrently with beginning to sequentially scan pixels of the row n of the first camera sensor. The first row of the second camera sensor corresponds to a row within a predefined number of rows after the first camera sensor's row n.
Type: Application
Filed: September 6, 2018
Publication date: March 12, 2020
Inventors: Narayana Karthik Ravirala, Jiafu Luo, Shizhong Liu, Karthikeyan Shanmugavadivelu
-
Publication number: 20200053255
Abstract: Aspects of the present disclosure relate to systems and methods for temporal alignment of image frames. An example device may include a processor coupled to a memory. The processor may be configured to receive a first stream of image frames from a first camera for an imaging application being executed by the device and receive a second stream of image frames from a second camera for the imaging application being executed by the device. The processor also may be configured to, for a first image frame of the first stream, associate the first image frame with at least one image frame of the second stream using a type of association. The type of association may be based on the imaging application. The processor further may be configured to provide the associated first image frame and the at least one image frame of the second stream for processing in executing the imaging application.
Type: Application
Filed: August 8, 2018
Publication date: February 13, 2020
Inventors: Cullum Baldwin, Karthikeyan Shanmugavadivelu
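One concrete association type consistent with this abstract is nearest-timestamp matching: each frame of the first stream is paired with the second-stream frame whose capture time is closest. A minimal sketch; the application describes choosing the association type based on the imaging application, and nearest-timestamp matching is only one illustrative choice:

```python
def associate_nearest(ts_first, ts_second):
    """For each first-stream frame timestamp, return the second-stream
    timestamp closest to it. Timestamps are in the same clock domain."""
    return [min(ts_second, key=lambda t: abs(t - t1)) for t1 in ts_first]
```

With 30 fps streams whose clocks drift slightly, `associate_nearest([0, 33, 66], [1, 30, 70])` associates each first-stream frame with the nearest second-stream frame.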
-
Publication number: 20190347822
Abstract: Aspects of the present disclosure relate to systems and methods for determining or calibrating for a spatial relationship for multiple cameras. An example device may include one or more processors. The example device may also include a memory coupled to the one or more processors and including instructions that, when executed by the one or more processors, cause the device to receive a plurality of corresponding images of scenes from multiple cameras during normal operation, accumulate a plurality of keypoints in the scenes from the plurality of corresponding images, measure a disparity for each keypoint of the plurality of keypoints, exclude one or more keypoints with a disparity greater than a threshold, and determine, from the plurality of remaining keypoints, a yaw for a camera of the multiple cameras.
Type: Application
Filed: May 11, 2018
Publication date: November 14, 2019
Inventors: James Nash, Narayana Karthik Ravirala, Karthikeyan Shanmugavadivelu
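The accumulate-filter-estimate flow above can be sketched as follows. The disparity thresholding follows the claim; the final atan2 pinhole-model mapping from mean disparity to yaw is an illustrative assumption, not the method disclosed in the application:

```python
import math

def estimate_yaw(disparities_px, focal_px, threshold_px):
    """Exclude accumulated keypoints whose measured disparity exceeds the
    threshold (likely close-range points or mismatches), then estimate a
    yaw angle from the remaining keypoints via a pinhole approximation."""
    kept = [d for d in disparities_px if d <= threshold_px]
    if not kept:
        return None  # not enough usable keypoints accumulated yet
    mean_disparity = sum(kept) / len(kept)
    # Assumed mapping: residual disparity of distant keypoints is dominated
    # by relative yaw between the cameras.
    return math.atan2(mean_disparity, focal_px)
```

Because the images are gathered "during normal operation", the filtering step matters: without the threshold, nearby objects with large legitimate disparity would bias the yaw estimate.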
-
Patent number: 10277888
Abstract: Systems and methods of triggering an event based on meeting a certain depth criterion in an image. One innovation of a method includes identifying at least one object in a field of view of an imaging device, the imaging device configured to capture at least one image of the field of view, determining a threshold depth level, measuring a depth of the at least one object within the field of view with respect to the imaging device, comparing the measured depth of the at least one object to the threshold depth level, and capturing an image of the object when the depth of the object within the field of view exceeds the threshold depth level.
Type: Grant
Filed: January 16, 2015
Date of Patent: April 30, 2019
Assignee: QUALCOMM Incorporated
Inventors: Shandon Campbell, Kalin Mitkov Atanassov, Karthikeyan Shanmugavadivelu, Stephen Michael Verrall
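The claimed compare-and-capture condition is simple to express directly. A minimal sketch over a sequence of depth measurements; the sampled-sequence framing and return shape are assumptions made for illustration:

```python
def capture_events(depth_samples, threshold_depth):
    """Return the indices of depth measurements at which the object's depth
    with respect to the imaging device exceeds the threshold depth level,
    i.e. the instants at which the claimed capture would be triggered."""
    return [i for i, depth in enumerate(depth_samples) if depth > threshold_depth]
```

For example, with samples `[0.5, 1.2, 0.9, 2.0]` and a threshold of `1.0`, captures are triggered at samples 1 and 3.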
-
Patent number: 10194089
Abstract: Devices and methods for providing seamless preview images for multi-camera devices having two or more asymmetric cameras. A multi-camera device may include two asymmetric cameras disposed to image a target scene. The multi-camera device further includes a processor coupled to a memory component and a display, the processor configured to retrieve an image generated by a first camera from the memory component, retrieve an image generated by a second camera from the memory component, receive input corresponding to a preview zoom level, retrieve spatial transform information and photometric transform information from memory, modify at least one image received from the first and second cameras by the spatial transform and the photometric transform, and provide on the display a preview image comprising at least a portion of the at least one modified image and a portion of either the first image or the second image based on the preview zoom level.
Type: Grant
Filed: February 8, 2016
Date of Patent: January 29, 2019
Assignee: QUALCOMM Incorporated
Inventors: James Wilson Nash, Kalin Mitkov Atanassov, Sergiu Radu Goma, Narayana Karthik Sadanandam Ravirala, Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu
-
Patent number: 10097747
Abstract: A method of performing an image autofocus operation using multiple cameras includes performing, at an image processor, a first autofocus operation on a first region of interest in a scene captured by a first camera and determining a second region of interest in the scene captured by a second camera. The second region of interest is determined based on the first region of interest. The method further includes performing a second autofocus operation on the second region of interest. The method also includes fusing a first image of the scene captured by the first camera with a second image of the scene captured by the second camera to generate a fused image. The first image is based on the first autofocus operation and the second image is based on the second autofocus operation.
Type: Grant
Filed: October 21, 2015
Date of Patent: October 9, 2018
Assignee: QUALCOMM Incorporated
Inventors: Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu, Narayana Karthik Sadanandam Ravirala
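The step of deriving the second region of interest from the first can be sketched with a simple frame-to-frame mapping. The linear scale-and-offset model below stands in for the calibrated geometry between the two cameras and is an assumption for illustration, not the mapping the patent specifies:

```python
def map_roi(roi, scale, offset):
    """Map a region of interest (x, y, w, h) from the first camera's frame
    into the second camera's frame, so the second autofocus operation runs
    on the corresponding scene region. scale/offset stand in for the
    calibrated relationship between the two cameras (assumed)."""
    x, y, w, h = roi
    sx, sy = scale
    ox, oy = offset
    return (x * sx + ox, y * sy + oy, w * sx, h * sy)
```

Running the second autofocus on the mapped region keeps both captures focused on the same subject before the two images are fused.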
-
Patent number: 10061182
Abstract: Methods and systems for autofocus triggering are disclosed herein. In one example, a system may include a lens, a memory component configured to store lens parameters of the lens and regions of focus corresponding to the lens parameters, and a processor coupled to the memory and the lens. The processor may be configured to focus the lens on a target object at a first instance of time, receive information indicative of distances from an imaging device to the target object over a period of time, obtain lens parameters of the lens, determine a region of focus, and trigger the lens to re-focus on the target object if the distance to the target object indicates the target object is outside of the region of focus and the distance to the target object is unchanged for a designated time period.
Type: Grant
Filed: June 9, 2017
Date of Patent: August 28, 2018
Assignee: QUALCOMM Incorporated
Inventors: Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu, Shizhong Liu, Ying Chen Lou, Hung-Hsin Wu, Narayana Karthik Sadanandam Ravirala, Adarsh Abhay Golikeri
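The re-focus trigger combines two conditions: the target has left the region of focus, and its distance has settled (so the system does not chase a still-moving subject). A minimal sketch; the sample count, tolerance, and `(near, far)` representation of the region of focus are assumptions for illustration:

```python
def should_refocus(distances, region_of_focus, hold_samples=3, tol=0.01):
    """Trigger a re-focus when the latest measured distance to the target
    is outside the region of focus AND the distance has been unchanged
    (within tol) for the last hold_samples readings (newest last)."""
    if len(distances) < hold_samples:
        return False
    recent = distances[-hold_samples:]
    near, far = region_of_focus
    latest = recent[-1]
    outside = latest < near or latest > far
    unchanged = max(recent) - min(recent) <= tol
    return outside and unchanged
```

Gating on an unchanged distance avoids repeatedly re-driving the lens while the subject is still approaching or receding.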
-
Patent number: 9973722
Abstract: Systems, devices, and methods of displaying and/or recording multiple pictures in a picture (PIP) on the same display of a digital display device are disclosed. The PIPs can show objects from the main field of view of the display device, such as a front camera lens, as well as objects from a different field of view, such as a back camera lens. The PIPs can further track the objects that are being displayed.
Type: Grant
Filed: February 20, 2014
Date of Patent: May 15, 2018
Assignee: QUALCOMM Incorporated
Inventors: Fan Deng, Karthikeyan Shanmugavadivelu, Wan Shun Vincent Ma, Narayana Karthik Sadanandam Ravirala, Lei Ma, Pengjun Huang, Shizhong Liu
-
Patent number: 9800798
Abstract: Systems, methods, and devices for power optimization in imaging devices having dual cameras are disclosed herein. In one aspect, a method for power optimization for a dual camera imaging device is disclosed. The method includes determining a zoom factor selection, determining whether the zoom factor selection falls within a first zoom factor range, a second zoom factor range, or a third zoom factor range, and sending a series of frames of an image captured by a first sensor or a series of frames of an image captured by a second sensor or both to a camera application based on the determined zoom factor selection.
Type: Grant
Filed: February 13, 2015
Date of Patent: October 24, 2017
Assignee: QUALCOMM Incorporated
Inventors: Narayana Karthik Ravirala, Shizhong Liu, Karthikeyan Shanmugavadivelu, Venkata Ravi Kiran Dayana
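The three-range decision saves power because only the sensors actually needed for the current zoom level stream frames. A minimal sketch of the range-to-sensors mapping; the claim defines the three ranges, while the boundary values below are illustrative assumptions:

```python
def active_sensors(zoom_factor, first_only_below=1.5, second_only_above=2.0):
    """Decide which sensor(s) send frames to the camera application for a
    given zoom factor selection: first sensor only, second sensor only,
    or both in the transition range between the two boundaries."""
    if zoom_factor < first_only_below:
        return ("first",)
    if zoom_factor > second_only_above:
        return ("second",)
    return ("first", "second")  # transition range: both sensors stream
```

At low zoom the second (e.g. telephoto) sensor can be powered down entirely; it only needs to stream during the transition range so the handover is seamless.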
-
Publication number: 20170277018
Abstract: Methods and systems for autofocus triggering are disclosed herein. In one example, a system may include a lens, a memory component configured to store lens parameters of the lens and regions of focus corresponding to the lens parameters, and a processor coupled to the memory and the lens. The processor may be configured to focus the lens on a target object at a first instance of time, receive information indicative of distances from an imaging device to the target object over a period of time, obtain lens parameters of the lens, determine a region of focus, and trigger the lens to re-focus on the target object if the distance to the target object indicates the target object is outside of the region of focus and the distance to the target object is unchanged for a designated time period.
Type: Application
Filed: June 9, 2017
Publication date: September 28, 2017
Inventors: Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu, Shizhong Liu, Ying Chen Lou, Hung-Hsin Wu, Narayana Karthik Sadanandam Ravirala, Adarsh Abhay Golikeri
-
Publication number: 20170230585
Abstract: Devices and methods for providing seamless preview images for multi-camera devices having two or more asymmetric cameras. A multi-camera device may include two asymmetric cameras disposed to image a target scene. The multi-camera device further includes a processor coupled to a memory component and a display, the processor configured to retrieve an image generated by a first camera from the memory component, retrieve an image generated by a second camera from the memory component, receive input corresponding to a preview zoom level, retrieve spatial transform information and photometric transform information from memory, modify at least one image received from the first and second cameras by the spatial transform and the photometric transform, and provide on the display a preview image comprising at least a portion of the at least one modified image and a portion of either the first image or the second image based on the preview zoom level.
Type: Application
Filed: February 8, 2016
Publication date: August 10, 2017
Inventors: James Wilson Nash, Kalin Mitkov Atanassov, Sergiu Radu Goma, Narayana Karthik Sadanandam Ravirala, Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu
-
Patent number: 9703175
Abstract: Methods and systems for autofocus triggering are disclosed herein. In one example, a system may include a lens, a memory component configured to store lens parameters of the lens and regions of focus corresponding to the lens parameters, and a processor coupled to the memory and the lens. The processor may be configured to focus the lens on a target object at a first instance of time, receive information indicative of distances from an imaging device to the target object over a period of time, obtain lens parameters of the lens, determine a region of focus, and trigger the lens to re-focus on the target object if the distance to the target object indicates the target object is outside of the region of focus and the distance to the target object is unchanged for a designated time period.
Type: Grant
Filed: July 2, 2015
Date of Patent: July 11, 2017
Assignee: QUALCOMM Incorporated
Inventors: Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu, Shizhong Liu, Ying Chen Lou, Hung-Hsin Wu, Narayana Karthik Sadanandam Ravirala, Adarsh Abhay Golikeri
-
Publication number: 20170118393
Abstract: A method of performing an image autofocus operation using multiple cameras includes performing, at an image processor, a first autofocus operation on a first region of interest in a scene captured by a first camera and determining a second region of interest in the scene captured by a second camera. The second region of interest is determined based on the first region of interest. The method further includes performing a second autofocus operation on the second region of interest. The method also includes fusing a first image of the scene captured by the first camera with a second image of the scene captured by the second camera to generate a fused image. The first image is based on the first autofocus operation and the second image is based on the second autofocus operation.
Type: Application
Filed: October 21, 2015
Publication date: April 27, 2017
Inventors: Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu, Narayana Karthik Sadanandam Ravirala
-
Publication number: 20170003573
Abstract: Methods and systems for autofocus triggering are disclosed herein. In one example, a system may include a lens, a memory component configured to store lens parameters of the lens and regions of focus corresponding to the lens parameters, and a processor coupled to the memory and the lens. The processor may be configured to focus the lens on a target object at a first instance of time, receive information indicative of distances from an imaging device to the target object over a period of time, obtain lens parameters of the lens, determine a region of focus, and trigger the lens to re-focus on the target object if the distance to the target object indicates the target object is outside of the region of focus and the distance to the target object is unchanged for a designated time period.
Type: Application
Filed: July 2, 2015
Publication date: January 5, 2017
Inventors: Venkata Ravi Kiran Dayana, Karthikeyan Shanmugavadivelu, Shizhong Liu, Ying Chen Lou, Hung-Hsin Wu, Narayana Karthik Sadanandam Ravirala, Adarsh Abhay Golikeri
-
Publication number: 20160295097
Abstract: Exemplary embodiments are directed to dual camera autofocusing in digital cameras with error detection. An auxiliary lens and image sensor share a housing with a main lens and image sensor, which together act as a range finder to determine the distance to a scene. Scene distance is used in combination with contrast-detection autofocus to achieve maximum sharpness in the image. Errors in distance determination may be found and corrected using a comparison of data collected from the auxiliary lens and main lens.
Type: Application
Filed: March 31, 2015
Publication date: October 6, 2016
Inventors: Karthikeyan Shanmugavadivelu, Hung-Hsin Wu, Shizhong Liu, Narayana Karthik Sadanandam Ravirala, Venkata Ravi Kiran Dayana, Adarsh Abhay Golikeri
-
Publication number: 20160241793
Abstract: Systems, methods, and devices for power optimization in imaging devices having dual cameras are disclosed herein. In one aspect, a method for power optimization for a dual camera imaging device is disclosed. The method includes determining a zoom factor selection, determining whether the zoom factor selection falls within a first zoom factor range, a second zoom factor range, or a third zoom factor range, and sending a series of frames of an image captured by a first sensor or a series of frames of an image captured by a second sensor or both to a camera application based on the determined zoom factor selection.
Type: Application
Filed: February 13, 2015
Publication date: August 18, 2016
Inventors: Narayana Karthik Ravirala, Shizhong Liu, Karthikeyan Shanmugavadivelu, Venkata Ravi Kiran Dayana
-
Publication number: 20160227100
Abstract: Systems and methods for rapid automatic focus, automatic white balance, and automatic exposure control are disclosed. To reduce the time it takes to automatically focus, balance spectra, and set exposure period, a dual camera uses an auxiliary camera and auxiliary image processing module in addition to the main camera and main image processing module. The auxiliary camera may capture lower resolution and lower frame rate imagery that is processed by an auxiliary image processing module to determine focus, white balance, and exposure periods for the main camera and main image processing module. By initiating convergence for automatic focus (AF), automatic white balance (AWB) and automatic exposure control (AEC) before receiving a command to capture imagery, and processing lower resolution and lower frame rate imagery, AF, AWB, and AEC convergence delays are reduced for both standard and high dynamic range image capture.
Type: Application
Filed: January 29, 2015
Publication date: August 4, 2016
Inventors: Shizhong Liu, Hung-Hsin Wu, Venkata Ravi Kiran Dayana, Narayana Karthik Sadanandam Ravirala, Yonggui Mao, Karthikeyan Shanmugavadivelu
-
Publication number: 20160212410
Abstract: Systems and methods of triggering an event based on meeting a certain depth criterion in an image. One innovation of a method includes identifying at least one object in a field of view of an imaging device, the imaging device configured to capture at least one image of the field of view, determining a threshold depth level, measuring a depth of the at least one object within the field of view with respect to the imaging device, comparing the measured depth of the at least one object to the threshold depth level, and capturing an image of the object when the depth of the object within the field of view exceeds the threshold depth level.
Type: Application
Filed: January 16, 2015
Publication date: July 21, 2016
Inventors: Shandon Campbell, Kalin Mitkov Atanassov, Karthikeyan Shanmugavadivelu, Stephen Michael Verrall
-
Patent number: 9392163
Abstract: Described is a method and apparatus for unattended image capture that can identify subjects or faces within an image captured with an image sensor. The methods and apparatus may then score the image based, at least in part, on scores of detected subjects or faces in the image, scores of facial expressions, a focus score, exposure score, stability score, or audio score. If the score of the image is above a threshold, a snapshot image may be stored to a data store on the imaging device. In some aspects, if the score of the image is below a threshold, one or more audible prompts may be generated indicating that subjects should change positions, smile or remain more still during the image capture process.
Type: Grant
Filed: April 8, 2015
Date of Patent: July 12, 2016
Assignee: QUALCOMM Incorporated
Inventors: Hung-Hsin Wu, Karthikeyan Shanmugavadivelu, Shizhong Liu, Wan Shun Vincent Ma, Adarsh Abhay Golikeri
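The score-then-act loop above can be sketched with a weighted combination of the listed components. The patent only says the score is based at least in part on these components; the weighted average, the weight values, and the capture threshold below are illustrative assumptions:

```python
def frame_score(face_scores, focus, exposure, stability,
                weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine per-face scores with focus, exposure, and stability scores
    (each assumed to lie in [0, 1]) into a single frame score."""
    w_face, w_focus, w_exp, w_stab = weights
    face_avg = sum(face_scores) / len(face_scores) if face_scores else 0.0
    return (w_face * face_avg + w_focus * focus +
            w_exp * exposure + w_stab * stability)

def unattended_action(score, threshold=0.7):
    """Above the threshold, store a snapshot; below it, emit an audible
    prompt asking subjects to reposition, smile, or hold still."""
    return "capture" if score >= threshold else "prompt"
```

A frame with two well-scored faces and good focus, exposure, and stability clears the threshold and is stored; a low-scoring frame instead produces a prompt so the next attempt can score higher.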