Patents by Inventor Kartik Venkataraman

Kartik Venkataraman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230169665
    Abstract: Systems and methods for calibrating an imaging device. A processing circuit identifies first timestamps of one or more events in one or more first image frames of a first imaging device, and identifies one or more corresponding events in one or more second image frames of a second imaging device. Second timestamps may be determined for the corresponding events. A calibration value may be calculated based on a difference between the first and second timestamps. In one embodiment, the processing circuit also identifies a first plurality of events in a first sequence of image frames transmitted by a first imaging device, and selects a second plurality of events in a second sequence of image frames transmitted by a second imaging device that minimizes a discrepancy between the first plurality of events and the second plurality of events. A calibration value is computed based on the timestamps of the matched events.
    Type: Application
    Filed: December 1, 2021
    Publication date: June 1, 2023
    Inventors: Vage TAAMAZYAN, Agastya KALRA, Achuta KADAMBI, Kartik VENKATARAMAN
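The calibration idea in the abstract above can be illustrated with a minimal sketch (not the patented method; the function name is hypothetical, and event matching is assumed to have been done upstream): given pairwise-matched event timestamps from two imaging devices, the temporal offset between their clocks is the mean difference.

```python
# Illustrative sketch: estimate a temporal calibration offset between two
# cameras from the timestamps of matched events.
def temporal_calibration_offset(first_timestamps, second_timestamps):
    """Return the mean timestamp difference (seconds) between matched events."""
    if len(first_timestamps) != len(second_timestamps):
        raise ValueError("timestamp lists must be matched pairwise")
    diffs = [t2 - t1 for t1, t2 in zip(first_timestamps, second_timestamps)]
    return sum(diffs) / len(diffs)

# Example: the second camera's clock runs about 50 ms behind the first.
offset = temporal_calibration_offset([0.00, 1.00, 2.00], [0.05, 1.05, 2.05])
```

Averaging over many matched events suppresses per-event timing noise, which is why the abstract matches a plurality of events rather than a single pair.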
  • Publication number: 20230152087
    Abstract: Systems and methods for estimating depth from projected texture using camera arrays are described. A camera array includes a conventional camera, at least one two-dimensional array of cameras in which the conventional camera has a higher resolution than the cameras in the array, and an illumination system configured to illuminate a scene with a projected texture. An image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture, capture a set of images of the scene illuminated with the projected texture, and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 18, 2023
    Applicant: Adeia Imaging LLC
    Inventors: Kartik Venkataraman, Jacques Duparré
  • Publication number: 20230154143
    Abstract: A polarized event camera system includes: an event camera having a field of view centered around an optical axis, the event camera including an image sensor including a plurality of pixels, each pixel of the event camera operating independently and asynchronously and being configured to generate change events based on changes in intensity of light received by the pixel; and a rotatable linear polarizer aligned with the optical axis of the event camera, the rotatable linear polarizer having a polarization axis, the polarization axis of the rotatable linear polarizer being rotatable about the optical axis of the event camera.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Inventors: Vage TAAMAZYAN, Agastya KALRA, Achuta KADAMBI, Kartik VENKATARAMAN
  • Patent number: 11615546
    Abstract: Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different from the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the generated target image using the processing system configured by the image processing application.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: March 28, 2023
    Assignee: Adeia Imaging LLC
    Inventor: Kartik Venkataraman
  • Publication number: 20230084807
    Abstract: Aspects of embodiments of the present disclosure relate to systems and methods for performing three-dimensional reconstruction (or depth reconstruction) and for generating segmentation masks using data captured by one or more event cameras.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 16, 2023
    Inventors: Vage TAAMAZYAN, Agastya KALRA, Achuta KADAMBI, Kartik VENKATARAMAN
  • Publication number: 20230071384
    Abstract: A method of tracking a pose of an object includes determining an initial pose of the object at a first position, receiving position data and velocity data corresponding to movement of the object to a second position by a moving device, determining an expected pose of the object at the second position based on the position and velocity data and the initial pose, receiving second image data corresponding to the object at the second position from a camera, and determining a refined pose of the object at the second position based on the second image data and the expected pose.
    Type: Application
    Filed: September 9, 2021
    Publication date: March 9, 2023
    Inventors: Rishav Agarwal, Achuta Kadambi, Agastya Kalra, Kartik Venkataraman
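The prediction step of the pose-tracking method above can be sketched under a constant-velocity assumption (a simplification for illustration; the function name is hypothetical, rotation handling is omitted, and the image-based refinement step described in the abstract is not shown):

```python
# Hypothetical sketch of the pose-prediction step: propagate an initial
# translation forward using measured velocity over the elapsed time.
def predict_position(initial_position, velocity, dt):
    """Predict the translation after dt seconds (constant-velocity model)."""
    return tuple(p + v * dt for p, v in zip(initial_position, velocity))

# An object at the origin moving at (0.1, 0.0, 0.2) m/s for 2 seconds.
expected = predict_position((0.0, 0.0, 0.0), (0.1, 0.0, 0.2), dt=2.0)
```

The expected pose produced this way serves only as an initialization; the abstract's refinement step corrects it against second image data from the camera.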
  • Patent number: 11580667
    Abstract: A method for characterizing a pose estimation system includes: receiving, from a pose estimation system, first poses of an arrangement of objects in a first scene; receiving, from the pose estimation system, second poses of the arrangement of objects in a second scene, the second scene being a rigid transformation of the arrangement of objects of the first scene with respect to the pose estimation system; computing a coarse scene transformation between the first scene and the second scene; matching corresponding poses between the first poses and the second poses; computing a refined scene transformation between the first scene and the second scene based on the coarse scene transformation, the first poses, and the second poses; transforming the first poses based on the refined scene transformation to compute transformed first poses; and computing an average rotation error and an average translation error of the pose estimation system based on differences between the transformed first poses and the second poses.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 14, 2023
    Assignee: Intrinsic Innovation LLC
    Inventors: Agastya Kalra, Achuta Kadambi, Kartik Venkataraman
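The final step of the characterization method above, averaging rotation and translation errors over matched pose pairs, can be sketched as follows (an illustrative implementation, not the patented one; function and variable names are hypothetical, and each pose is taken as a rotation matrix plus translation vector):

```python
import numpy as np

def average_pose_errors(poses_a, poses_b):
    """Each pose is (R, t): a 3x3 rotation matrix and a 3-vector translation.
    Returns (mean rotation angle in radians, mean translation distance)."""
    rot_errs, trans_errs = [], []
    for (Ra, ta), (Rb, tb) in zip(poses_a, poses_b):
        R_rel = Ra.T @ Rb                      # relative rotation between the pair
        cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        rot_errs.append(np.arccos(cos_theta))  # geodesic rotation angle
        trans_errs.append(np.linalg.norm(ta - tb))
    return float(np.mean(rot_errs)), float(np.mean(trans_errs))

# Identical rotations, translations 5 units apart: rotation error 0, translation error 5.
poses_a = [(np.eye(3), np.zeros(3))]
poses_b = [(np.eye(3), np.array([3.0, 4.0, 0.0]))]
rot_err, trans_err = average_pose_errors(poses_a, poses_b)
```

The geodesic angle via the trace formula is a standard measure of rotation error; the clip guards against floating-point values slightly outside [-1, 1].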
  • Publication number: 20230041560
    Abstract: A data capture stage includes a frame at least partially surrounding a target object, a rotation device within the frame and configured to selectively rotate the target object, a plurality of cameras coupled to the frame and configured to capture images of the target object from different angles, a sensor coupled to the frame and configured to sense mapping data corresponding to the target object, and an augmentation data generator configured to control a rotation of the rotation device, to control operations of the plurality of cameras and the sensor, and to generate training data based on the images and the mapping data.
    Type: Application
    Filed: August 3, 2021
    Publication date: February 9, 2023
    Inventors: Agastya Kalra, Rishav Agarwal, Achuta Kadambi, Kartik Venkataraman, Anton Boykov
  • Patent number: 11562498
    Abstract: Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: January 24, 2023
    Assignee: Adeia Imaging LLC
    Inventors: Ankit Jain, Priyam Chatterjee, Kartik Venkataraman
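The compositing step described in the abstract above can be sketched in a few lines (a minimal illustration of the idea, not the patented pipeline; the threshold and function name are assumptions): each pixel takes its depth from the raw map where confidence is high, and from the regularized secondary map otherwise.

```python
import numpy as np

# Sketch of confidence-driven depth compositing: select per-pixel between a
# raw depth map and a secondary (regularized) depth map.
def composite_depth(raw_depth, secondary_depth, confidence, threshold=0.5):
    return np.where(confidence >= threshold, raw_depth, secondary_depth)

raw = np.array([[1.0, 9.0], [2.0, 8.0]])        # noisy but sharp estimates
secondary = np.array([[1.5, 3.0], [2.5, 3.5]])  # smoother regularized estimates
conf = np.array([[0.9, 0.1], [0.8, 0.2]])       # per-pixel confidence
depth = composite_depth(raw, secondary, conf)
# High-confidence pixels keep raw depths; low-confidence pixels fall back.
```

This hybrid approach keeps fine detail where the raw estimate is trustworthy while suppressing outliers where it is not.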
  • Publication number: 20230007223
    Abstract: Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, a baseline distance between cameras in the near-field sub-array is less than a baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination light source for use in computing depth maps.
    Type: Application
    Filed: June 17, 2022
    Publication date: January 5, 2023
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Dan Lelescu, Jacques Duparré
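Why the baseline distinction in the abstract above matters follows from standard stereo geometry (this is textbook background, not the patent's specific method): depth z = f * B / d, where f is focal length in pixels, B the baseline, and d the disparity. For the same smallest resolvable disparity, a longer baseline resolves depth at greater range.

```python
# Standard disparity-to-depth relation: z = f * B / d.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# Same 20-pixel disparity, two baselines: the wide baseline sees 10x farther.
near = depth_from_disparity(focal_px=1000.0, baseline_m=0.01, disparity_px=20.0)
far = depth_from_disparity(focal_px=1000.0, baseline_m=0.10, disparity_px=20.0)
```

Hence the short-baseline near-field sub-array and long-baseline far-field sub-array each operate where their disparity resolution translates into accurate depth.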
  • Publication number: 20230007161
    Abstract: According to one embodiment of the present disclosure, an imaging system includes: an image sensor including a plurality of subpixels grouped into a plurality of pixels; a polarization system including: a rotatable linear polarizer; and a polarizer mask including a plurality of polarizer filters, the polarizer filters being aligned with corresponding ones of the subpixels, the subpixels of a pixel of the plurality of pixels being located behind polarizer filters at different angles of linear polarization; and imaging optics configured to focus light from a scene onto the image sensor.
    Type: Application
    Filed: July 1, 2021
    Publication date: January 5, 2023
    Inventors: Vage TAAMAZYAN, Achuta KADAMBI, Kartik VENKATARAMAN
  • Publication number: 20220410381
    Abstract: A method for controlling a robotic system includes: capturing, by an imaging system, one or more images of a scene; computing, by a processing circuit including a processor and memory, one or more instance segmentation masks based on the one or more images, the one or more instance segmentation masks detecting one or more objects in the scene; computing, by the processing circuit, one or more pickability scores for the one or more objects; selecting, by the processing circuit, an object among the one or more objects based on the one or more pickability scores; computing, by the processing circuit, an object picking plan for the selected object; and outputting, by the processing circuit, the object picking plan to a controller configured to control an end effector of a robotic arm to pick the selected object.
    Type: Application
    Filed: June 29, 2021
    Publication date: December 29, 2022
    Inventors: Guy Michael STOPPI, Agastya KALRA, Kartik VENKATARAMAN, Achuta KADAMBI
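The object-selection step in the abstract above reduces to choosing the detected object with the best pickability score (a trivial sketch for illustration; the function name and score values are hypothetical, and computing the scores themselves is the substantive part of the method):

```python
# Hypothetical sketch of the selection step: given per-object pickability
# scores, pick the object with the highest score for the picking plan.
def select_object(pickability_scores):
    """pickability_scores: dict mapping object id -> score. Returns the best id."""
    return max(pickability_scores, key=pickability_scores.get)

best = select_object({"bolt": 0.32, "washer": 0.81, "nut": 0.55})
```

The selected object's identifier would then feed the picking-plan computation and, ultimately, the robotic arm controller.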
  • Publication number: 20220414829
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and using a microprocessor to determine a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 29, 2022
    Applicant: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
  • Publication number: 20220414928
    Abstract: A system for collecting data for training a computer vision model for shape estimation includes: an imaging system configured to capture one or more images; and a processing system including a processor and memory storing instructions that, when executed by the processor, cause the processor to: receive one or more input images from the imaging system; estimate a pose of an object depicted in the one or more images; render a shape estimate from a 3-D model of the object posed in accordance with the pose of the object; and generate a data point of a training dataset, the data point including one or more images based on the one or more input images and a label corresponding to the one or more images, the label including the shape estimate.
    Type: Application
    Filed: June 25, 2021
    Publication date: December 29, 2022
    Inventors: Kartik VENKATARAMAN, Agastya KALRA, Achuta KADAMBI, Ramesh RASKAR
  • Publication number: 20220405506
    Abstract: Systems and methods for picking an object from a plurality of objects are disclosed. An image of a scene containing the plurality of objects is obtained, and a segmentation map is generated for the objects in the scene. The shapes of the objects are determined based on the segmentation map. An end effector is adjusted in response to determining the shapes of the objects. Adjusting the end effector includes shaping the end effector according to at least one of the shapes of the objects. The plurality of objects is approached in response to the shaping of the end effector, and one of the plurality of objects is picked with the end effector.
    Type: Application
    Filed: June 22, 2021
    Publication date: December 22, 2022
    Inventors: Vage TAAMAZYAN, Kartik VENKATARAMAN, Agastya KALRA, Achuta KADAMBI
  • Patent number: 11525906
    Abstract: A multi-modal sensor system includes: an underlying sensor system; a polarization camera system configured to capture polarization raw frames corresponding to a plurality of different polarization states; and a processing system including a processor and memory, the processing system being configured to control the underlying sensor system and the polarization camera system, the memory storing instructions that, when executed by the processor, cause the processor to: control the underlying sensor system to perform sensing on a scene and the polarization camera system to capture a plurality of polarization raw frames of the scene; extract first tensors in polarization representation spaces based on the plurality of polarization raw frames; and compute a characterization output based on an output of the underlying sensor system and the first tensors in polarization representation spaces.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: December 13, 2022
    Assignee: Intrinsic Innovation LLC
    Inventors: Achuta Kadambi, Ramesh Raskar, Kartik Venkataraman, Supreeth Krishna Rao, Agastya Kalra
  • Publication number: 20220385848
    Abstract: Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
    Type: Application
    Filed: August 5, 2022
    Publication date: December 1, 2022
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Amandeep S. Jabbi, Robert H. Mullis, Jacques Duparré, Shane Ching-Feng Hu
  • Publication number: 20220375125
    Abstract: A method for estimating a pose of an object includes: receiving, by a processor, an observed image depicting the object from a viewpoint; computing, by the processor, an instance segmentation map identifying a class of the object depicted in the observed image; loading, by the processor, a 3-D model corresponding to the class of the object; computing, by the processor, a rendered image of the 3-D model in accordance with an initial pose estimate of the object and the viewpoint of the observed image; computing, by the processor, a plurality of dense image-to-object correspondences between the observed image of the object and the 3-D model based on the observed image and the rendered image; and computing, by the processor, the pose of the object based on the dense image-to-object correspondences.
    Type: Application
    Filed: May 7, 2021
    Publication date: November 24, 2022
    Inventors: Vage TAAMAZYAN, Guy Michael STOPPI, Bradley Craig Anderson BROWN, Agastya KALRA, Achuta KADAMBI, Kartik VENKATARAMAN
  • Patent number: 11486698
    Abstract: Systems and methods for estimating depth from projected texture using camera arrays are described. A camera array includes a conventional camera, at least one two-dimensional array of cameras in which the conventional camera has a higher resolution than the cameras in the array, and an illumination system configured to illuminate a scene with a projected texture. An image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture, capture a set of images of the scene illuminated with the projected texture, and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
    Type: Grant
    Filed: September 7, 2020
    Date of Patent: November 1, 2022
    Inventors: Kartik Venkataraman, Jacques Duparré
  • Publication number: 20220343537
    Abstract: A method for estimating a pose of a deformable object includes: receiving, by a processor, a plurality of images depicting the deformable object from multiple viewpoints; computing, by the processor, one or more object-level correspondences and a class of the deformable object depicted in the images; loading, by the processor, a 3-D model corresponding to the class of the deformable object; aligning, by the processor, the 3-D model to the deformable object depicted in the plurality of images to compute a six-degree of freedom (6-DoF) pose of the object; and outputting, by the processor, the 3-D model and the 6-DoF pose of the object.
    Type: Application
    Filed: April 15, 2021
    Publication date: October 27, 2022
    Inventors: Vage TAAMAZYAN, Agastya KALRA, Kartik VENKATARAMAN, Achuta KADAMBI