Patents by Inventor Gerard Medioni

Gerard Medioni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11195318
    Abstract: A non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to automatically generate a 3D avatar of a living being, including automatically: causing one or more sensors to generate 3D data indicative of the three-dimensional shape and appearance of at least a portion of the living being; and generating a virtual character based on the 3D data that can be animated and controlled.
    Type: Grant
    Filed: April 23, 2015
    Date of Patent: December 7, 2021
    Assignee: University of Southern California
    Inventors: Evan Suma, Gerard Medioni, Mark Bolas, Ari Y. Shapiro, Wei-Wen Feng, Ruizhe Wang
  • Patent number: 11100661
    Abstract: A method for depth mapping includes receiving optical radiation reflected from multiple points on an object and processing the received optical radiation to generate depth data including multiple candidate depth coordinates for each of a plurality of pixels and respective measures of confidence associated with the candidate depth coordinates. One of the candidate depth coordinates is selected at each of the plurality of the pixels responsively to the respective measures of confidence. A depth map of the object is output, including the selected one of the candidate depth coordinates at each of the plurality of the pixels.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: August 24, 2021
    Assignee: Apple Inc.
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
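The abstract above describes keeping several candidate depths per pixel, each with a confidence score, and selecting one per pixel by confidence. A minimal, illustrative sketch of that selection step (not the patented implementation; the candidate values and confidences below are made-up example data):

```python
# Illustrative sketch: per-pixel selection of one depth candidate by
# maximum confidence, as the abstract describes. Example data is invented.

def select_depths(candidates, confidences):
    """For each pixel, pick the candidate depth with the highest confidence.

    candidates[i]  -- list of candidate depth values for pixel i
    confidences[i] -- matching list of confidence scores for pixel i
    """
    depth_map = []
    for cands, confs in zip(candidates, confidences):
        best = max(range(len(cands)), key=lambda k: confs[k])
        depth_map.append(cands[best])
    return depth_map

# Two pixels, each with three candidate depths (in meters).
candidates = [[1.2, 1.5, 0.9], [2.0, 2.1, 2.4]]
confidences = [[0.3, 0.9, 0.1], [0.5, 0.2, 0.8]]
print(select_depths(candidates, confidences))  # [1.5, 2.4]
```

Keeping multiple hypotheses and deferring the choice to a confidence-weighted selection is what lets such a pipeline tolerate ambiguous measurements at individual pixels.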
  • Publication number: 20200219277
    Abstract: A method for depth mapping includes receiving optical radiation reflected from multiple points on an object and processing the received optical radiation to generate depth data including multiple candidate depth coordinates for each of a plurality of pixels and respective measures of confidence associated with the candidate depth coordinates. One of the candidate depth coordinates is selected at each of the plurality of the pixels responsively to the respective measures of confidence. A depth map of the object is output, including the selected one of the candidate depth coordinates at each of the plurality of the pixels.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
  • Patent number: 10636155
    Abstract: A method for depth mapping includes acquiring first depth data with respect to an object using a first depth mapping technique and providing first candidate depth coordinates for a plurality of pixels, and acquiring second depth data with respect to the object using a second depth mapping technique, different from the first depth mapping technique, and providing second candidate depth coordinates for the plurality of pixels. A weighted voting process is applied to the first and second depth data in order to select one of the candidate depth coordinates at each pixel. A depth map of the object is output, including the selected one of the candidate depth coordinates at each pixel.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: April 28, 2020
    Assignee: Apple Inc.
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
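The weighted-voting idea in the abstract above can be sketched for two depth sources per pixel. Everything here is an assumed simplification for illustration, not the patented algorithm: the tolerance, the weights, and the reinforce-or-pick rule are all invented.

```python
# Hedged sketch of weighted voting between two depth mapping techniques:
# each proposes a per-pixel depth with a weight; agreeing candidates
# reinforce each other, otherwise the heavier vote wins. Illustrative only.

def weighted_vote(depth_a, weight_a, depth_b, weight_b, tol=0.05):
    """Fuse two per-pixel depth hypotheses by weighted voting.

    If the candidates agree within `tol`, blend them by weight;
    otherwise keep the candidate carrying more weight.
    """
    fused = []
    for da, wa, db, wb in zip(depth_a, weight_a, depth_b, weight_b):
        if abs(da - db) <= tol:
            fused.append((wa * da + wb * db) / (wa + wb))
        else:
            fused.append(da if wa >= wb else db)
    return fused

depth_a  = [1.00, 3.00]   # e.g. estimates from the first technique
weight_a = [0.9, 0.2]
depth_b  = [1.04, 2.50]   # e.g. estimates from the second technique
weight_b = [0.6, 0.7]
print(weighted_vote(depth_a, weight_a, depth_b, weight_b))
```

In the first pixel the two sources agree and are blended; in the second they disagree and the higher-weight source prevails.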
  • Publication number: 20190087969
    Abstract: A method for depth mapping includes acquiring first depth data with respect to an object using a first depth mapping technique and providing first candidate depth coordinates for a plurality of pixels, and acquiring second depth data with respect to the object using a second depth mapping technique, different from the first depth mapping technique, and providing second candidate depth coordinates for the plurality of pixels. A weighted voting process is applied to the first and second depth data in order to select one of the candidate depth coordinates at each pixel. A depth map of the object is output, including the selected one of the candidate depth coordinates at each pixel.
    Type: Application
    Filed: November 5, 2018
    Publication date: March 21, 2019
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
  • Patent number: 10152801
    Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: December 11, 2018
    Assignee: Apple Inc.
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
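The abstract above combines pattern-based (structured-light) depth with stereoscopic depth. One simple way such a combination could work, sketched under assumptions (the prefer-pattern-fall-back-to-stereo merge rule and all values are invented for illustration):

```python
# Toy sketch of combining two depth maps: prefer the pattern-based value
# where the projector produced a reading, fall back to stereo elsewhere.
# This merge policy is an assumption, not the patented method.

def combine_depth(pattern_depth, stereo_depth, invalid=0.0):
    """Merge per-pixel depths; `invalid` marks pixels a technique missed."""
    return [p if p != invalid else s
            for p, s in zip(pattern_depth, stereo_depth)]

pattern_depth = [1.2, 0.0, 0.8, 0.0]   # 0.0 = no structured-light return
stereo_depth  = [1.3, 2.1, 0.9, 1.7]
print(combine_depth(pattern_depth, stereo_depth))  # [1.2, 2.1, 0.8, 1.7]
```

The appeal of fusing the two modalities is complementarity: structured light fails on dark or distant surfaces where stereo may still match texture, and vice versa.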
  • Patent number: 9582889
    Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
    Type: Grant
    Filed: July 28, 2010
    Date of Patent: February 28, 2017
    Assignee: Apple Inc.
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
  • Publication number: 20170011524
    Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
    Type: Application
    Filed: September 21, 2016
    Publication date: January 12, 2017
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
  • Patent number: 9483703
    Abstract: A product may receive each image in a stream of video images of a scene, and, before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point.
    Type: Grant
    Filed: May 14, 2014
    Date of Patent: November 1, 2016
    Assignee: University of Southern California
    Inventors: Gerard Medioni, Zhuoliang Kang
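The matching step the abstract above outlines, pairing image feature points with model feature points that "appear likely to correspond", can be sketched as a nearest-neighbour descriptor match with a distinctiveness (ratio) test. The descriptors and threshold below are illustrative assumptions, not the patented procedure:

```python
# Simplified sketch: match each image feature to its nearest 3D-model
# feature in descriptor space, keeping only unambiguous matches.
# Descriptors and the ratio threshold are invented for illustration.

import math

def match_features(image_desc, model_desc, ratio=0.8):
    """Return {image_idx: model_idx} via nearest neighbour + ratio test."""
    matches = {}
    for i, d in enumerate(image_desc):
        dists = sorted(
            (math.dist(d, m), j) for j, m in enumerate(model_desc))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:   # accept only distinctive matches
            matches[i] = best[1]
    return matches

image_desc = [(1.0, 0.0), (0.0, 1.0)]            # 2D descriptors (toy)
model_desc = [(0.9, 0.1), (0.1, 0.9), (0.5, 0.5)]
print(match_features(image_desc, model_desc))    # {0: 0, 1: 1}
```

Once enough 2D-to-3D correspondences are established, the camera's position and orientation can be estimated from them frame by frame, which is the real-time goal the abstract describes.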
  • Publication number: 20150356767
    Abstract: A non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to automatically generate a 3D avatar of a living being, including automatically: causing one or more sensors to generate 3D data indicative of the three-dimensional shape and appearance of at least a portion of the living being; and generating a virtual character based on the 3D data that can be animated and controlled.
    Type: Application
    Filed: April 23, 2015
    Publication date: December 10, 2015
    Applicant: University of Southern California
    Inventors: Evan Suma, Gerard Medioni, Mark Bolas, Ari Y. Shapiro, Wei-Wen Feng, Ruizhe Wang
  • Publication number: 20140340489
    Abstract: A product may receive each image in a stream of video images of a scene, and, before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point.
    Type: Application
    Filed: May 14, 2014
    Publication date: November 20, 2014
    Applicant: University of Southern California
    Inventors: Gerard Medioni, Zhuoliang Kang
  • Publication number: 20130131985
    Abstract: The system comprises a wearable, electronic image acquisition and processing system (or visual enhancement system) that guides visually impaired individuals through their environment, providing information about nearby objects of interest, potentially dangerous obstacles, the user's location, and potential paths to the destination.
    Type: Application
    Filed: April 11, 2012
    Publication date: May 23, 2013
    Inventors: James D. Weiland, Mark S. Humayun, Gerard Medioni, Armand R. Tanguay, Jr., Vivek Pradeep, Laurent Itti
  • Patent number: 8446468
    Abstract: Systems and methods for moving object detection using a mobile infrared camera are described. The methods include receiving multiple frames, each frame including an image of at least a portion of a planar surface; stabilizing two consecutive frames of the plurality of frames, the stabilizing comprising determining a transformation mapping a succeeding frame of the two consecutive frames to a preceding frame of the two consecutive frames and, based on the transformation, warping the two consecutive frames to a reference frame of the plurality of frames; and detecting a movement of an object in the two consecutive frames, the movement based on a change in positions of the object in the preceding frame and the succeeding frame.
    Type: Grant
    Filed: June 19, 2008
    Date of Patent: May 21, 2013
    Assignee: University of Southern California
    Inventors: Gerard Medioni, Cheng-Hua Pai, Yuping Lin
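The stabilize-then-difference idea in the abstract above can be illustrated in one dimension. Here the estimated transformation is simplified to an integer pixel shift (real systems estimate a full homography between frames); the frames and threshold are invented toy data:

```python
# Toy sketch: warp the succeeding frame by the estimated camera motion
# (simplified to an integer shift), then flag pixels that still differ.
# Data, shift, and threshold are illustrative assumptions.

def shift_row(frame, dx, fill=0):
    """Warp a 1-D 'frame' by dx pixels (camera-motion compensation)."""
    n = len(frame)
    out = [fill] * n
    for i in range(n):
        j = i + dx
        if 0 <= j < n:
            out[j] = frame[i]
    return out

def detect_motion(prev, curr, dx, thresh=10):
    """Return indices where the aligned frames still differ (moving object)."""
    aligned = shift_row(curr, dx)
    return [i for i, (p, a) in enumerate(zip(prev, aligned))
            if abs(p - a) > thresh]

prev = [0, 0, 50, 0, 0, 0]   # bright object at pixel 2
curr = [0, 0, 0, 0, 50, 0]   # camera panned one pixel; object also moved
print(detect_motion(prev, curr, dx=-1))  # [2, 3]: object left 2, now at 3
```

After compensation, any residual change is attributed to independently moving objects rather than to the camera's own motion.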
  • Patent number: 8401276
    Abstract: Generating three-dimensional information can include accessing multiple different images of an object taken by one or more cameras; selecting one of the accessed images as a reference image; identifying corresponding features between the reference image and one or more different ones of the accessed images; determining first camera pose information for each accessed image based on one or more of the corresponding features, each first camera pose information indicative of a relationship between an imaging device and the object; determining a first three-dimensional structure of the object based on first camera pose information of two of the accessed images; and generating a second three-dimensional structure and a second camera pose information for each accessed image based on the first three-dimensional structure and the first camera pose information for each accessed image.
    Type: Grant
    Filed: May 20, 2009
    Date of Patent: March 19, 2013
    Assignee: University of Southern California
    Inventors: Tae Eun Choe, Gerard Medioni
  • Patent number: 8391548
    Abstract: Tracking multiple targets can include making different observations based on multiple different frames of one or more digital video feeds, determining an initial cover based on the observations, performing one or more modifications to the initial cover to generate a final cover, and using the final cover to track multiple targets in the one or more digital video feeds. Performing one or more modifications to generate a final cover can include selecting one or more adjustments from a group that includes temporal cover adjustments and spatial cover adjustments, and can include using likelihood information indicative of similarities in motion and appearance to distinguish different targets in the frames.
    Type: Grant
    Filed: May 21, 2009
    Date of Patent: March 5, 2013
    Assignee: University of Southern California
    Inventors: Gerard Medioni, Qian Yu
  • Patent number: 8351649
    Abstract: Technologies for object tracking can include accessing a video feed that captures an object in at least a portion of the video feed; operating a generative tracker to capture appearance variations of the object; operating a discriminative tracker to discriminate the object from the object's background, where operating the discriminative tracker can include using a sliding window to process data from the video feed, and advancing the sliding window to focus the discriminative tracker on recent appearance variations of the object; training the generative tracker and the discriminative tracker based on the video feed, where the training can include updating the generative tracker based on an output of the discriminative tracker, and updating the discriminative tracker based on an output of the generative tracker; and tracking the object with information based on an output from the generative tracker and an output from the discriminative tracker.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: January 8, 2013
    Assignee: University of Southern California
    Inventors: Gerard Medioni, Qian Yu, Thang Ba Dinh
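The co-training loop the abstract above describes, where each tracker is updated from the other's output, can be sketched structurally. The 1-D "frames", template averaging, and sliding-window scoring below are toy assumptions standing in for the real appearance models:

```python
# Structural sketch of generative/discriminative co-training: the
# generative tracker keeps a running appearance template, the
# discriminative tracker scores against a sliding window of recent
# foreground samples, and each updates from the other's output.
# All models and data are toy assumptions.

from collections import deque

class GenerativeTracker:
    """Tracks by matching a running appearance template."""
    def __init__(self, template):
        self.template = template
    def locate(self, frame):
        return min(range(len(frame)),
                   key=lambda i: abs(frame[i] - self.template))
    def update(self, value):
        self.template = 0.5 * self.template + 0.5 * value

class DiscriminativeTracker:
    """Scores candidates against a sliding window of foreground samples."""
    def __init__(self, window=3):
        self.samples = deque(maxlen=window)
    def locate(self, frame):
        fg = sum(self.samples) / len(self.samples)
        return min(range(len(frame)), key=lambda i: abs(frame[i] - fg))
    def update(self, value):
        self.samples.append(value)

frames = [[0, 90, 5], [0, 5, 95], [88, 0, 5]]   # target is the bright pixel
gen, disc = GenerativeTracker(90), DiscriminativeTracker()
disc.update(90)

positions = []
for frame in frames:
    pos = gen.locate(frame)      # generative hypothesis
    disc.update(frame[pos])      # discriminative learns from it
    pos = disc.locate(frame)     # discriminative refinement
    gen.update(frame[pos])       # generative learns from it
    positions.append(pos)
print(positions)                 # [1, 2, 0]
```

The complementary update is the point: the generative model adapts to gradual appearance change while the discriminative model, focused on recent samples, keeps the pair from drifting onto background.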
  • Publication number: 20120321128
    Abstract: Technologies for object tracking can include accessing a video feed that captures an object in at least a portion of the video feed; operating a generative tracker to capture appearance variations of the object; operating a discriminative tracker to discriminate the object from the object's background, where operating the discriminative tracker can include using a sliding window to process data from the video feed, and advancing the sliding window to focus the discriminative tracker on recent appearance variations of the object; training the generative tracker and the discriminative tracker based on the video feed, where the training can include updating the generative tracker based on an output of the discriminative tracker, and updating the discriminative tracker based on an output of the generative tracker; and tracking the object with information based on an output from the generative tracker and an output from the discriminative tracker.
    Type: Application
    Filed: April 1, 2009
    Publication date: December 20, 2012
    Inventors: Gerard Medioni, Qian Yu, Thang Ba Dinh
  • Patent number: 8126261
    Abstract: A 3D face reconstruction technique using 2D images, such as photographs of a face, is described. Prior face knowledge or a generic face is used to extract sparse 3D information from the images and to identify image pairs. Bundle adjustment is carried out to determine more accurate 3D camera positions, image pairs are rectified, and dense 3D face information is extracted without using the prior face knowledge. Outliers are removed, e.g., by using tensor voting. A 3D surface is extracted from the dense 3D information and surface detail is extracted from the images.
    Type: Grant
    Filed: July 25, 2007
    Date of Patent: February 28, 2012
    Assignee: University of Southern California
    Inventors: Gerard Medioni, Douglas Fidaleo
  • Patent number: 8073196
    Abstract: Among other things, methods, systems and computer program products are described for detecting and tracking a moving object in a scene. One or more residual pixels are identified from video data. At least two geometric constraints are applied to the identified one or more residual pixels. A disparity of the one or more residual pixels to the applied at least two geometric constraints is calculated. Based on the detected disparity, the one or more residual pixels are classified as belonging to parallax or independent motion and the parallax classified residual pixels are filtered. Further, a moving object is tracked in the video data. Tracking the object includes representing the detected disparity in probabilistic likelihood models. Tracking the object also includes accumulating the probabilistic likelihood models within a number of frames during the parallax filtering.
    Type: Grant
    Filed: October 16, 2007
    Date of Patent: December 6, 2011
    Assignee: University of Southern California
    Inventors: Chang Yuan, Gerard Medioni, Jinman Kang, Isaac Cohen
  • Patent number: 7953675
    Abstract: A tensor voting scheme which can be used in an arbitrary number N of dimensions, up to several hundreds. The voting scheme can operate on unorganized point inputs, which can be oriented or unoriented, and estimate the intrinsic dimensionality at each point. Moreover it can estimate the tangent and normal space of a manifold passing through each point based solely on local operations.
    Type: Grant
    Filed: June 29, 2006
    Date of Patent: May 31, 2011
    Inventors: Gerard Medioni, Philippos Mordohai
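The core of the tensor-voting scheme in the abstract above can be illustrated in 2D: neighbours of a point cast second-order votes (outer products of unit directions), and the eigenvalues of the accumulated tensor reveal the local intrinsic dimensionality. This is a bare-bones teaching sketch, not the patented N-dimensional scheme (which adds, among other things, distance-based vote decay and oriented inputs):

```python
# Bare-bones 2D tensor-voting sketch: accumulate outer products of unit
# directions to neighbours, then read dimensionality off the eigenvalues.
# One dominant eigenvalue suggests a curve; two comparable ones, a region.

import math

def vote_tensor(point, neighbours):
    """Accumulate a 2x2 tensor from unit directions to neighbours."""
    t = [[0.0, 0.0], [0.0, 0.0]]
    px, py = point
    for qx, qy in neighbours:
        dx, dy = qx - px, qy - py
        norm = math.hypot(dx, dy)
        ux, uy = dx / norm, dy / norm
        t[0][0] += ux * ux; t[0][1] += ux * uy
        t[1][0] += uy * ux; t[1][1] += uy * uy
    return t

def eigenvalues_2x2(t):
    """Closed-form eigenvalues of a symmetric 2x2 matrix, largest first."""
    tr = t[0][0] + t[1][1]
    det = t[0][0] * t[1][1] - t[0][1] * t[1][0]
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + root, tr / 2 - root

# Neighbours lying on a horizontal line through (0, 0): curve-like.
lam1, lam2 = eigenvalues_2x2(
    vote_tensor((0, 0), [(-2, 0), (-1, 0), (1, 0), (2, 0)]))
print(lam1 - lam2 > lam2)   # True: strong 1-D (curve) saliency
```

The eigenvector for the dominant eigenvalue estimates the local tangent direction, which is how the scheme recovers the tangent and normal space of a manifold from purely local operations.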