Patents by Inventor Gerard Medioni
Gerard Medioni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11195318
Abstract: A non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to automatically generate a 3D avatar of a living being, including automatically: causing one or more sensors to generate 3D data indicative of the three-dimensional shape and appearance of at least a portion of the living being; and generating a virtual character based on the 3D data that can be animated and controlled.
Type: Grant
Filed: April 23, 2015
Date of Patent: December 7, 2021
Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
Inventors: Evan Suma, Gerard Medioni, Mark Bolas, Ari Y. Shapiro, Wei-Wen Feng, Ruizhe Wang
-
Patent number: 11100661
Abstract: A method for depth mapping includes receiving optical radiation reflected from multiple points on an object and processing the received optical radiation to generate depth data including multiple candidate depth coordinates for each of a plurality of pixels and respective measures of confidence associated with the candidate depth coordinates. One of the candidate depth coordinates is selected at each of the plurality of the pixels responsively to the respective measures of confidence. A depth map of the object is output, including the selected one of the candidate depth coordinates at each of the plurality of the pixels.
Type: Grant
Filed: March 16, 2020
Date of Patent: August 24, 2021
Assignee: APPLE INC.
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
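The per-pixel selection step this abstract describes can be sketched with NumPy. This is a minimal illustration of picking the highest-confidence candidate depth at each pixel, not the patented implementation; the array shapes and the argmax selection rule are assumptions:

```python
import numpy as np

def select_depth(candidates, confidences):
    """Pick, at each pixel, the candidate depth with the highest confidence.

    candidates:  (K, H, W) array of K candidate depth maps.
    confidences: (K, H, W) array of per-candidate confidence measures.
    Returns an (H, W) depth map.
    """
    best = np.argmax(confidences, axis=0)  # (H, W) index of the winning candidate
    return np.take_along_axis(candidates, best[None], axis=0)[0]

# Two candidate depths per pixel on a 2x2 image.
cand = np.array([[[1.0, 2.0], [3.0, 4.0]],
                 [[9.0, 8.0], [7.0, 6.0]]])
conf = np.array([[[0.9, 0.1], [0.8, 0.2]],
                 [[0.1, 0.9], [0.2, 0.8]]])
depth = select_depth(cand, conf)
# candidate 0 wins where its confidence is higher, candidate 1 elsewhere
```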
-
Publication number: 20200219277
Abstract: A method for depth mapping includes receiving optical radiation reflected from multiple points on an object and processing the received optical radiation to generate depth data including multiple candidate depth coordinates for each of a plurality of pixels and respective measures of confidence associated with the candidate depth coordinates. One of the candidate depth coordinates is selected at each of the plurality of the pixels responsively to the respective measures of confidence. A depth map of the object is output, including the selected one of the candidate depth coordinates at each of the plurality of the pixels.
Type: Application
Filed: March 16, 2020
Publication date: July 9, 2020
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
-
Patent number: 10636155
Abstract: A method for depth mapping includes acquiring first depth data with respect to an object using a first depth mapping technique and providing first candidate depth coordinates for a plurality of pixels, and acquiring second depth data with respect to the object using a second depth mapping technique, different from the first depth mapping technique, and providing second candidate depth coordinates for the plurality of pixels. A weighted voting process is applied to the first and second depth data in order to select one of the candidate depth coordinates at each pixel. A depth map of the object is output, including the selected one of the candidate depth coordinates at each pixel.
Type: Grant
Filed: November 5, 2018
Date of Patent: April 28, 2020
Assignee: APPLE INC.
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
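The weighted voting between two depth mapping techniques can be sketched as follows. This is an illustrative toy, not the patented procedure: the agreement tolerance and the merge-by-weighted-average rule are assumptions.

```python
import numpy as np

def vote_depth(d1, w1, d2, w2, tol=0.1):
    """Per-pixel weighted vote between candidate depths from two techniques.

    Candidates that agree to within `tol` reinforce each other and are merged
    by a weighted average; otherwise the higher-weight candidate wins.
    All arrays are (H, W).
    """
    agree = np.abs(d1 - d2) <= tol
    merged = (w1 * d1 + w2 * d2) / (w1 + w2)
    winner = np.where(w1 >= w2, d1, d2)
    return np.where(agree, merged, winner)

# Candidates from two techniques for a 1x2 depth map.
d_first  = np.array([[1.0, 5.0]])
w_first  = np.array([[0.5, 0.9]])
d_second = np.array([[1.05, 2.0]])
w_second = np.array([[0.5, 0.1]])
depth = vote_depth(d_first, w_first, d_second, w_second)
# pixel 0: the candidates agree and are averaged; pixel 1: the first wins
```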
-
Publication number: 20190087969
Abstract: A method for depth mapping includes acquiring first depth data with respect to an object using a first depth mapping technique and providing first candidate depth coordinates for a plurality of pixels, and acquiring second depth data with respect to the object using a second depth mapping technique, different from the first depth mapping technique, and providing second candidate depth coordinates for the plurality of pixels. A weighted voting process is applied to the first and second depth data in order to select one of the candidate depth coordinates at each pixel. A depth map of the object is output, including the selected one of the candidate depth coordinates at each pixel.
Type: Application
Filed: November 5, 2018
Publication date: March 21, 2019
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
-
Patent number: 10152801
Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
Type: Grant
Filed: September 21, 2016
Date of Patent: December 11, 2018
Assignee: APPLE INC.
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
-
Patent number: 9582889
Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
Type: Grant
Filed: July 28, 2010
Date of Patent: February 28, 2017
Assignee: APPLE INC.
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
-
Publication number: 20170011524
Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
Type: Application
Filed: September 21, 2016
Publication date: January 12, 2017
Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
-
Patent number: 9483703
Abstract: A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of the image capture device at the time the image was captured. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point.
Type: Grant
Filed: May 14, 2014
Date of Patent: November 1, 2016
Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
Inventors: Gerard Medioni, Zhuoliang Kang
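The matching step described in this abstract, pairing image feature points with 3D-model feature points, can be sketched as a nearest-descriptor search with a ratio test. The descriptor vectors and the ratio threshold below are assumptions; in a full system the resulting 2D-3D correspondences would feed a standard PnP solver to recover the camera's position and orientation:

```python
import numpy as np

def match_features(img_desc, model_desc, ratio=0.8):
    """Match each image feature descriptor to its most similar 3D-model
    descriptor, keeping a match only when the nearest neighbor is clearly
    better than the second nearest (Lowe-style ratio test).

    Returns (image_index, model_index) pairs.
    """
    matches = []
    for i, d in enumerate(img_desc):
        dists = np.linalg.norm(model_desc - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy 2-D descriptors: three model features, two image features.
model_desc = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 0.0]])
img_desc   = np.array([[0.1, 0.0], [5.1, 0.1]])
pairs = match_features(img_desc, model_desc)
```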
-
Publication number: 20150356767
Abstract: A non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to automatically generate a 3D avatar of a living being, including automatically: causing one or more sensors to generate 3D data indicative of the three-dimensional shape and appearance of at least a portion of the living being; and generating a virtual character based on the 3D data that can be animated and controlled.
Type: Application
Filed: April 23, 2015
Publication date: December 10, 2015
Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
Inventors: Evan Suma, Gerard Medioni, Mark Bolas, Ari Y. Shapiro, Wei-Wen Feng, Ruizhe Wang
-
Publication number: 20140340489
Abstract: A product may receive each image in a stream of video images of a scene, and before processing the next image, generate information indicative of the position and orientation of the image capture device at the time the image was captured. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point.
Type: Application
Filed: May 14, 2014
Publication date: November 20, 2014
Applicant: UNIVERSITY OF SOUTHERN CALIFORNIA
Inventors: Gerard Medioni, Zhuoliang Kang
-
Publication number: 20130131985
Abstract: The system comprises a wearable, electronic image acquisition and processing system (or visual enhancement system) to guide visually impaired individuals through their environment, providing information to the user about nearby objects of interest, potentially dangerous obstacles, their location, and potential paths to their destination.
Type: Application
Filed: April 11, 2012
Publication date: May 23, 2013
Inventors: James D. Weiland, Mark S. Humayun, Gerard Medioni, Armand R. Tanguay, Jr., Vivek Pradeep, Laurent Itti
-
Patent number: 8446468
Abstract: Systems and methods for moving object detection using a mobile infrared camera are described. The methods include receiving multiple frames, each frame including an image of at least a portion of a planar surface; stabilizing two consecutive frames of the plurality of frames, the stabilizing comprising determining a transformation mapping a succeeding frame of the two consecutive frames to a preceding frame of the two consecutive frames and, based on the transformation, warping the two consecutive frames to a reference frame of the plurality of frames; and detecting a movement of an object in the two consecutive frames, the movement based on a change in positions of the object between the preceding frame and the succeeding frame.
Type: Grant
Filed: June 19, 2008
Date of Patent: May 21, 2013
Assignee: University of Southern California
Inventors: Gerard Medioni, Cheng-Hua Pai, Yuping Lin
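The stabilize-then-difference idea in this abstract can be sketched as follows. A real system estimates a full homography from tracked background features; the known integer translation used here is a simplifying assumption, as are the threshold and the circular warp:

```python
import numpy as np

def detect_motion(prev, curr, shift, thresh=0.5):
    """Warp `curr` back by the camera motion `shift` = (dy, dx) so the static
    background lines up with `prev`, then flag pixels whose intensity still
    changed - those belong to independently moving objects.
    """
    stabilized = np.roll(curr, (-shift[0], -shift[1]), axis=(0, 1))
    return np.abs(stabilized - prev) > thresh

# Static background; the camera pans right by one pixel between frames.
bg = np.arange(16, dtype=float).reshape(4, 4) * 0.1
prev = bg.copy()
prev[1, 1] += 5.0                        # object at (1, 1) in the first frame
curr = np.roll(bg, (0, 1), axis=(0, 1))  # background as seen after the pan
curr[1, 3] += 5.0                        # the object has also moved in the scene
mask = detect_motion(prev, curr, (0, 1))
# after stabilization, only the object's old and new positions differ
```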
-
Patent number: 8401276
Abstract: Generating three-dimensional information can include accessing multiple different images of an object taken by one or more cameras; selecting one of the accessed images as a reference image; identifying corresponding features between the reference image and one or more different ones of the accessed images; determining first camera pose information for each accessed image based on one or more of the corresponding features, each first camera pose information indicative of a relationship between an imaging device and the object; determining a first three-dimensional structure of the object based on first camera pose information of two of the accessed images; and generating a second three-dimensional structure and a second camera pose information for each accessed image based on the first three-dimensional structure and the first camera pose information for each accessed image.
Type: Grant
Filed: May 20, 2009
Date of Patent: March 19, 2013
Assignee: University of Southern California
Inventors: Tae Eun Choe, Gerard Medioni
-
Patent number: 8391548
Abstract: Tracking multiple targets can include making different observations based on multiple different frames of one or more digital video feeds, determining an initial cover based on the observations, performing one or more modifications to the initial cover to generate a final cover, and using the final cover to track multiple targets in the one or more digital video feeds. Performing one or more modifications to generate a final cover can include selecting one or more adjustments from a group that includes temporal cover adjustments and spatial cover adjustments, and can include using likelihood information indicative of similarities in motion and appearance to distinguish different targets in the frames.
Type: Grant
Filed: May 21, 2009
Date of Patent: March 5, 2013
Assignee: University of Southern California
Inventors: Gerard Medioni, Qian Yu
-
Patent number: 8351649
Abstract: Technologies for object tracking can include accessing a video feed that captures an object in at least a portion of the video feed; operating a generative tracker to capture appearance variations of the object; operating a discriminative tracker to discriminate the object from the object's background, where operating the discriminative tracker can include using a sliding window to process data from the video feed, and advancing the sliding window to focus the discriminative tracker on recent appearance variations of the object; training the generative tracker and the discriminative tracker based on the video feed, where the training can include updating the generative tracker based on an output of the discriminative tracker, and updating the discriminative tracker based on an output of the generative tracker; and tracking the object with information based on an output from the generative tracker and an output from the discriminative tracker.
Type: Grant
Filed: April 1, 2009
Date of Patent: January 8, 2013
Assignee: University of Southern California
Inventors: Gerard Medioni, Qian Yu, Thang Ba Dinh
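The co-training structure described here can be sketched with toy stand-ins: a running-mean template plays the generative model, and a sliding window of recent patches plays the discriminative model. The real method trains a classifier against background samples, so both models below, and the additive score fusion, are simplifying assumptions:

```python
import numpy as np
from collections import deque

class GenerativeTracker:
    """Running-mean appearance template (toy stand-in for a subspace model)."""
    def __init__(self, patch):
        self.template = patch.astype(float)
    def score(self, patch):
        return -np.mean((patch - self.template) ** 2)  # higher = better match
    def update(self, patch):
        self.template = 0.9 * self.template + 0.1 * patch

class DiscriminativeTracker:
    """Keeps only the last `window` object patches - a sliding window that
    focuses the model on recent appearance variations."""
    def __init__(self, patch, window=3):
        self.window = deque([patch.astype(float)], maxlen=window)
    def score(self, patch):
        return -min(np.mean((patch - t) ** 2) for t in self.window)
    def update(self, patch):
        self.window.append(patch.astype(float))

def co_track(gen, dis, candidates):
    """Pick the candidate patch both trackers agree on, then cross-update:
    each tracker is trained on the result of the combined decision."""
    scores = [gen.score(c) + dis.score(c) for c in candidates]
    best = int(np.argmax(scores))
    gen.update(candidates[best])
    dis.update(candidates[best])
    return best

gen = GenerativeTracker(np.ones((2, 2)))
dis = DiscriminativeTracker(np.ones((2, 2)))
candidates = [np.zeros((2, 2)), np.full((2, 2), 0.95)]
best = co_track(gen, dis, candidates)  # candidate 1 matches both models
```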
-
Publication number: 20120321128
Abstract: Technologies for object tracking can include accessing a video feed that captures an object in at least a portion of the video feed; operating a generative tracker to capture appearance variations of the object; operating a discriminative tracker to discriminate the object from the object's background, where operating the discriminative tracker can include using a sliding window to process data from the video feed, and advancing the sliding window to focus the discriminative tracker on recent appearance variations of the object; training the generative tracker and the discriminative tracker based on the video feed, where the training can include updating the generative tracker based on an output of the discriminative tracker, and updating the discriminative tracker based on an output of the generative tracker; and tracking the object with information based on an output from the generative tracker and an output from the discriminative tracker.
Type: Application
Filed: April 1, 2009
Publication date: December 20, 2012
Inventors: Gerard Medioni, Qian Yu, Thang Ba Dinh
-
Patent number: 8126261
Abstract: A 3D face reconstruction technique using 2D images, such as photographs of a face, is described. Prior face knowledge or a generic face is used to extract sparse 3D information from the images and to identify image pairs. Bundle adjustment is carried out to determine more accurate 3D camera positions, image pairs are rectified, and dense 3D face information is extracted without using the prior face knowledge. Outliers are removed, e.g., by using tensor voting. A 3D surface is extracted from the dense 3D information and surface detail is extracted from the images.
Type: Grant
Filed: July 25, 2007
Date of Patent: February 28, 2012
Assignee: University of Southern California
Inventors: Gerard Medioni, Douglas Fidaleo
-
Patent number: 8073196
Abstract: Among other things, methods, systems and computer program products are described for detecting and tracking a moving object in a scene. One or more residual pixels are identified from video data. At least two geometric constraints are applied to the identified one or more residual pixels. A disparity of the one or more residual pixels to the applied at least two geometric constraints is calculated. Based on the detected disparity, the one or more residual pixels are classified as belonging to parallax or independent motion and the parallax classified residual pixels are filtered. Further, a moving object is tracked in the video data. Tracking the object includes representing the detected disparity in probabilistic likelihood models. Tracking the object also includes accumulating the probabilistic likelihood models within a number of frames during the parallax filtering.
Type: Grant
Filed: October 16, 2007
Date of Patent: December 6, 2011
Assignee: University of Southern California
Inventors: Chang Yuan, Gerard Medioni, Jinman Kang, Isaac Cohen
-
Patent number: 7953675
Abstract: A tensor voting scheme that can be used in an arbitrary number N of dimensions, up to several hundred. The voting scheme can operate on unorganized point inputs, which can be oriented or unoriented, and estimate the intrinsic dimensionality at each point. Moreover, it can estimate the tangent and normal space of a manifold passing through each point based solely on local operations.
Type: Grant
Filed: June 29, 2006
Date of Patent: May 31, 2011
Inventors: Gerard Medioni, Philippos Mordohai
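The per-point estimation this abstract describes can be hinted at with a drastically simplified sketch: accumulate outer products of unit directions to neighboring points and read local dimensionality off the eigenvalue spread. This is closer to weighted local PCA than to the full tensor voting machinery; the Gaussian decay and the scale `sigma` are assumptions:

```python
import numpy as np

def structure_eigenvalues(points, idx, sigma=1.0):
    """Accumulate Gaussian-weighted outer products of unit directions from
    points[idx] to every other point, then return the eigenvalues of the
    resulting NxN tensor in descending order. One dominant eigenvalue
    suggests a curve, two a surface, and so on - in any dimension N.
    """
    p = points[idx]
    n = points.shape[1]
    T = np.zeros((n, n))
    for q in np.delete(points, idx, axis=0):
        v = q - p
        d = np.linalg.norm(v)
        if d == 0.0:
            continue                     # coincident point casts no vote
        u = v / d
        T += np.exp(-(d / sigma) ** 2) * np.outer(u, u)
    return np.sort(np.linalg.eigvalsh(T))[::-1]

# Points sampled along a line in 3D: one dominant eigenvalue is expected.
line = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
lam = structure_eigenvalues(line, 1)
```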