Patents by Inventor Shmuel Peleg
Shmuel Peleg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8249394
Abstract: Natural looking output images are computed from input images based on given user constraints. Pixels in the output images are assigned a shift such that the respective output pixel value is derived from the value of the input pixel whose location is related to that of the output pixel by the shift, at least one shift being non-zero. The shift is determined by an optimization process adapted to minimize a cost function that includes a data term on the shifts of single pixels and a smoothness term on the shifts of pixel pairs. The output image is computed by applying the optimized shift-map between the input and output pixels. The data term can include shift constraints that limit the location in the output images of selected input pixels, and saliency constraints, indicating a preference that selected pixels in the input images will or will not appear in the output image.
Type: Grant
Filed: March 12, 2010
Date of Patent: August 21, 2012
Inventors: Shmuel Peleg, Yael Pritch, Eitam Kav Venaki
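The cost function in this abstract combines a per-pixel data term with a pairwise smoothness term over neighboring shifts. A minimal sketch of evaluating such a shift-map energy (the function name, the 4-neighbor Potts-style smoothness penalty, and the `data_cost` callback are illustrative assumptions, not the patent's exact formulation):

```python
import numpy as np

def shiftmap_energy(shifts, data_cost, alpha=1.0):
    """Evaluate a simplified shift-map energy.

    shifts:    H x W x 2 integer array; shifts[y, x] is the (dy, dx)
               assigned to output pixel (y, x).
    data_cost: callable (y, x, shift) -> float, the per-pixel data term.
    alpha:     weight of the smoothness term (assumed scalar weight).
    """
    H, W, _ = shifts.shape
    # Data term: summed cost of assigning each single pixel its shift.
    e_data = sum(data_cost(y, x, tuple(shifts[y, x]))
                 for y in range(H) for x in range(W))
    # Smoothness term: count 4-neighbor pairs whose shifts differ
    # (a Potts-style stand-in for the patent's pairwise term).
    e_smooth = 0.0
    e_smooth += np.sum(np.any(shifts[1:, :] != shifts[:-1, :], axis=-1))
    e_smooth += np.sum(np.any(shifts[:, 1:] != shifts[:, :-1], axis=-1))
    return e_data + alpha * e_smooth
```

In practice such energies are minimized with discrete optimizers (e.g. graph cuts); this sketch only evaluates a candidate labeling.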
-
Publication number: 20120092446
Abstract: A computer-implemented method and system transforms a first sequence of video frames of a first dynamic scene to a second sequence of at least two video frames depicting a second dynamic scene. A subset of video frames in the first sequence is obtained that show movement of at least one object having a plurality of pixels located at respective x, y coordinates and portions from the subset are selected that show non-spatially overlapping appearances of the at least one object in the first dynamic scene. The portions are copied from at least three different input frames to at least two successive frames of the second sequence without changing the respective x, y coordinates of the pixels in the object and such that at least one of the frames of the second sequence contains at least two portions that appear at different frames in the first sequence.
Type: Application
Filed: December 20, 2011
Publication date: April 19, 2012
Applicant: YISSUM Research Development Company of the Hebrew University of Jerusalem LTD.
Inventors: Shmuel PELEG, Alexander RAV-ACHA
-
Patent number: 8102406
Abstract: A computer-implemented method and system transforms a first sequence of video frames of a first dynamic scene to a second sequence of at least two video frames depicting a second dynamic scene. A subset of video frames in the first sequence is obtained that show movement of at least one object having a plurality of pixels located at respective x, y coordinates and portions from the subset are selected that show non-spatially overlapping appearances of the at least one object in the first dynamic scene. The portions are copied from at least three different input frames to at least two successive frames of the second sequence without changing the respective x, y coordinates of the pixels in the object and such that at least one of the frames of the second sequence contains at least two portions that appear at different frames in the first sequence.
Type: Grant
Filed: November 15, 2006
Date of Patent: January 24, 2012
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Alexander Rav-Acha
-
Publication number: 20110043604
Abstract: A panoramic image is generated from a sequence of input frames captured by a camera that translates relative to a scene having at least two points at different distances from the camera. A processor (13) is responsive to optical flow between corresponding points in temporally different input frames for computing flow statistics for at least portions of some of the input frames and for computing respective stitching costs between some of the portions and respective neighboring portions thereof. A selection unit (18) selects a sequence of portions and respective neighboring portions that minimizes a cost function that is a function of the flow statistics and stitching costs. A stitching unit (21) stitches the selected portions and respective neighboring portions so as to form a panoramic image of the scene, which may then be displayed or post-processed.
Type: Application
Filed: March 13, 2008
Publication date: February 24, 2011
Applicant: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Alex Rav-Acha, Giora Engel
-
Patent number: 7852370
Abstract: A computer-implemented method and system for transforming a first sequence of video frames of a first dynamic scene captured at regular time intervals to a second sequence of video frames depicting a second dynamic scene wherein for at least two successive frames of the second sequence, there are selected from at least three different frames of the first sequence portions that are spatially contiguous in the first dynamic scene and copied to a corresponding frame of the second sequence so as to maintain their spatial continuity in the first sequence. In a second aspect, for at least one feature in the first dynamic scene respective portions of the first sequence of video frames are sampled at a different rate than surrounding portions of the first sequence of video frames; and the sampled portions are copied to a corresponding frame of the second sequence.
Type: Grant
Filed: November 2, 2005
Date of Patent: December 14, 2010
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Alexander Rav-Acha, Daniel Lischinski
-
Publication number: 20100232729
Abstract: Natural looking output images are computed from input images based on given user constraints. Pixels in the output images are assigned a shift such that the respective output pixel value is derived from the value of the input pixel whose location is related to that of the output pixel by the shift, at least one shift being non-zero. The shift is determined by an optimization process adapted to minimize a cost function that includes a data term on the shifts of single pixels and a smoothness term on the shifts of pixel pairs. The output image is computed by applying the optimized shift-map between the input and output pixels. The data term can include shift constraints that limit the location in the output images of selected input pixels, and saliency constraints, indicating a preference that selected pixels in the input images will or will not appear in the output image.
Type: Application
Filed: March 12, 2010
Publication date: September 16, 2010
Applicant: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Yael Pritch, Eitam Kav Venaki
-
Publication number: 20100220209
Abstract: A system and method for generating a mosaic image from respective regions in a plurality of individual images, at least one of the regions being distorted and having a left and/or right edge that is tilted relative to a direction of view of the respective image. The distorted regions are rectified so as to form a respective rectified rectangular region and at least some of the rectified rectangular regions are mosaiced to form the mosaic image.
Type: Application
Filed: March 4, 2010
Publication date: September 2, 2010
Applicants: Yissum Research Development Company of the Hebrew University, Emaki, Inc.
Inventors: Shmuel Peleg, Assaf Zomet, Chetan Arora, Takeo Miyazawa
-
Publication number: 20100125581
Abstract: Computer-implemented method, system, and techniques for summarization, searching, and indexing of video are provided, wherein data related to objects detected in the video in a selected time interval is received and the objects are clustered into clusters such that each cluster includes objects that are similar in respect to a selected feature or a combination of features. A video summary is generated based on the computed clusters.
Type: Application
Filed: November 20, 2009
Publication date: May 20, 2010
Inventors: Shmuel Peleg, Yael Pritch, Sarit Ratovitch, Avishai Hendel
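The clustering step this abstract describes, grouping detected objects that are similar with respect to a selected feature, can be illustrated with a toy greedy scheme (the scalar `feature` key, the threshold rule, and the greedy assignment are hypothetical simplifications, not the patent's clustering method):

```python
def cluster_objects(objects, threshold):
    """Greedily cluster detected objects by one scalar feature.

    objects:   list of dicts, each with a numeric "feature" value
               (an assumed stand-in for a real feature vector).
    threshold: an object joins a cluster if its feature is within
               `threshold` of that cluster's current mean.
    """
    clusters = []
    for obj in objects:
        for cluster in clusters:
            mean = sum(o["feature"] for o in cluster) / len(cluster)
            if abs(obj["feature"] - mean) <= threshold:
                cluster.append(obj)  # similar enough: join this cluster
                break
        else:
            clusters.append([obj])   # no similar cluster: start a new one
    return clusters
```

A video summary could then show one representative per cluster, or present clusters as index entries for search.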
-
Publication number: 20100098340
Abstract: A method of selecting images for lenticular printing. The method comprises receiving a sequence having a plurality of images, selecting a segment of the sequence according to one or more lenticular viewing measures, and outputting the segment for allowing the lenticular printing.
Type: Application
Filed: January 15, 2008
Publication date: April 22, 2010
Inventors: Assaf Zomet, Shmuel Peleg, Ben Denon
-
Publication number: 20100092037
Abstract: In a system and method for generating a synopsis video from a source video, at least three different source objects are selected according to one or more defined constraints, each source object being a connected subset of image points from at least three different frames of the source video. One or more synopsis objects are sampled from each selected source object by temporal sampling using image points derived from specified time periods. For each synopsis object a respective time for starting its display in the synopsis video is determined, and for each synopsis object and each frame a respective color transformation for displaying the synopsis object may be determined. The synopsis video is displayed by displaying selected synopsis objects at their respective time and color transformation, such that in the synopsis video at least three points that each derive from different respective times in the source video are displayed simultaneously.
Type: Application
Filed: December 9, 2007
Publication date: April 15, 2010
Applicant: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Yael Pritch, Alexander Rav-Acha, Avital Gutman
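The step of determining a start time for each synopsis object, so that objects from different source times play simultaneously without colliding, can be sketched as a greedy collision-minimizing placement (a toy stand-in for the constrained optimization the abstract describes; the `(t, y, x)` point representation and collision cost are assumptions):

```python
def schedule_synopsis(objects, synopsis_len):
    """Greedily assign each object a start time in the synopsis.

    objects:      list of objects, each a list of (t, y, x) points
                  from the source video.
    synopsis_len: number of frames in the synopsis video.
    Returns one start frame per object, chosen to minimize overlap
    (collisions in time-and-space) with already-placed objects.
    """
    occupied = set()  # (t, y, x) cells already used in the synopsis
    starts = []
    for obj in objects:
        t0 = min(t for t, _, _ in obj)
        duration = max(t for t, _, _ in obj) - t0 + 1
        best_s, best_cost = 0, None
        for s in range(synopsis_len - duration + 1):
            shifted = {(t - t0 + s, y, x) for t, y, x in obj}
            cost = len(shifted & occupied)  # pixel collisions at this start
            if best_cost is None or cost < best_cost:
                best_s, best_cost = s, cost
        starts.append(best_s)
        occupied |= {(t - t0 + best_s, y, x) for t, y, x in obj}
    return starts
```

Two objects occupying the same pixels at the same source times are pushed to different synopsis start times, which is exactly the effect that lets a short synopsis show activity from distant source times at once.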
-
Publication number: 20090219300
Abstract: A computer-implemented method and system transforms a first sequence of video frames of a first dynamic scene to a second sequence of at least two video frames depicting a second dynamic scene. A subset of video frames in the first sequence is obtained that show movement of at least one object having a plurality of pixels located at respective x, y coordinates and portions from the subset are selected that show non-spatially overlapping appearances of the at least one object in the first dynamic scene. The portions are copied from at least three different input frames to at least two successive frames of the second sequence without changing the respective x, y coordinates of the pixels in the object and such that at least one of the frames of the second sequence contains at least two portions that appear at different frames in the first sequence.
Type: Application
Filed: November 15, 2006
Publication date: September 3, 2009
Applicant: YISSUM RESEARCH DEVELOPMENT COMPANY OF THE HEBREW UNIVERSITY OF JERUSALEM
Inventors: Shmuel Peleg, Alexander Rav-Acha
-
Patent number: 7477284
Abstract: Method and apparatus for generating images of a scene from image data of the scene and displaying the images to provide a sense of depth. In some embodiments of the method and apparatus the generated images are mosaics.
Type: Grant
Filed: April 19, 2001
Date of Patent: January 13, 2009
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Moshe Ben-Ezra, Yael Pritch
-
Patent number: 7373019
Abstract: A super-resolution enhanced image generating system is described for generating a super-resolution-enhanced image from an image of a scene, identified as image g0, comprising a base image and at least one other image gi, the system comprising an initial super-resolution enhanced image generator, an image projector module and a super-resolution enhanced image estimate update generator module. The initial super-resolution enhanced image generator module is configured to use the image g0 to generate a super-resolution enhanced image estimate. The image projector module is configured to selectively use a warping, a blurring and/or a decimation operator associated with the image gi to generate a projected super-resolution enhanced image estimate. The super-resolution enhanced image estimate update generator module is configured to use the input image gi and the super-resolution enhanced image estimate to generate an updated super-resolution enhanced image estimate.
Type: Grant
Filed: December 4, 2006
Date of Patent: May 13, 2008
Assignee: Yissum Research Development
Inventors: Assaf Zomet, Shmuel Peleg
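The project-then-update loop this abstract describes resembles iterated back-projection: project the high-resolution estimate through the imaging operators, compare with the observed low-resolution image, and back-project the residual. A minimal single-image, decimation-only sketch (real systems also apply the warping and blurring operators the abstract names; the block-average decimation and unit step size here are assumptions):

```python
import numpy as np

def backproject_step(hr_est, lr_obs, scale, step=1.0):
    """One simplified back-projection update of a high-res estimate.

    hr_est: current high-resolution estimate (H*scale x W*scale).
    lr_obs: one observed low-resolution image (H x W).
    scale:  integer decimation factor.
    """
    H, W = lr_obs.shape
    # Project: decimate the HR estimate onto the LR grid (block average
    # stands in for the patent's warp/blur/decimate operator chain).
    simulated = hr_est.reshape(H, scale, W, scale).mean(axis=(1, 3))
    # Back-project: upsample the residual and add it to the estimate.
    residual = lr_obs - simulated
    return hr_est + step * np.kron(residual, np.ones((scale, scale)))
```

Iterating this step over all input images gi drives the simulated low-resolution projections toward the observations, which is the fixed point the update generator module seeks.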
-
Publication number: 20070165022
Abstract: A method for automated computerized audio visual dubbing of a movie comprises (i) generating a three-dimensional head model of an actor in a movie, the head model being representative of specific facial features of the actor, and (ii) generating a three-dimensional head model of a dubber making target sounds for the actor, the dubber head model being representative of specific facial features of the dubber, as the target sounds are made. The method also comprises modifying at least a portion of the specific facial features of the actor head model according to the dubber head model such that the actor appears to be producing target sounds made by the dubber.
Type: Application
Filed: August 1, 2006
Publication date: July 19, 2007
Inventors: Shmuel Peleg, Ran Cohen, David Avnir
-
Publication number: 20070133903
Abstract: A super-resolution enhanced image generating system is described for generating a super-resolution-enhanced image from an image of a scene, identified as image g0, comprising a base image and at least one other image gi, the system comprising an initial super-resolution enhanced image generator, an image projector module and a super-resolution enhanced image estimate update generator module. The initial super-resolution enhanced image generator module is configured to use the image g0 to generate a super-resolution enhanced image estimate. The image projector module is configured to selectively use a warping, a blurring and/or a decimation operator associated with the image gi to generate a projected super-resolution enhanced image estimate. The super-resolution enhanced image estimate update generator module is configured to use the input image gi and the super-resolution enhanced image estimate to generate an updated super-resolution enhanced image estimate.
Type: Application
Filed: December 4, 2006
Publication date: June 14, 2007
Applicant: Yissum Research Development
Inventors: Assaf Zomet, Shmuel Peleg
-
Publication number: 20060262184
Abstract: A computer-implemented method and system for transforming a first sequence of video frames of a first dynamic scene captured at regular time intervals to a second sequence of video frames depicting a second dynamic scene wherein for at least two successive frames of the second sequence, there are selected from at least three different frames of the first sequence portions that are spatially contiguous in the first dynamic scene and copied to a corresponding frame of the second sequence so as to maintain their spatial continuity in the first sequence. In a second aspect, for at least one feature in the first dynamic scene respective portions of the first sequence of video frames are sampled at a different rate than surrounding portions of the first sequence of video frames; and the sampled portions are copied to a corresponding frame of the second sequence.
Type: Application
Filed: November 2, 2005
Publication date: November 23, 2006
Applicant: YISSUM RESEARCH DEVELOPMENT COMPANY OF THE HEBREW UNIVERSITY OF JERUSALEM
Inventors: Shmuel Peleg, Alexander Rav-Acha, Daniel Lischinski
-
Publication number: 20060215934
Abstract: A computer-implemented method and system determines camera movement of a new frame relative to a sequence of frames of images containing at least one dynamic object and for which relative camera movement is assumed. From changes in color values of sets of pixels in different frames of the sequence for which respective locations of all pixels in each set are adjusted so as to neutralize the effect of camera movement between the respective frames in the sequence containing the pixels, corresponding color values of the pixels in the new frame are predicted and used to determine camera movement as a relative movement of the new frame and the predicted frame. An embodiment of the invention maintains an aligned space-time volume of frames for which camera movement is neutralized and adds each new frame to the aligned space-time volume after neutralizing camera movement in the new frame.
Type: Application
Filed: March 20, 2006
Publication date: September 28, 2006
Applicants: Yissum Research Development Co. of the Hebrew University of Jerusalem (Israeli Co.), HumanEyes Technologies Ltd. (Israeli Co.)
Inventors: Shmuel Peleg, Alexander Rav-Acha, Yael Pritch
-
Patent number: 7109993
Abstract: A method for automated computerized audio visual dubbing of a movie comprises (i) generating a three-dimensional head model of an actor in a movie, the head model being representative of specific facial features of the actor, and (ii) generating a three-dimensional head model of a dubber making target sounds for the actor, the dubber head model being representative of specific facial features of the dubber, as the target sounds are made. The method also comprises modifying at least a portion of the specific facial features of the actor head model according to the dubber head model such that the actor appears to be producing target sounds made by the dubber.
Type: Grant
Filed: October 24, 2002
Date of Patent: September 19, 2006
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Ran Cohen, David Avnir
-
Publication number: 20060120625
Abstract: A system is described for generating a rectified mosaic image from a plurality of individual images, the system comprising a quadrangular region defining module, a warping module and a mosaicing module. The quadrangular region defining module is configured to define in one individual image a quadrangular region in relation to two points on a vertical anchor in the one individual image and mappings of two points on a vertical anchor in at least one other individual image into the one individual image. The warping module is configured to warp the quadrangular region to a rectangular region. The mosaicing module is configured to mosaic the quadrangular region to the mosaic image. The system further generates a mosaic from a plurality of panoramic images, the system comprising a motion determining module, a normalizing module, a strip selection module, and a mosaicing module. The motion determining module is configured to determine image motion between two panoramic images.
Type: Application
Filed: November 10, 2005
Publication date: June 8, 2006
Applicants: Yissum Research Development Company of the Hebrew University, Emaki, Inc.
Inventors: Shmuel Peleg, Assaf Zomet, Chetan Arora, Takeo Miyazawa
-
Patent number: 7006124
Abstract: Video mosaicing is commonly used to increase the visual field of view by pasting together many video frames. The invention provides for image mosaicing for general camera motion, including forward camera motion and zoom. After computing the motion between the images in a sequence, strips are selected from individual frames such that the strips are approximately perpendicular to the optical flow. The strips are warped such that the optical flow becomes parallel, and are pasted to a panoramic mosaic. The warping transformation on the strips, which makes the optical flow parallel, can be modeled by an oblique projection of the image onto a cylindrical surface whose central axis is the trajectory of the camera. In addition, this invention uses view interpolation to generate dense intermediate views between original video frames, such that these intermediate views are used to overcome effects of motion parallax when creating panoramic mosaics.
Type: Grant
Filed: September 16, 2002
Date of Patent: February 28, 2006
Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
Inventors: Shmuel Peleg, Benny Rousso
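The strip-collection idea can be illustrated in the pure-translation special case, where strips perpendicular to the optical flow are simply vertical strips and no warping is needed (the general method in the abstract warps strips via an oblique projection onto a cylindrical surface; the frame layout and fixed strip width here are assumptions):

```python
import numpy as np

def strip_mosaic(frames, dx):
    """Build a panorama by pasting central vertical strips.

    frames: list of H x W arrays from a camera panning horizontally
            by exactly dx pixels per frame (assumed known motion).
    dx:     inter-frame displacement, also used as the strip width so
            consecutive strips tile the panorama without gaps.
    """
    H, W = frames[0].shape
    c = W // 2  # take each strip from the frame center
    strips = [f[:, c:c + dx] for f in frames]
    return np.concatenate(strips, axis=1)
```

For general motion the motion between frames is first estimated, the strip orientation follows the optical flow, and each strip is warped before pasting; this sketch keeps only the strip-selection-and-paste core.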