Patents by Inventor Roberto Cipolla
Roberto Cipolla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8437549
Abstract: An image processing apparatus robustly performs segmentation on an image including an object, such as a moving person, together with its deformation. The image processing apparatus includes: an image inputting unit which receives temporally successive images; a motion analyzing unit which calculates motions of blocks using at least two temporally different images and calculates, based on the motions of the blocks, temporal motion trajectories of the blocks in the temporally successive images; a distance calculating unit which calculates a distance indicating a similarity of the motions of the blocks, using a temporal motion trajectory of a block i and a temporal motion trajectory of a block other than the block i calculated by the motion analyzing unit; and a nonlinear space processing unit which projects the distance calculated by the distance calculating unit into a nonlinear space and performs the segmentation on a result of the projection in the nonlinear space.
Type: Grant
Filed: March 14, 2008
Date of Patent: May 7, 2013
Assignee: Panasonic Corporation
Inventors: Masahiro Iwasaki, Arasanathan Thayananthan, Roberto Cipolla
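The pipeline the abstract describes -- pairwise trajectory distances projected into a nonlinear space before segmentation -- can be sketched as follows. This is an illustrative reconstruction using a standard spectral embedding, not the patented implementation; the trajectory representation and parameters are assumptions.

```python
import numpy as np

def trajectory_distances(tracks):
    """Pairwise distance between temporal motion trajectories.

    tracks: (N, T, 2) array -- N trajectories of T (x, y) points.
    Returns an (N, N) matrix of mean point-to-point distances.
    """
    diff = tracks[:, None, :, :] - tracks[None, :, :, :]   # (N, N, T, 2)
    return np.linalg.norm(diff, axis=-1).mean(axis=-1)     # (N, N)

def spectral_embedding(D, sigma=1.0, dims=2):
    """Project the distance matrix into a nonlinear (spectral) space."""
    W = np.exp(-(D ** 2) / (2 * sigma ** 2))   # affinity from distances
    L = np.diag(W.sum(axis=1)) - W             # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dims + 1]                 # skip the trivial eigenvector
```

Rows of the embedding can then be grouped with any clustering method (e.g. k-means) to obtain the segmentation.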
-
Patent number: 8417021
Abstract: We describe methods of characterizing a set of images to determine their respective illumination, for example for recovering the 3D shape of an illuminated object. The method comprises: inputting a first set of images of the object captured from different positions; determining frontier point data from the images, this defining a plurality of frontier points on the object and, for each said frontier point, a direction of a normal to the surface of the object at the frontier point, and determining data defining the image capture positions; inputting a second set of images of said object, having substantially the same viewpoint and different illumination conditions; and characterizing the second set of images using said frontier point data to determine data comprising object reflectance parameter data and, for each image of said second set, illumination data (L) comprising data defining an illumination direction and illumination intensity for the image.
Type: Grant
Filed: October 12, 2006
Date of Patent: April 9, 2013
Assignee: Cambridge University Technical Services Limited
Inventors: Roberto Cipolla, George Vogiatzis, Paolo Favaro, Ryuji Funayama, Hiromichi Yanagihara
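The core estimation step -- recovering an illumination direction and intensity from points whose surface normals are known (such as frontier points) -- reduces, under a Lambertian assumption, to a linear least-squares problem. A minimal sketch, with the Lambertian model and known normals as assumptions:

```python
import numpy as np

def estimate_light(normals, intensities):
    """Least-squares Lambertian illumination estimate.

    normals:     (K, 3) unit surface normals at points of known geometry.
    intensities: (K,)   observed pixel intensities at those points.

    Under I = rho * (n . L), solves for the scaled light vector rho*L:
    its norm gives the illumination intensity scale and its direction
    the illumination direction.
    """
    scaled_light, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return scaled_light
```

At least three non-coplanar normals are needed for a unique solution.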
-
Publication number: 20130084008
Abstract: A method (100) and system (300) are described for processing video data comprising a plurality of images. The method and apparatus obtain a labelling of a plurality of objects or regions in an image of a sequence of images, followed by label propagation to the other images in the sequence based on an inference step and a model.
Type: Application
Filed: February 28, 2011
Publication date: April 4, 2013
Applicants: CAMBRIDGE ENTERPRISE LIMITED, TOYOTA MOTOR EUROPE NV/SA
Inventors: Gabriel Othmezouri, Ichiro Sakata, Roberto Cipolla, Vijay Badrinarayanan
-
Publication number: 20130016913
Abstract: A method of comparing two object poses, wherein each object pose is expressed in terms of position, orientation and scale with respect to a common coordinate system, the method comprising: calculating a distance between the two object poses, the distance being calculated using the distance function:

d_sRt(X, Y) = d_s²(X, Y)/σ_s² + d_r²(X, Y)/σ_r² + d_t²(X, Y)/σ_t²

Type: Application
Filed: February 28, 2012
Publication date: January 17, 2013
Applicant: Kabushiki Kaisha Toshiba
Inventors: Minh-Tri Pham, Oliver Woodford, Frank Perbet, Atsuto Maki, Bjorn Stenger, Roberto Cipolla
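The distance function combines per-component (scale, rotation, translation) distances, each normalised by its own variance. A minimal sketch follows; the specific component metrics used here (log-scale ratio, quaternion angle, Euclidean translation distance) are illustrative assumptions, not necessarily the patented choices:

```python
import numpy as np

def pose_distance(X, Y, sigma_s=1.0, sigma_r=1.0, sigma_t=1.0):
    """Variance-normalised distance between two similarity poses.

    Each pose is a (scale, unit quaternion, translation) triple.
    Returns d_s^2/sigma_s^2 + d_r^2/sigma_r^2 + d_t^2/sigma_t^2.
    """
    sx, qx, tx = X
    sy, qy, ty = Y
    d_s = abs(np.log(sx / sy))                              # scale distance
    d_r = 2.0 * np.arccos(np.clip(abs(np.dot(qx, qy)), 0.0, 1.0))  # rotation
    d_t = np.linalg.norm(np.asarray(tx) - np.asarray(ty))   # translation
    return d_s**2 / sigma_s**2 + d_r**2 / sigma_r**2 + d_t**2 / sigma_t**2
```

The sigma weights let the three otherwise incommensurable components (dimensionless scale, radians, length units) be summed on a common footing.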
-
Publication number: 20120287247
Abstract: A system for capturing 3D image data of a scene, including three light sources, each configured to emit light at a different wavelength to the other two sources and to illuminate the scene to be captured; a first video camera configured to receive light from the light sources which has been reflected from the scene, to isolate light received from each of the light sources, and to output data relating to the image captured for each of the three light sources; a depth sensor configured to capture depth map data of the scene; and an analysis unit configured to receive data from the first video camera and process the data to obtain data relating to a normal field obtained from the images captured for each of the three light sources, and to combine the normal field data with the depth map data to capture 3D image data of the scene.
Type: Application
Filed: February 29, 2012
Publication date: November 15, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Bjorn Stenger, Atsuto Maki, Frank Perbet, Oliver Woodford, Roberto Cipolla, Robert Anderson
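With three differently coloured lights, a single RGB frame yields three intensity measurements per pixel, from which a normal field follows by inverting the light-direction matrix (multispectral photometric stereo). A minimal sketch, assuming a Lambertian surface, calibrated light directions, and ideal per-channel isolation of the three sources:

```python
import numpy as np

def normals_from_rgb(frame, L):
    """Per-pixel surface normals from one RGB frame under three lights.

    frame: (H, W, 3) image; channel k isolates the light of wavelength k.
    L:     (3, 3) matrix whose row k is the direction of light k.

    Solves L @ (rho * n) = c per pixel, then normalises to unit normals.
    """
    g = np.einsum('ij,hwj->hwi', np.linalg.inv(L), frame)  # rho * n per pixel
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.clip(norm, 1e-12, None)
```

In the patented system this normal field is then fused with the depth sensor's coarse depth map; the fusion step is not shown here.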
-
Publication number: 20120106794
Abstract: A trajectory estimation apparatus includes: an image acceptance unit which accepts images that are temporally sequential and included in the video; a hierarchical subregion generating unit which generates subregions at hierarchical levels by performing hierarchical segmentation on each of the images accepted by the image acceptance unit such that, among subregions belonging to hierarchical levels different from each other, a spatially larger subregion includes spatially smaller subregions; and a representative trajectory estimation unit which estimates, as a representative trajectory, a trajectory, in the video, of a subregion included in a certain image, by searching for a subregion that is most similar to the subregion included in the certain image, across hierarchical levels in an image different from the certain image.
Type: Application
Filed: December 23, 2011
Publication date: May 3, 2012
Inventors: Masahiro Iwasaki, Kunio Nobori, Ayako Komoto, Fabio Galasso, Roberto Cipolla
-
Publication number: 20120082381
Abstract: According to one embodiment, a method of classifying a feature in a video sequence includes selecting a target region of a frame of the video sequence, where the target region contains the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within the cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
Type: Application
Filed: September 22, 2011
Publication date: April 5, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Atsuto Maki, Frank Perbet, Bjorn Stenger, Oliver Woodford, Roberto Cipolla
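The cell-histogram step can be sketched as follows. This is an illustrative reconstruction, not the patented method; the grid size, bin count, and histogram-intersection comparison are assumptions:

```python
import numpy as np

def cell_flow_histograms(flow, grid=(3, 3), bins=8):
    """Orientation histogram of optic flow for each cell of a target region.

    flow: (H, W, 2) per-pixel (dx, dy) optic flow inside the region.
    Returns a (grid_rows * grid_cols, bins) array of normalised histograms.
    """
    H, W, _ = flow.shape
    ang = np.arctan2(flow[..., 1], flow[..., 0])            # in [-pi, pi]
    hists = []
    for rows in np.array_split(np.arange(H), grid[0]):
        for cols in np.array_split(np.arange(W), grid[1]):
            cell = ang[np.ix_(rows, cols)].ravel()
            h, _ = np.histogram(cell, bins=bins, range=(-np.pi, np.pi))
            hists.append(h / max(h.sum(), 1))
    return np.array(hists)

def compare_cells(hists, i, j):
    """Histogram-intersection similarity between cells i and j (1 = identical)."""
    return np.minimum(hists[i], hists[j]).sum()
```

The resulting pairwise similarities form the feature vector fed to the classifier.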
-
Publication number: 20110292179
Abstract: According to one embodiment, an apparatus for determining the gradients of the surface normals of an object includes a receiving unit, establishing unit, determining unit, and selecting unit. The receiving unit is configured to receive data of three 2D images of the object, wherein each image is taken under illumination from a different direction. The establishing unit is configured to establish which pixels of the image are in shadow, such that only data from two images is available for these pixels. The determining unit is configured to determine a range of possible solutions for the gradient of the surface normal of a shadowed pixel using the data available from the two images. The selecting unit is configured to select a solution for the gradient using the integrability of the gradient field over an area of the object as a constraint and minimising a cost function.
Type: Application
Filed: April 8, 2011
Publication date: December 1, 2011
Inventors: Carlos Hernandez, George Vogiatzis, Roberto Cipolla
-
Patent number: 7970205
Abstract: An image processing device which simultaneously secures and extracts a background image, at least two object images, a shape of each object image and motion of each object image, from among plural images, the image processing device including an image input unit (101) which accepts input of plural images; a hidden parameter estimation unit (102) which estimates a hidden parameter based on the plural images and a constraint enforcement parameter, which indicates a condition of at least one of the hidden parameters, using an iterative learning method; a constraint enforcement parameter learning unit (103) which learns a constraint enforcement parameter related to the hidden parameter using an estimation result from the hidden parameter estimation unit as a training signal; and a complementary learning unit (104) which causes the estimation of the hidden parameter and the learning of the constraint enforcement parameter, which utilize the result from the learning of the hidden parameter, to be iterated.
Type: Grant
Filed: December 1, 2006
Date of Patent: June 28, 2011
Assignee: Panasonic Corporation
Inventors: Masahiro Iwasaki, Arasanathan Thayananthan, Roberto Cipolla
-
Publication number: 20110013840
Abstract: To provide an image processing apparatus which robustly performs segmentation on an image including an object, such as a moving person, together with its deformation. The image processing apparatus includes: an image inputting unit (101) which receives temporally successive images; a motion analyzing unit (102) which calculates motions of blocks using at least two temporally different images and calculates, based on the motions of the blocks, temporal motion trajectories of the blocks in the temporally successive images; a distance calculating unit (103) which calculates a distance which indicates a similarity of the motions of the blocks, using a temporal motion trajectory of a block i and a temporal motion trajectory of a block other than the block i calculated by the motion analyzing unit; and a nonlinear space processing unit (104) which projects the distance calculated by the distance calculating unit into a nonlinear space and performs the segmentation on a result of the projection in the nonlinear space.
Type: Application
Filed: March 14, 2008
Publication date: January 20, 2011
Inventors: Masahiro Iwasaki, Arasanathan Thayananthan, Roberto Cipolla
-
Publication number: 20100239123
Abstract: A method (100) and system (300) are described for processing video data comprising a plurality of images. The method (100) comprises obtaining (104, 106), for each of the plurality of images, a segmentation into a plurality of regions and a set of keypoints, and tracking (108) at least one region between a first image and a subsequent image, resulting in a matched region in the subsequent image, taking into account a matching between keypoints in the first image and the subsequent image. The latter results in accurate tracking of regions. Furthermore, the method may optionally also perform label propagation taking into account keypoint tracking.
Type: Application
Filed: October 13, 2008
Publication date: September 23, 2010
Inventors: Ryuji Funayama, Hiromichi Yanagihara, Julien Fauqueur, Gabriel Brostow, Roberto Cipolla
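A common way to realise region tracking via keypoint matches is to let each matched keypoint vote for the region that contains its match in the next frame. A minimal sketch under that assumption (the voting scheme is illustrative, not the patented procedure):

```python
from collections import Counter

def match_region(region_keypoints, matches, regions_next):
    """Track a region between frames by keypoint voting.

    region_keypoints: keypoint ids lying inside the tracked region (frame t).
    matches:          dict mapping keypoint id (frame t) -> keypoint id (t+1).
    regions_next:     dict mapping keypoint id (t+1) -> region label (t+1).

    Returns the next-frame region receiving the most matched keypoints,
    or None if no keypoint matched.
    """
    votes = Counter(regions_next[matches[k]]
                    for k in region_keypoints if k in matches)
    return votes.most_common(1)[0][0] if votes else None
```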
-
Publication number: 20100128926
Abstract: An image processing device which simultaneously secures and extracts a background image, at least two object images, a shape of each object image and motion of each object image, from among plural images, the image processing device including an image input unit (101) which accepts input of plural images; a hidden parameter estimation unit (102) which estimates a hidden parameter based on the plural images and a constraint enforcement parameter, which indicates a condition of at least one of the hidden parameters, using an iterative learning method; a constraint enforcement parameter learning unit (103) which learns a constraint enforcement parameter related to the hidden parameter using an estimation result from the hidden parameter estimation unit as a training signal; and a complementary learning unit (104) which causes the estimation of the hidden parameter and the learning of the constraint enforcement parameter, which utilize the result from the learning of the hidden parameter, to be iterated.
Type: Application
Filed: December 1, 2006
Publication date: May 27, 2010
Inventors: Masahiro Iwasaki, Arasanathan Thayananthan, Roberto Cipolla
-
Patent number: 7577280
Abstract: A method of measurement of mitotic activity from histopathological specimen images initially identifies image pixels with luminances corresponding to mitotic figures and selects from them a reference pixel to provide a reference color. Pixels similar to the reference color are located; image regions are grown on located pixels by adding pixels satisfying thresholds of differences to background and image region luminances. Grown regions are thresholded in area, compactness, width/height ratio, luminance ratio to background, and difference between areas grown with perturbed thresholds. Grown regions are counted as indicating mitotic figures by thresholding region number, area and luminance. An alternative method of measuring mitotic activity measures a profile of an image region and counts the image region as corresponding to a mitotic figure if its profile is above a threshold at an intensity associated with mitotic figures.
Type: Grant
Filed: November 13, 2003
Date of Patent: August 18, 2009
Assignee: Qinetiq Limited
Inventors: Christelle Marie Guittet, Paul Gerard Ducksbury, Maria Petrou, Anastasios Kesidis, Roberto Cipolla, Margaret Jai Varga
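The region-growing step at the heart of the abstract can be sketched as a simple flood fill gated by a luminance threshold. This is a simplified stand-in: the patent uses separate thresholds against background and region luminances, while the sketch below uses a single tolerance against the seed luminance:

```python
import numpy as np
from collections import deque

def grow_region(lum, seed, tol):
    """4-connected region growing from a seed pixel.

    lum:  (H, W) luminance image.
    seed: (row, col) of the starting pixel.
    tol:  maximum allowed luminance difference from the seed pixel.

    Returns a boolean mask of the grown region.
    """
    H, W = lum.shape
    ref = float(lum[seed])
    mask = np.zeros((H, W), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < H and 0 <= nx < W and not mask[ny, nx]
                    and abs(float(lum[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Grown regions would then be filtered by the area, compactness, and ratio thresholds the abstract lists before being counted as mitotic figures.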
-
Publication number: 20090169096
Abstract: We describe methods of characterising a set of images to determine their respective illumination, for example for recovering the 3D shape of an illuminated object. The method comprises: inputting a first set of images of the object captured from different positions; determining frontier point data from the images, this defining a plurality of frontier points on the object and, for each said frontier point, a direction of a normal to the surface of the object at the frontier point, and determining data defining the image capture positions; inputting a second set of images of said object, having substantially the same viewpoint and different illumination conditions; and characterising the second set of images using said frontier point data to determine data comprising object reflectance parameter data and, for each image of said second set, illumination data (L) comprising data defining an illumination direction and illumination intensity for the image.
Type: Application
Filed: October 12, 2006
Publication date: July 2, 2009
Inventors: Roberto Cipolla, George Vogiatzis, Paolo Favaro, Ryuji Funayama, Hiromichi Yanagihara
-
Publication number: 20090073259
Abstract: An imaging system for imaging a moving three-dimensional object, the system comprising: at least three light sources irradiating the object from three different angles; a video camera provided to collect radiation from said three light sources which has been reflected from said object; and an image processor, wherein each light source emits radiation of a different frequency and said image processor is configured to distinguish between the reflected signals from the three different light sources.
Type: Application
Filed: September 19, 2008
Publication date: March 19, 2009
Applicant: Kabushiki Kaisha Toshiba
Inventors: Carlos Hernandez, Gabriel Julian Brostow, Roberto Cipolla
-
Patent number: 5581276
Abstract: A 3D human interface apparatus using motion recognition based on dynamic image processing, in which the motion of an operator-operated object serving as an imaging target can be recognized accurately and stably.
Type: Grant
Filed: September 8, 1993
Date of Patent: December 3, 1996
Assignee: Kabushiki Kaisha Toshiba
Inventors: Roberto Cipolla, Yasukazu Okamoto, Yoshinori Kuno