Patents by Inventor Bjorn Stenger
Bjorn Stenger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140210830
Abstract: A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, said method comprising: providing an input related to the speech which is to be output by the movement of the lips; dividing said input into a sequence of acoustic units; selecting expression characteristics for the inputted text; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector, said image vector comprising a plurality of parameters which define a face of said head; and outputting said sequence of image vectors as video such that the mouth of said head moves to mime the speech associated with the input text with the selected expression, wherein a parameter of a predetermined type of each probability distribution in said selected expression is expressed as a weighted sum of pa…
Type: Application
Filed: January 29, 2014
Publication date: July 31, 2014
Applicant: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Bjorn Stenger, Robert Anderson, Roberto Cipolla
-
Publication number: 20140210831
Abstract: A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, said method comprising: providing an input related to the speech which is to be output by the movement of the mouth; dividing said input into a sequence of acoustic units; selecting an expression to be output by said head; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector for a selected expression, said image vector comprising a plurality of parameters which define a face of said head; and outputting said sequence of image vectors as video such that the mouth of said head moves to mime the speech associated with the input text with the selected expression, wherein the image parameters define the face of a head using an appearance model comprising a plurality of shape modes and…
Type: Application
Filed: January 29, 2014
Publication date: July 31, 2014
Applicant: Kabushiki Kaisha Toshiba
Inventors: Bjorn Stenger, Robert Anderson, Javier Latorre-Martinez, Vincent Ping Leung Wan, Roberto Cipolla
-
Patent number: 8761472
Abstract: An object location method includes: analyzing data including plural objects, each including plural features, and extracting the features from the data; matching features stored in a database with those extracted from the data, and deriving a prediction of the object, each feature extracted from the data providing a vote for at least one prediction; expressing the prediction to be analyzed in a Hough space, the objects to be analyzed being described by n parameters and each parameter defining a dimension of the Hough space, where n is an integer of at least one; providing a constraint by applying a higher weighting to votes which agree with votes from other features than to those votes which do not; finding local maxima in the Hough space using the weighted votes; and identifying the predictions associated with the local maxima to locate the objects provided in the data.
Type: Grant
Filed: February 29, 2012
Date of Patent: June 24, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventors: Oliver Woodford, Minh-Tri Pham, Atsuto Maki, Frank Perbet, Bjorn Stenger
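As a rough illustration of the voting scheme in the abstract above, the sketch below accumulates feature votes in a discretised 2-D Hough space and up-weights votes that agree with votes from other features. The grid discretisation and the particular agreement rule (scaling each vote by the support already present in its cell) are simplifying assumptions, not the patented method itself.

```python
import numpy as np

def hough_locate(votes, shape):
    """Locate the best-supported object hypothesis in a discretised
    Hough space. votes: list of ((cell indices), weight) predictions
    cast by features; shape: dimensions of the Hough accumulator."""
    acc = np.zeros(shape)
    for cell, w in votes:
        acc[cell] += w                     # plain vote accumulation
    # Constraint: a vote that agrees with votes from other features is
    # weighted higher, here by the total support already in its cell.
    weighted = np.zeros(shape)
    for cell, w in votes:
        weighted[cell] += w * acc[cell]
    # Return the cell of the strongest (here: global) maximum.
    return np.unravel_index(np.argmax(weighted), shape)

# Two agreeing weak votes beat one stronger isolated vote:
votes = [((2, 3), 1.0), ((2, 3), 1.0), ((5, 5), 1.5)]
print(hough_locate(votes, (8, 8)))  # -> (2, 3)
```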
-
Patent number: 8750614
Abstract: According to one embodiment, a method of classifying a feature in a video sequence includes selecting a target region of a frame of the video sequence, where the target region contains the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within the cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
Type: Grant
Filed: September 22, 2011
Date of Patent: June 10, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventors: Atsuto Maki, Frank Perbet, Bjorn Stenger, Oliver Woodford, Roberto Cipolla
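A minimal sketch of this pipeline, assuming the optic flow for the target region is given as a dense NumPy array. The 2×2 cell layout, the 8-bin direction histograms, the L1 histogram distance, the threshold, and the "rigid"/"articulated" class labels are all illustrative assumptions.

```python
import numpy as np

def flow_histograms(flow, n_cells=2, n_bins=8):
    """Divide a region's optic-flow field (H x W x 2) into a grid of
    cells and build a normalised histogram of flow directions per cell."""
    h, w, _ = flow.shape
    hists = []
    for i in range(n_cells):
        for j in range(n_cells):
            cell = flow[i*h//n_cells:(i+1)*h//n_cells,
                        j*w//n_cells:(j+1)*w//n_cells]
            ang = np.arctan2(cell[..., 1], cell[..., 0])
            hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
            hists.append(hist / max(hist.sum(), 1))
    return hists

def classify(flow, threshold=0.5):
    """Compare cell histograms pairwise: similar flow in every cell
    suggests rigid motion; dissimilar cells suggest articulated motion."""
    hists = flow_histograms(flow)
    dists = [np.abs(a - b).sum() / 2          # L1 distance in [0, 1]
             for k, a in enumerate(hists) for b in hists[k+1:]]
    return "rigid" if max(dists) < threshold else "articulated"
```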
-
Publication number: 20140125773
Abstract: A method of calculating a similarity measure between first and second image patches, which include respective first and second intensity values associated with respective elements of the patches, and which have a corresponding size and shape such that each element of the first image patch corresponds to an element of the second image patch. The method determines a set of sub-regions on the second image patch, each corresponding to elements of the first image patch having first intensity values within a range defined for that sub-region; calculates, for each sub-region of the set, the variance over all of the elements of that sub-region of a function of the second intensity value associated with each element and the first intensity value associated with the corresponding element of the first image patch; and calculates the similarity measure as the sum over all sub-regions of the calculated variances.
Type: Application
Filed: November 5, 2013
Publication date: May 8, 2014
Applicant: Kabushiki Kaisha Toshiba
Inventors: Atsuto Maki, Riccardo Gherardi, Oliver Woodford, Frank Perbet, Minh-Tri Pham, Bjorn Stenger, Sam Johnson, Roberto Cipolla
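A sketch of one possible reading of this measure: the abstract leaves the per-element function and the intensity ranges open, so the version below assumes a plain difference of intensities and equal-width ranges over the first patch's intensity span. The function name and both choices are illustrative assumptions.

```python
import numpy as np

def patch_similarity(p1, p2, n_regions=4):
    """Similarity between two equally shaped patches: bucket elements
    into sub-regions by the first patch's intensity range, then sum the
    per-region variances of (p2 - p1). Lower values mean more similar;
    a constant intensity offset contributes nothing (zero variance)."""
    p1, p2 = p1.ravel().astype(float), p2.ravel().astype(float)
    # Equal-width intensity ranges covering p1's values (an assumption).
    edges = np.linspace(p1.min(), p1.max() + 1e-9, n_regions + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p1 >= lo) & (p1 < hi)
        if mask.sum() > 1:                  # variance needs >1 element
            total += np.var(p2[mask] - p1[mask])
    return total
```

Note the design consequence the variance-of-differences form buys: two patches that differ only by a uniform brightness shift still score a perfect 0, which is the usual motivation for this style of robust matching cost.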
-
Patent number: 8712154
Abstract: A method of dividing an image into plural superpixels of plural pixels of the image. The method calculates an initial set of weights from a measure of similarity between pairs of pixels, from which a resultant set of weights is calculated for pairs of pixels that are less than a threshold distance apart on the image. The calculation computes a weight for a pair of pixels as the sum, over a set of third pixels, of the product of the initial weight of the first pixel of the pair with the third pixel and the weight of the third pixel with the second pixel. Each weight is then subjected to a power coefficient operation. The resultant set of weights and the initial set of weights are then compared to check for convergence. If the weights converge, the converged set of weights is used to divide the image into superpixels.
Type: Grant
Filed: February 29, 2012
Date of Patent: April 29, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventors: Frank Perbet, Atsuto Maki, Minh-Tri Pham, Bjorn Stenger, Oliver Woodford
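The update described above (sum over third pixels = a matrix product of the weights with themselves, followed by an element-wise power) resembles Markov-clustering-style expansion and inflation. The sketch below iterates it on a dense pixel-similarity matrix; the row normalisation and the choice of power are added assumptions, and the distance-threshold sparsification is omitted for brevity.

```python
import numpy as np

def superpixel_weights(W, power=2.0, tol=1e-6, max_iter=100):
    """Iterate the weight update until convergence. W: square matrix of
    pairwise pixel similarities. Each step computes w'(i,j) as the sum
    over third pixels k of w(i,k) * w(k,j) (i.e. W @ W), applies a
    power-coefficient operation element-wise, and re-normalises rows."""
    W = W / W.sum(axis=1, keepdims=True)       # normalise (assumption)
    for _ in range(max_iter):
        new = W @ W                            # sum over third pixels
        new = new ** power                     # power-coefficient step
        new = new / new.sum(axis=1, keepdims=True)
        if np.abs(new - W).max() < tol:
            return new                         # converged weights
        W = new
    return W
```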
-
Publication number: 20130051639
Abstract: An object location method includes: analysing data including plural objects, each including plural features, and extracting the features from the data; matching features stored in a database with those extracted from the data, and deriving a prediction of the object, each feature extracted from the data providing a vote for at least one prediction; expressing the prediction to be analysed in a Hough space, the objects to be analysed being described by n parameters and each parameter defining a dimension of the Hough space, where n is an integer of at least one; providing a constraint by applying a higher weighting to votes which agree with votes from other features than to those votes which do not; finding local maxima in the Hough space using the weighted votes; and identifying the predictions associated with the local maxima to locate the objects provided in the data.
Type: Application
Filed: February 29, 2012
Publication date: February 28, 2013
Applicant: Kabushiki Kaisha Toshiba
Inventors: Oliver Woodford, Minh-Tri Pham, Atsuto Maki, Frank Perbet, Bjorn Stenger
-
Publication number: 20130016913
Abstract: A method of comparing two object poses, wherein each object pose is expressed in terms of position, orientation and scale with respect to a common coordinate system, the method comprising: calculating a distance between the two object poses, the distance being calculated using the distance function:

    d_sRt(X, Y) = d_s²(X, Y)/σ_s² + d_r²(X, Y)/σ_r² + d_t²(X, Y)/σ_t²

Type: Application
Filed: February 28, 2012
Publication date: January 17, 2013
Applicant: Kabushiki Kaisha Toshiba
Inventors: Minh-Tri Pham, Oliver Woodford, Frank Perbet, Atsuto Maki, Bjorn Stenger, Roberto Cipolla
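The distance combines scale, rotation, and translation terms, each normalised by its own σ. A sketch follows; the abstract does not define the component distances, so the log-ratio of scales, the rotation angle recovered from the trace of RᵀR', and the Euclidean translation distance are assumptions chosen for illustration.

```python
import numpy as np

def pose_distance(X, Y, sigma_s=1.0, sigma_r=1.0, sigma_t=1.0):
    """Distance between poses X and Y, each given as
    (scale, 3x3 rotation matrix, translation vector): the sum of the
    squared per-component distances, each divided by its sigma squared."""
    sx, Rx, tx = X
    sy, Ry, ty = Y
    d_s = abs(np.log(sx / sy))                        # scale distance
    cos = (np.trace(Rx.T @ Ry) - 1.0) / 2.0
    d_r = np.arccos(np.clip(cos, -1.0, 1.0))          # rotation angle
    d_t = np.linalg.norm(np.asarray(tx) - np.asarray(ty))  # translation
    return d_s**2 / sigma_s**2 + d_r**2 / sigma_r**2 + d_t**2 / sigma_t**2
```

The σ values act as units: choosing, say, a larger sigma_t makes the measure more tolerant of translation differences relative to scale and rotation differences.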
-
Publication number: 20120287247
Abstract: A system for capturing 3D image data of a scene, including three light sources, each configured to emit light at a different wavelength to the other two sources and to illuminate the scene to be captured; a first video camera configured to receive light from the light sources which has been reflected from the scene, to isolate light received from each of the light sources, and to output data relating to the image captured for each of the three light sources; a depth sensor configured to capture depth map data of the scene; and an analysis unit configured to receive data from the first video camera and process the data to obtain data relating to a normal field obtained from the images captured for each of the three light sources, and to combine the normal field data with the depth map data to capture 3D image data of the scene.
Type: Application
Filed: February 29, 2012
Publication date: November 15, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Bjorn Stenger, Atsuto Maki, Frank Perbet, Oliver Woodford, Roberto Cipolla, Robert Anderson
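The normal-field step is essentially three-source photometric stereo: with one image per light and known light directions, each pixel's normal satisfies a 3×3 linear system. The sketch below assumes a Lambertian surface and omits the patented fusion with the depth map; the function name and interfaces are illustrative.

```python
import numpy as np

def normals_from_three_lights(images, light_dirs):
    """Per-pixel surface normals from three images, each lit by one of
    three light sources with known direction: solve L n = i per pixel,
    where L stacks the light directions and i the observed intensities."""
    L = np.asarray(light_dirs, dtype=float)        # 3 x 3
    I = np.stack([im.ravel() for im in images])    # 3 x n_pixels
    N = np.linalg.solve(L, I)                      # unnormalised normals
    N /= np.linalg.norm(N, axis=0, keepdims=True)  # unit length
    return N.T.reshape(images[0].shape + (3,))     # H x W x 3
```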
-
Publication number: 20120251003
Abstract: A method of dividing an image into plural superpixels of plural pixels of the image. The method calculates an initial set of weights from a measure of similarity between pairs of pixels, from which a resultant set of weights is calculated for pairs of pixels that are less than a threshold distance apart on the image. The calculation computes a weight for a pair of pixels as the sum, over a set of third pixels, of the product of the initial weight of the first pixel of the pair with the third pixel and the weight of the third pixel with the second pixel. Each weight is then subjected to a power coefficient operation. The resultant set of weights and the initial set of weights are then compared to check for convergence. If the weights converge, the converged set of weights is used to divide the image into superpixels.
Type: Application
Filed: February 29, 2012
Publication date: October 4, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Frank Perbet, Atsuto Maki, Minh-Tri Pham, Bjorn Stenger, Oliver Woodford
-
Publication number: 20120224744
Abstract: A moving feature is recognized in a video sequence by comparing its movement with a characteristic pattern. Possible trajectories through the video sequence are generated for an object by identifying potential matches of points in pairs of frames of the video sequence. When looking for the characteristic pattern, a number of possible trajectories are analyzed. The possible trajectories may be selected so that they are suitable for analysis; this may include selecting longer trajectories, which can be easier to analyze. In this way, a continuous trajectory is generated even where the tracked object is momentarily behind another object.
Type: Application
Filed: August 6, 2009
Publication date: September 6, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Frank Perbet, Atsuto Maki, Bjorn Stenger
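To illustrate how matching points across frames can yield a single continuous trajectory across a brief occlusion, here is a greedy nearest-neighbour linker with a gap tolerance. The greedy strategy, the distance threshold, and the gap limit are simplifying assumptions; the patent's trajectory generation and selection are more general.

```python
import math

def link_trajectories(frames, max_dist=2.0, max_gap=2):
    """frames: one list of (x, y) detections per frame. Each open track
    is extended with the nearest detection within max_dist; a track may
    skip up to max_gap frames, so a momentarily occluded object still
    produces one continuous trajectory. Returns lists of (frame, x, y)."""
    tracks = []                       # {"pts": [(frame, x, y)], "last": f}
    for f, dets in enumerate(frames):
        unused = list(dets)
        for tr in tracks:
            if f - tr["last"] > max_gap:
                continue              # track lapsed: occlusion too long
            _, x, y = tr["pts"][-1]
            best = min(unused,
                       key=lambda p: math.hypot(p[0] - x, p[1] - y),
                       default=None)
            if best and math.hypot(best[0] - x, best[1] - y) <= max_dist:
                tr["pts"].append((f, *best))
                tr["last"] = f
                unused.remove(best)
        for p in unused:              # unmatched detections start tracks
            tracks.append({"pts": [(f, *p)], "last": f})
    return [t["pts"] for t in tracks]
```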
-
Publication number: 20120219184
Abstract: A characteristic motion in a video is identified by determining pairs of moving features that have an indicative relationship between the motions of the two moving features in the pair. For example, the motion of a pedestrian is identified by an indicative relationship between the motions of the pedestrian's feet. This indicative relationship may be that one of the feet moves relative to the surroundings while the other remains stationary.
Type: Application
Filed: August 6, 2009
Publication date: August 30, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Atsuto Maki, Frank Perbet, Bjorn Stenger
-
Publication number: 20120082381
Abstract: According to one embodiment, a method of classifying a feature in a video sequence includes selecting a target region of a frame of the video sequence, where the target region contains the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within the cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
Type: Application
Filed: September 22, 2011
Publication date: April 5, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Atsuto Maki, Frank Perbet, Bjorn Stenger, Oliver Woodford, Roberto Cipolla
-
Patent number: 7844921
Abstract: An apparatus for a user to interface with a control object apparatus by a posture or a motion of the user's physical part. An image input unit inputs an image including the user's physical part. A gesture recognition unit recognizes the posture or the motion of the user's physical part from the image. A control unit controls the control object apparatus based on an indication corresponding to the posture or the motion. A gesture information display unit displays an exemplary image of the posture or the motion recognized, for the user's reference in indicating to the control object apparatus.
Type: Grant
Filed: June 1, 2007
Date of Patent: November 30, 2010
Assignee: Kabushiki Kaisha Toshiba
Inventors: Tsukasa Ike, Yasuhiro Taniguchi, Ryuzo Okada, Nobuhisa Kishikawa, Kentaro Yokoi, Mayumi Yuasa, Bjorn Stenger
-
Publication number: 20080052643
Abstract: An apparatus for a user to interface with a control object apparatus by a posture or a motion of the user's physical part. An image input unit inputs an image including the user's physical part. A gesture recognition unit recognizes the posture or the motion of the user's physical part from the image. A control unit controls the control object apparatus based on an indication corresponding to the posture or the motion. A gesture information display unit displays an exemplary image of the posture or the motion recognized, for the user's reference in indicating to the control object apparatus.
Type: Application
Filed: June 1, 2007
Publication date: February 28, 2008
Applicant: Kabushiki Kaisha Toshiba
Inventors: Tsukasa Ike, Yasuhiro Taniguchi, Ryuzo Okada, Nobuhisa Kishikawa, Kentaro Yokoi, Mayumi Yuasa, Bjorn Stenger
-
Publication number: 20060284837
Abstract: A similarity calculation unit calculates a similarity between a hand candidate area image and a template image. A consistency probability calculation unit and an inconsistency probability calculation unit use probability distributions of similarities for the case where the hand shapes of the template image and the hand candidate area image are consistent with each other and the case where they are not, and calculate a consistency probability and an inconsistency probability of hand shapes between each of the template images and the hand candidate area image. A hand shape determination unit determines the hand shape most similar to the hand candidate area image based on the consistency probability and the inconsistency probability calculated for each hand shape, and outputs it as the recognition result.
Type: Application
Filed: June 8, 2006
Publication date: December 21, 2006
Inventors: Bjorn Stenger, Tsukasa Ike
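The decision rule above amounts to comparing, for each candidate hand shape, how probable the observed template similarity is under a "shapes consistent" distribution versus a "shapes inconsistent" distribution. A sketch follows; modelling the two distributions as Gaussians and deciding by likelihood ratio are assumptions, since the abstract only requires the two probability distributions.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian probability density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def recognise_hand_shape(similarities, dists):
    """similarities: {shape: similarity to that shape's template}.
    dists: {shape: ((mu_c, sd_c), (mu_i, sd_i))}, Gaussian models of the
    similarity when shapes are consistent vs. inconsistent (assumption).
    Returns the shape with the highest consistency-to-inconsistency
    likelihood ratio, as the recognition result."""
    best_shape, best_ratio = None, float("-inf")
    for shape, s in similarities.items():
        (mc, sc), (mi, si) = dists[shape]
        ratio = gaussian(s, mc, sc) / gaussian(s, mi, si)
        if ratio > best_ratio:
            best_shape, best_ratio = shape, ratio
    return best_shape
```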