Patents by Inventor Bjorn Stenger

Bjorn Stenger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140210830
    Abstract: A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, said method comprising: providing an input related to the speech which is to be output by the movement of the lips; dividing said input into a sequence of acoustic units; selecting expression characteristics for the inputted text; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector, said image vector comprising a plurality of parameters which define a face of said head; and outputting said sequence of image vectors as video such that the mouth of said head moves to mime the speech associated with the input text with the selected expression, wherein a parameter of a predetermined type of each probability distribution in said selected expression is expressed as a weighted sum of pa
    Type: Application
    Filed: January 29, 2014
    Publication date: July 31, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Javier LATORRE-MARTINEZ, Vincent Ping Leung Wan, Bjorn Stenger, Robert Anderson, Roberto Cipolla
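
The abstract above describes expressing a model parameter for the selected expression as a weighted sum of per-expression parameters. Below is a minimal Python sketch of that weighted-sum step only; the cluster means and expression weights are purely illustrative values, not the patented model.

import numpy as np

def expression_weighted_mean(cluster_means, expression_weights):
    """Combine per-expression cluster means into one output-distribution mean.

    cluster_means:      (K, D) array, one D-dimensional mean per expression cluster.
    expression_weights: (K,) array of weights selected for the target expression.
    """
    cluster_means = np.asarray(cluster_means, dtype=float)
    w = np.asarray(expression_weights, dtype=float)
    return w @ cluster_means  # weighted sum over the expression clusters

# Example: three expression clusters, 4-dimensional image-parameter means.
means = np.random.rand(3, 4)
weights = np.array([0.6, 0.3, 0.1])   # hypothetical weights for one expression
print(expression_weighted_mean(means, weights))
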
  • Publication number: 20140210831
    Abstract: A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, said method comprising: providing an input related to the speech which is to be output by the movement of the mouth; dividing said input into a sequence of acoustic units; selecting an expression to be output by said head; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector for a selected expression, said image vector comprising a plurality of parameters which define a face of said head; and outputting said sequence of image vectors as video such that the mouth of said head moves to mime the speech associated with the input text with the selected expression, wherein the image parameters define the face of a head using an appearance model comprising a plurality of shape modes and
    Type: Application
    Filed: January 29, 2014
    Publication date: July 31, 2014
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Bjorn Stenger, Robert Anderson, Javier Latorre-Martinez, Vincent Ping Leung Wan, Roberto Cipolla
  • Patent number: 8761472
    Abstract: An object location method includes: analyzing data including plural objects each including plural features, and extracting the features from the data; matching features stored in a database with those extracted from the data, and deriving a prediction of the object, each feature extracted from the data providing a vote for at least one prediction; expressing the prediction to be analyzed in a Hough space, the objects to be analyzed being described by n parameters and each parameter defining a dimension of the Hough space, n is an integer of at least one; providing a constraint by applying a higher weighting to votes which agree with votes from other features than those votes which do not agree with votes from other features; finding local maxima in the Hough space using the weighted votes; and identifying the predictions associated with the local maxima to locate the objects provided in the data.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: June 24, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Oliver Woodford, Minh-Tri Pham, Atsuto Maki, Frank Perbet, Bjorn Stenger
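
A minimal Python sketch of the voting scheme the abstract above describes, under illustrative assumptions: two pose parameters (n = 2), a simple agreement count for the consistency weighting, and a coarse grid accumulator with 8-neighbourhood maxima. It is not the patented implementation.

import numpy as np

def locate_objects(votes, bin_size=1.0, agreement_radius=1.0, boost=2.0):
    votes = np.asarray(votes, dtype=float)            # (N, 2) predicted poses
    # Up-weight each vote by how many other votes fall within agreement_radius.
    d = np.linalg.norm(votes[:, None, :] - votes[None, :, :], axis=-1)
    agree = (d < agreement_radius).sum(axis=1) - 1
    weights = 1.0 + boost * agree

    # Accumulate the weighted votes in a coarse 2-D Hough accumulator.
    idx = np.floor(votes / bin_size).astype(int)
    idx -= idx.min(axis=0)
    shape = idx.max(axis=0) + 1
    acc = np.zeros(shape)
    for (i, j), w in zip(idx, weights):
        acc[i, j] += w

    # Local maxima: bins that dominate their 8-neighbourhood.
    maxima = []
    for i in range(shape[0]):
        for j in range(shape[1]):
            patch = acc[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if acc[i, j] > 0 and acc[i, j] == patch.max():
                maxima.append((i, j, acc[i, j]))
    return maxima

# One tight cluster of votes plus scattered outliers.
votes = np.vstack([np.random.normal([5, 5], 0.3, (30, 2)),
                   np.random.uniform(0, 10, (20, 2))])
print(locate_objects(votes))
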
  • Patent number: 8750614
    Abstract: According to one embodiment, a method of classifying a feature in a video sequence includes selecting a target region of a frame of the video sequence, where the target region contains the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within the cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
    Type: Grant
    Filed: September 22, 2011
    Date of Patent: June 10, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Atsuto Maki, Frank Perbet, Bjorn Stenger, Oliver Woodford, Roberto Cipolla
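
A minimal sketch of the cell-wise optic-flow histogram comparison outlined above, assuming a precomputed dense flow field for the target region; the cell grid, histogram binning, L1 comparison and threshold are illustrative choices rather than the patented settings.

import numpy as np

def flow_histogram(flow_cell, n_bins=8):
    """Magnitude-weighted orientation histogram of the flow vectors in one cell."""
    dx, dy = flow_cell[..., 0].ravel(), flow_cell[..., 1].ravel()
    angles = np.arctan2(dy, dx)
    mags = np.hypot(dx, dy)
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi), weights=mags)
    return hist / (hist.sum() + 1e-9)

def classify_feature(flow, grid=(2, 2), threshold=0.5):
    h, w = flow.shape[:2]
    ch, cw = h // grid[0], w // grid[1]
    hists = [flow_histogram(flow[i*ch:(i+1)*ch, j*cw:(j+1)*cw])
             for i in range(grid[0]) for j in range(grid[1])]
    # Compare every pair of cell histograms (here: L1 distance).
    dists = [np.abs(hists[a] - hists[b]).sum()
             for a in range(len(hists)) for b in range(a + 1, len(hists))]
    # Assign a class from the comparison, e.g. "coherent" vs "divergent" motion.
    return "coherent" if max(dists) < threshold else "divergent"

# Mostly rightward flow over a 32x32 region.
flow = np.random.randn(32, 32, 2) * 0.1 + np.array([1.0, 0.0])
print(classify_feature(flow))
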
  • Publication number: 20140125773
    Abstract: A method of calculating a similarity measure between first and second image patches, which include respective first and second intensity values associated with respective elements of the first and second image patches, and which have a corresponding size and shape such that each element of the first image patch corresponds to an element of the second image patch. The method determines a set of sub-regions on the second image patch corresponding to elements of the first image patch having first intensity values within a range defined for that sub-region; calculates the variance, for each sub-region of the set over all of the elements of that sub-region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculates the similarity measure as the sum over all sub-regions of the calculated variances.
    Type: Application
    Filed: November 5, 2013
    Publication date: May 8, 2014
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsuto MAKI, Riccardo Gherardi, Oliver Woodford, Frank Perbet, Minh-Tri Pham, Bjorn Stenger, Sam Johnson, Roberto Cipolla
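
A minimal sketch of the sub-region variance similarity measure described above, under illustrative assumptions: sub-regions are defined by quantising the first patch's intensities into bands, and the compared function is the plain intensity difference.

import numpy as np

def patch_similarity(patch1, patch2, n_bands=4):
    p1 = np.asarray(patch1, dtype=float).ravel()
    p2 = np.asarray(patch2, dtype=float).ravel()
    # Sub-regions: elements whose first-patch intensity falls in the same band.
    edges = np.linspace(p1.min(), p1.max() + 1e-9, n_bands + 1)
    labels = np.digitize(p1, edges[1:-1])
    total = 0.0
    for band in range(n_bands):
        mask = labels == band
        if mask.sum() > 1:
            diff = p2[mask] - p1[mask]   # function of the two intensity values
            total += diff.var()          # variance over that sub-region
    return total                         # smaller value = more similar patches

a = np.random.rand(8, 8)
b = a + np.random.normal(0, 0.01, a.shape)
print(patch_similarity(a, b), patch_similarity(a, np.random.rand(8, 8)))
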
  • Patent number: 8712154
    Abstract: A method of dividing an image into plural superpixels, each comprising plural pixels of the image. The method calculates an initial set of weights from a measure of similarity between pairs of pixels, from which a resultant set of weights is calculated for pairs of pixels that are less than a threshold distance apart on the image. The calculation computes the weight for a pair of pixels as the sum, over a set of third pixels, of the product of the initial weight of the first pixel of the pair with the third pixel and the weight of the third pixel with the second pixel. Each weight is then subjected to a power coefficient operation. The resultant set of weights and the initial set of weights are then compared to check for convergence. If the weights converge, the converged set of weights is used to divide the image into superpixels.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: April 29, 2014
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Frank Perbet, Atsuto Maki, Minh-Tri Pham, Bjorn Stenger, Oliver Woodford
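
A minimal sketch of the weight-propagation idea described above, run on a tiny one-dimensional "image" so the matrices stay small; the Gaussian similarity, distance threshold, power coefficient and convergence test are illustrative assumptions, not the patented settings.

import numpy as np

def superpixel_labels(pixels, radius=2, power=2.0, iters=50, tol=1e-6):
    pixels = np.asarray(pixels, dtype=float)
    n = len(pixels)
    pos = np.arange(n)
    # Initial weights: similarity between pairs of nearby pixels.
    w = np.exp(-(pixels[:, None] - pixels[None, :]) ** 2)
    w[np.abs(pos[:, None] - pos[None, :]) > radius] = 0.0
    w /= w.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # Resultant weight(i, j) = sum over third pixels k of w(i, k) * w(k, j).
        new_w = w @ w
        new_w = new_w ** power                      # power-coefficient step
        new_w /= new_w.sum(axis=1, keepdims=True)
        if np.abs(new_w - w).max() < tol:           # convergence check
            w = new_w
            break
        w = new_w
    # Use the converged weights to group pixels: each pixel joins the pixel
    # it is most strongly connected to.
    return w.argmax(axis=1)

intensities = np.array([0.1, 0.12, 0.11, 0.9, 0.88, 0.92, 0.5, 0.52])
print(superpixel_labels(intensities))
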
  • Publication number: 20130051639
    Abstract: An object location method includes: analysing data including plural objects each including plural features, and extracting the features from the data; matching features stored in a database with those extracted from the data, and deriving a prediction of the object, each feature extracted from the data providing a vote for at least one prediction; expressing the prediction to be analysed in a Hough space, the objects to be analysed being described by n parameters and each parameter defining a dimension of the Hough space, n is an integer of at least one; providing a constraint by applying a higher weighting to votes which agree with votes from other features than those votes which do not agree with votes from other features; finding local maxima in the Hough space using the weighted votes; and identifying the predictions associated with the local maxima to locate the objects provided in the data.
    Type: Application
    Filed: February 29, 2012
    Publication date: February 28, 2013
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Oliver Woodford, Minh-Tri Pham, Atsuto Maki, Frank Perbet, Bjorn Stenger
  • Publication number: 20130016913
    Abstract: A method of comparing two object poses, wherein each object pose is expressed in terms of position, orientation and scale with respect to a common coordinate system, the method comprising: calculating a distance between the two object poses, the distance being calculated using the distance function: d_sRt(X, Y) = d_s²(X, Y)/σ_s² + d_r²(X, Y)/σ_r² + d_t²(X, Y)/σ_t².
    Type: Application
    Filed: February 28, 2012
    Publication date: January 17, 2013
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Minh-Tri Pham, Oliver Woodford, Frank Perbet, Atsuto Maki, Bjorn Stenger, Roberto Cipolla
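
A minimal sketch of a combined pose distance of the form given in the abstract above: squared scale, rotation and translation distances, each normalised by its own sigma, then summed. The individual distance definitions (log scale ratio, quaternion angle, Euclidean translation) and the sigma values are illustrative assumptions.

import numpy as np

def pose_distance(X, Y, sigma_s=0.5, sigma_r=0.5, sigma_t=1.0):
    """X, Y: dicts with 'scale' (float), 'quat' (unit quaternion, shape (4,)), 't' (shape (3,))."""
    d_s = np.log(X["scale"] / Y["scale"])                       # scale distance
    d_r = 2.0 * np.arccos(np.clip(abs(np.dot(X["quat"], Y["quat"])), -1.0, 1.0))
    d_t = np.linalg.norm(np.asarray(X["t"]) - np.asarray(Y["t"]))
    return (d_s**2 / sigma_s**2) + (d_r**2 / sigma_r**2) + (d_t**2 / sigma_t**2)

X = {"scale": 1.0, "quat": np.array([1.0, 0.0, 0.0, 0.0]), "t": np.zeros(3)}
q = np.array([0.99, 0.1, 0.0, 0.0])
Y = {"scale": 1.2, "quat": q / np.linalg.norm(q), "t": np.array([0.5, 0.0, 0.0])}
print(pose_distance(X, Y))
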
  • Publication number: 20120287247
    Abstract: A system for capturing 3D image data of a scene, including three light sources, each configured to emit light at a different wavelength to the other two sources and to illuminate the scene to be captured; a first video camera configured to receive light from the light sources which has been reflected from the scene, to isolate light received from each of the light sources, and to output data relating to the image captured for each of the three light sources; a depth sensor configured to capture depth map data of the scene; and an analysis unit configured to receive data from the first video camera and process the data to obtain data relating to a normal field obtained from the images captured for each of the three light sources, and to combine the normal field data with the depth map data to capture 3D image data of the scene.
    Type: Application
    Filed: February 29, 2012
    Publication date: November 15, 2012
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Bjorn STENGER, Atsuto MAKI, Frank PERBET, Oliver WOODFORD, Roberto CIPOLLA, Robert ANDERSON
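
A minimal sketch of the two ingredients the abstract above combines, under illustrative assumptions: a normal field from three-source photometric stereo (one image per light) and a normal field derived from a depth map, fused here by simple per-pixel averaging rather than the patented combination.

import numpy as np

def photometric_normals(images, light_dirs):
    """images: (3, H, W) intensities, one per light; light_dirs: (3, 3) unit rows."""
    H, W = images.shape[1:]
    I = images.reshape(3, -1)                        # (3, H*W)
    G = np.linalg.solve(light_dirs, I)               # albedo-scaled normals
    n = G / (np.linalg.norm(G, axis=0, keepdims=True) + 1e-9)
    return n.T.reshape(H, W, 3)

def depth_normals(depth):
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-9)

def fuse(n_photo, n_depth, w=0.7):
    n = w * n_photo + (1 - w) * n_depth
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-9)

L = np.eye(3)                                        # hypothetical light directions
imgs = np.random.rand(3, 4, 4)
depth = np.random.rand(4, 4)
print(fuse(photometric_normals(imgs, L), depth_normals(depth)).shape)
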
  • Publication number: 20120251003
    Abstract: A method dividing an image into plural superpixels of plural pixels of the image. The method calculates an initial set of weights from a measure of similarity between pairs of pixels, from which a resultant set of weights is calculated for pairs of pixels that are less that a threshold distance apart on the image. The calculation calculates a weight for a pair of pixels as the sum over a set of third pixels of the product of initial weight of the first pixel of the pair of pixel with the third pixel and the weight of the third pixel with the second pixel. Each weight is then subjected to a power coefficient operation. The resultant set of weights and the initial set of weights are then compared to check for convergence. If the weights converge, the converged set of weights is used to divide the image into superpixels.
    Type: Application
    Filed: February 29, 2012
    Publication date: October 4, 2012
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Frank Perbet, Atsuto Maki, Minh-Tri Pham, Bjorn Stenger, Oliver Woodford
  • Publication number: 20120224744
    Abstract: A moving feature is recognized in a video sequence by comparing its movement with a characteristic pattern. Possible trajectories through the video sequence are generated for an object by identifying potential matches of points in pairs of frames of the video sequence. When looking for the characteristic pattern, a number of possible trajectories are analyzed. The possible trajectories may be selected so that they are suitable for analysis; this may include selecting longer trajectories, which can be easier to analyze. In this way, a continuous trajectory is generated even where the object being tracked is momentarily behind another object.
    Type: Application
    Filed: August 6, 2009
    Publication date: September 6, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Frank Perbet, Atsuto Maki, Bjorn Stenger
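
A minimal sketch of trajectory building by linking nearest-neighbour point matches between consecutive frames and keeping the longer trajectories for analysis, as the abstract above suggests; the point data, matching rule and length cut-off are illustrative assumptions.

import numpy as np

def build_trajectories(frames, max_jump=2.0, min_length=3):
    """frames: list of (N_i, 2) arrays of point positions, one array per frame."""
    trajectories = [[tuple(p)] for p in frames[0]]
    for curr in frames[1:]:
        for traj in trajectories:
            last = np.array(traj[-1])
            d = np.linalg.norm(curr - last, axis=1)
            if d.size and d.min() < max_jump:        # potential match in this frame
                traj.append(tuple(curr[d.argmin()]))
            # else: leave the trajectory unextended (e.g. momentarily occluded)
    return [t for t in trajectories if len(t) >= min_length]

# Two points drifting rightward over five frames.
frames = [np.array([[0.0, 0.0], [5.0, 5.0]]) + i * np.array([1.0, 0.0])
          for i in range(5)]
print(build_trajectories(frames))
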
  • Publication number: 20120219184
    Abstract: A characteristic motion in a video is identified by determining pairs of moving features that have an indicative relationship between the motions of the two moving features in the pair. For example, the motion of a pedestrian is identified by an indicative relationship between the motions of the pedestrian's feet. This indicative relationship may be that one of the feet moves relative to the surroundings while the other remains stationary.
    Type: Application
    Filed: August 6, 2009
    Publication date: August 30, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Atsuto Maki, Frank Perbet, Bjorn Stenger
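
A minimal sketch of the foot-motion cue described above: for a candidate pair of tracked features, count how often one feature moves while the other stays approximately still; the thresholds and track data are illustrative assumptions.

import numpy as np

def indicative_pair(track_a, track_b, still_thresh=0.1, move_thresh=0.5):
    """track_a, track_b: (T, 2) positions of two features over T frames."""
    va = np.linalg.norm(np.diff(track_a, axis=0), axis=1)     # per-frame speeds
    vb = np.linalg.norm(np.diff(track_b, axis=0), axis=1)
    a_moves_b_still = (va > move_thresh) & (vb < still_thresh)
    b_moves_a_still = (vb > move_thresh) & (va < still_thresh)
    alternating = (a_moves_b_still | b_moves_a_still).mean()
    return alternating > 0.5          # indicative relationship, e.g. gait

# Toy "feet": each one steps forward while the other stays planted.
t = np.arange(10)
left = np.stack([np.where(t % 2 == 0, t, t - 1.0), np.zeros(10)], axis=1)
right = np.stack([np.where(t % 2 == 1, t, t - 1.0), np.zeros(10)], axis=1)
print(indicative_pair(left, right))
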
  • Publication number: 20120082381
    Abstract: According to one embodiment, a method of classifying a feature in a video sequence includes selecting a target region of a frame of the video sequence, where the target region contains the feature; dividing the target region into a plurality of cells; calculating histograms of optic flow within the cells; comparing the histograms of optic flow for pairs of cells; and assigning the feature to a class based at least in part on the result of the comparison.
    Type: Application
    Filed: September 22, 2011
    Publication date: April 5, 2012
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Atsuto MAKI, Frank PERBET, Bjorn STENGER, Oliver WOODFORD, Roberto CIPOLLA
  • Patent number: 7844921
    Abstract: An apparatus for a user to interface with a control object apparatus by a posture or a motion of the user's physical part. An image input unit inputs an image including the user's physical part. A gesture recognition unit recognizes the posture or the motion of the user's physical part from the image. A control unit controls the control object apparatus based on an indication corresponding to the posture or the motion. A gesture information display unit displays an exemplary image of the posture or the motion recognized for the user's reference to indicate the control object apparatus.
    Type: Grant
    Filed: June 1, 2007
    Date of Patent: November 30, 2010
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Tsukasa Ike, Yasuhiro Taniguchi, Ryuzo Okada, Nobuhisa Kishikawa, Kentaro Yokoi, Mayumi Yuasa, Bjorn Stenger
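
A minimal structural sketch, not the patented implementation, of the units the abstract above names: image input, gesture recognition, control of the target apparatus, and display of an exemplary gesture image. The gesture set and recognition rule are placeholders.

import numpy as np

class DummyDevice:
    """Stand-in for the control-object apparatus (e.g. a TV)."""
    def execute(self, command):
        print("device command:", command)

class GestureInterface:
    # Hypothetical gesture-to-command mapping, not taken from the patent.
    GESTURES = {"open_hand": "volume_up", "fist": "volume_down"}

    def __init__(self, device):
        self.device = device                       # target of the control unit

    def input_image(self, camera):
        return camera.capture()                    # image input unit

    def recognize(self, image):
        # Placeholder recogniser; a real one classifies the hand posture or motion.
        return "open_hand" if image.mean() > 0.5 else "fist"

    def control(self, gesture):
        self.device.execute(self.GESTURES[gesture])   # control unit

    def display_example(self, gesture):
        # Gesture information display unit: show the user an exemplary gesture.
        print(f"show gesture '{gesture}' to trigger '{self.GESTURES[gesture]}'")

ui = GestureInterface(DummyDevice())
gesture = ui.recognize(np.full((4, 4), 0.8))       # bright blob -> "open_hand"
ui.display_example(gesture)
ui.control(gesture)
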
  • Publication number: 20080052643
    Abstract: An apparatus for a user to interface with a control object apparatus by a posture or a motion of the user's physical part. An image input unit inputs an image including the user's physical part. A gesture recognition unit recognizes the posture or the motion of the user's physical part from the image. A control unit controls the control object apparatus based on an indication corresponding to the posture or the motion. A gesture information display unit displays an exemplary image of the posture or the motion recognized for the user's reference to indicate the control object apparatus.
    Type: Application
    Filed: June 1, 2007
    Publication date: February 28, 2008
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Tsukasa IKE, Yasuhiro Taniguchi, Ryuzo Okada, Nobuhisa Kishikawa, Kentaro Yokoi, Mayumi Yuasa, Bjorn Stenger
  • Publication number: 20060284837
    Abstract: A similarity calculation unit calculates a similarity between a hand candidate area image and a template image. A consistency probability calculation unit and an inconsistency probability calculation unit use probability distributions of similarities of a case where hand shapes of the template image and the hand candidate area image are consistent with each other and a case where they are not consistent, and calculate a consistency probability and an inconsistency probability of hand shapes between each of the template images and the hand candidate area image. A hand shape determination unit determines a hand shape most similar to the hand candidate area image based on the consistency probability and the inconsistency probability calculated for each hand shape, and outputs it as a recognition result.
    Type: Application
    Filed: June 8, 2006
    Publication date: December 21, 2006
    Inventors: Bjorn Stenger, Tsukasa Ike
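
A minimal sketch of the decision rule the abstract above describes: each hand-shape template yields a similarity with the candidate-area image, the similarity is converted to a consistency and an inconsistency probability (here via assumed Gaussian similarity distributions), and the shape with the highest consistency-to-inconsistency ratio is returned. The similarity measure and the distribution parameters are illustrative assumptions.

import numpy as np

def gaussian_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def recognise_hand_shape(candidate, templates,
                         consistent=(0.8, 0.1), inconsistent=(0.3, 0.2)):
    """templates: dict of shape name -> template image (same size as candidate)."""
    best, best_ratio = None, -np.inf
    for name, tmpl in templates.items():
        # Similarity: normalised cross-correlation between candidate and template.
        a = (candidate - candidate.mean()).ravel()
        b = (tmpl - tmpl.mean()).ravel()
        sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        p_cons = gaussian_pdf(sim, *consistent)      # P(similarity | shapes match)
        p_incons = gaussian_pdf(sim, *inconsistent)  # P(similarity | shapes differ)
        ratio = p_cons / (p_incons + 1e-12)
        if ratio > best_ratio:
            best, best_ratio = name, ratio
    return best

fist = np.zeros((8, 8)); fist[2:6, 2:6] = 1.0
palm = np.zeros((8, 8)); palm[1:7, 3:5] = 1.0
noisy_fist = fist + np.random.normal(0, 0.05, fist.shape)
print(recognise_hand_shape(noisy_fist, {"fist": fist, "palm": palm}))
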