Patents by Inventor Wolf Kienzle

Wolf Kienzle has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10592778
    Abstract: A method of object detection includes receiving a first image taken from a first perspective by a first camera and receiving a second image taken from a second perspective, different from the first perspective, by a second camera. Each pixel in the first image is offset relative to a corresponding pixel in the second image by a predetermined offset distance resulting in offset first and second images. A particular pixel of the offset first image depicts a same object locus as a corresponding pixel in the offset second image only if the object locus is at an expected object-detection distance from the first and second cameras. The method includes recognizing that a target object is imaged by the particular pixel of the offset first image and the corresponding pixel of the offset second image.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: March 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
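
The fixed-disparity check in the entry above (patent 10592778, also covered by application 20180197047 further down the list) can be illustrated with a short sketch. This is not the patented implementation: it assumes rectified grayscale images held as NumPy arrays, a hypothetical disparity in pixels corresponding to the expected object-detection distance, and a simple patch-difference test standing in for whatever target-object recognizer a real system would apply.

```python
import numpy as np

def detect_at_assumed_distance(left, right, disparity, window=16, threshold=10.0):
    """Shift the second image by a fixed disparity so that pixels imaging an
    object at the assumed distance line up with the first image, then flag
    patches where the two offset images agree closely.

    left, right : 2-D NumPy arrays (rectified grayscale images).
    disparity   : horizontal offset in pixels for the expected object distance.
    window      : side length of the square patches that are compared.
    threshold   : mean absolute difference below which a patch is a candidate.
    """
    # Offset the second image horizontally; np.roll wraps around, so only the
    # region to the right of `disparity` is meaningful in this toy version.
    shifted = np.roll(right, disparity, axis=1)
    diff = np.abs(left.astype(np.float32) - shifted.astype(np.float32))

    candidates = []
    h, w = diff.shape
    for y in range(0, h - window, window):
        for x in range(disparity, w - window, window):
            # Corresponding pixels depict the same object locus only if the
            # object really is at the assumed distance, so a low difference
            # marks a patch worth passing to a target-object classifier.
            if diff[y:y + window, x:x + window].mean() < threshold:
                candidates.append((x, y))
    return candidates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (120, 160)).astype(np.uint8)
    right = np.roll(left, -8, axis=1)   # simulate an 8-pixel disparity
    print(detect_at_assumed_distance(left, right, disparity=8)[:5])
```
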
  • Patent number: 10558037
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine that indicates or describes an object in the world. In one example, the user may gesture to the object, and the gesture is detected by a visual sensor. In another example, the user may verbally describe the object, and the description is detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or the autonomous or mobile machine.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: February 11, 2020
    Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
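
The disambiguation step described in the entry above, deciding which nearby known object a gesture or spoken description refers to, can be sketched as a simple scoring problem. The object catalogue, tag matching, and weights below are illustrative assumptions rather than details taken from the patent.

```python
import math

# Hypothetical catalogue of known objects near the user's geographic location;
# the names, coordinates, and tags are illustrative only.
KNOWN_OBJECTS = [
    {"name": "coffee shop", "x": 12.0, "y": 3.0,  "tags": {"cafe", "coffee"}},
    {"name": "fountain",    "x": -4.0, "y": 9.0,  "tags": {"water", "fountain"}},
    {"name": "bookstore",   "x": 15.0, "y": -2.0, "tags": {"books", "store"}},
]

def resolve_indicated_object(user_xy, gesture_bearing_deg=None, spoken_words=None):
    """Score each known object by how well it matches a pointing gesture
    (angular distance from the gesture bearing) and/or a verbal description
    (overlap with the object's tags), and return the best-scoring object."""
    def score(obj):
        s = 0.0
        if gesture_bearing_deg is not None:
            bearing = math.degrees(math.atan2(obj["y"] - user_xy[1],
                                              obj["x"] - user_xy[0]))
            angular_error = abs((bearing - gesture_bearing_deg + 180) % 360 - 180)
            s += max(0.0, 1.0 - angular_error / 90.0)  # 1 when aligned, 0 beyond 90 degrees
        if spoken_words:
            s += len(obj["tags"] & set(spoken_words))   # reward words that match tags
        return s

    return max(KNOWN_OBJECTS, key=score)

# A user at the origin points roughly east-northeast and says "coffee".
print(resolve_indicated_object((0.0, 0.0), gesture_bearing_deg=15.0,
                               spoken_words={"coffee"})["name"])
```
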
  • Publication number: 20200026288
    Abstract: Signals usable to determine a path of a vehicle towards a particular stopping point in a vicinity of a destination are detected from an individual authorized to provide guidance with respect to movements of the vehicle. Based at least in part on the signals and a data set pertaining to the external environment of the vehicle, one or more vehicular movements to be implemented to proceed along the path are identified. A directive is transmitted to a motion control subsystem of the vehicle to initiate one of the vehicular movements.
    Type: Application
    Filed: August 2, 2019
    Publication date: January 23, 2020
    Applicant: Apple Inc.
    Inventors: Scott M. Herz, Sawyer I. Cohen, Samuel B. Gooch, Wolf Kienzle, Karlin Y. Bark, Jack J. Wanderman
  • Patent number: 10372132
    Abstract: Signals usable to determine a path of a vehicle towards a particular stopping point in a vicinity of a destination are detected from an individual authorized to provide guidance with respect to movements of the vehicle. Based at least in part on the signals and a data set pertaining to the external environment of the vehicle, one or more vehicular movements to be implemented to proceed along the path are identified. A directive is transmitted to a motion control subsystem of the vehicle to initiate one of the vehicular movements.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: August 6, 2019
    Assignee: Apple Inc.
    Inventors: Scott M. Herz, Sawyer I. Cohen, Samuel B. Gooch, Wolf Kienzle, Karlin Y. Bark, Jack J. Wanderman
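
As a heavily simplified, purely illustrative sketch of the pipeline in the two entries above (a guidance signal from an authorized individual plus environment data in, a movement directive out), the toy function below reduces the guidance signal to a single bearing and the environment to a list of obstacle bearings; the Directive fields, clearance cone, and creep speed are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Directive:
    """Hypothetical directive sent to the vehicle's motion control subsystem."""
    steering_deg: float
    speed_mps: float

def plan_guided_movement(guidance_bearing_deg, obstacle_bearings_deg,
                         clearance_deg=20.0, creep_speed_mps=1.5):
    """Turn a guidance signal from an authorized individual (here reduced to the
    bearing of a pointing gesture toward the desired stopping point) plus a very
    coarse view of the external environment (bearings of detected obstacles)
    into a single low-speed movement directive."""
    # Steer toward the indicated stopping point unless an obstacle lies inside
    # the clearance cone around that bearing; in that case, stop and wait.
    blocked = any(abs((b - guidance_bearing_deg + 180) % 360 - 180) < clearance_deg
                  for b in obstacle_bearings_deg)
    if blocked:
        return Directive(steering_deg=0.0, speed_mps=0.0)
    return Directive(steering_deg=guidance_bearing_deg, speed_mps=creep_speed_mps)

print(plan_guided_movement(10.0, obstacle_bearings_deg=[95.0, -120.0]))  # clear: creep forward
print(plan_guided_movement(10.0, obstacle_bearings_deg=[12.0]))          # blocked: stop
```
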
  • Publication number: 20180197047
    Abstract: A method of object detection includes receiving a first image taken from a first perspective by a first camera and receiving a second image taken from a second perspective, different from the first perspective, by a second camera. Each pixel in the first image is offset relative to a corresponding pixel in the second image by a predetermined offset distance resulting in offset first and second images. A particular pixel of the offset first image depicts a same object locus as a corresponding pixel in the offset second image only if the object locus is at an expected object-detection distance from the first and second cameras. The method includes recognizing that a target object is imaged by the particular pixel of the offset first image and the corresponding pixel of the offset second image.
    Type: Application
    Filed: March 6, 2018
    Publication date: July 12, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
  • Publication number: 20180164817
    Abstract: Signals usable to determine a path of a vehicle towards a particular stopping point in a vicinity of a destination are detected from an individual authorized to provide guidance with respect to movements of the vehicle. Based at least in part on the signals and a data set pertaining to the external environment of the vehicle, one or more vehicular movements to be implemented to proceed along the path are identified. A directive is transmitted to a motion control subsystem of the vehicle to initiate one of the vehicular movements.
    Type: Application
    Filed: November 29, 2017
    Publication date: June 14, 2018
    Applicant: Apple Inc.
    Inventors: Scott M. Herz, Sawyer I. Cohen, Samuel B. Gooch, Wolf Kienzle, Karlin Y. Bark, Jack J. Wanderman
  • Patent number: 9973727
    Abstract: Various technologies described herein pertain to creation of an output hyper-lapse video from an input video. Values indicative of overlaps between pairs of frames in the input video are computed. A value indicative of an overlap between a pair of frames can be computed based on a sparse set of points from each of the frames in the pair. Moreover, a subset of the frames from the input video is selected based on the values of the overlaps between the pairs of the frames in the input video and a target frame speed-up rate. Further, the output hyper-lapse video is generated based on the subset of the frames. The output hyper-lapse video can be generated using only the subset of the frames, without the remainder of the frames of the input video.
    Type: Grant
    Filed: August 2, 2017
    Date of Patent: May 15, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Wolf Kienzle, Michael A. Toelle, Matthieu Uyttendaele, Michael F. Cohen
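
The frame-selection idea in the hyper-lapse entries, choosing a subset of frames that balances inter-frame overlap against a target speed-up rate, is naturally sketched as a dynamic-programming search over frame transitions. The sketch below assumes the pairwise overlap values are already available (the patent derives them from a sparse set of points per frame) and uses an invented step cost with a jump_weight parameter.

```python
import numpy as np

def select_hyperlapse_frames(overlap, target_speedup, max_skip=None, jump_weight=0.1):
    """Pick a subset of frame indices by dynamic programming.

    overlap[i][j] is a value in [0, 1] indicating how much frames i and j
    overlap (the patent estimates this from a sparse set of points per frame;
    here the matrix is simply given).  The cost of stepping from frame i to
    frame j penalizes low overlap and deviation from the target speed-up rate.
    """
    n = len(overlap)
    max_skip = max_skip or 2 * target_speedup
    cost = np.full(n, np.inf)
    prev = np.full(n, -1, dtype=int)
    cost[0] = 0.0

    for j in range(1, n):
        for i in range(max(0, j - max_skip), j):
            step = (1.0 - overlap[i][j]) + jump_weight * (j - i - target_speedup) ** 2
            if cost[i] + step < cost[j]:
                cost[j] = cost[i] + step
                prev[j] = i

    # Trace the optimal chain of selected frames back from the last frame.
    path, j = [], n - 1
    while j >= 0:
        path.append(j)
        j = int(prev[j])
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 40
    idx = np.arange(n)
    # Fake overlap matrix: nearby frames overlap more, with a little noise.
    overlap = np.clip(1.0 - np.abs(idx[:, None] - idx[None, :]) / 12.0
                      + 0.05 * rng.standard_normal((n, n)), 0.0, 1.0)
    print(select_hyperlapse_frames(overlap, target_speedup=4))
```
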
  • Publication number: 20180129897
    Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes. Those strokes are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window.
    Type: Application
    Filed: January 8, 2018
    Publication date: May 10, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Paul Hinckley, Wolf Kienzle, Mudit Agrawal
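
The Stroke Untangler's segmentation of superimposed strokes is the inventive part and is not reproduced here. The sketch below illustrates only the composition step: it assumes per-stroke character assignments are already available from some upstream segmenter and merely lays the grouped strokes out side by side for rendering.

```python
def compose_untangled(strokes, char_ids, spacing=10.0):
    """Translate each character's strokes so that overlapping input becomes a
    left-to-right sequence of characters.

    strokes  : list of strokes, each a list of (x, y) points.
    char_ids : char_ids[i] is the character index assigned to strokes[i]
               (assumed to come from an upstream segmentation model).
    Returns a list of translated strokes ready to render in a UI window.
    """
    # Horizontal extent of each character, taken over all of its strokes.
    extents = {}
    for stroke, cid in zip(strokes, char_ids):
        xs = [p[0] for p in stroke]
        lo, hi = min(xs), max(xs)
        cur_lo, cur_hi = extents.get(cid, (lo, hi))
        extents[cid] = (min(lo, cur_lo), max(hi, cur_hi))

    # Cumulative horizontal offset per character, in drawing order.
    offsets, cursor = {}, 0.0
    for cid in sorted(extents):
        lo, hi = extents[cid]
        offsets[cid] = cursor - lo
        cursor += (hi - lo) + spacing

    return [[(x + offsets[cid], y) for x, y in stroke]
            for stroke, cid in zip(strokes, char_ids)]

# Two overlapping "characters" (a vertical bar and a horizontal bar drawn on
# top of each other) end up rendered side by side after composition.
demo = compose_untangled([[(0, 0), (0, 20)], [(0, 10), (15, 10)]], [0, 1])
print(demo)
```
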
  • Patent number: 9934451
    Abstract: A method of object detection includes receiving a first image taken by a first stereo camera, receiving a second image taken by a second stereo camera, and offsetting the first image relative to the second image by an offset distance selected such that each corresponding pixel of offset first and second images depicts a same object locus if the object locus is at an assumed distance from the first and second stereo cameras. The method further includes locating a target object in the offset first and second images.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: April 3, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
  • Publication number: 20180088324
    Abstract: Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine that indicates or describes an object in the world. In one example, the user may gesture to the object, and the gesture is detected by a visual sensor. In another example, the user may verbally describe the object, and the description is detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or the autonomous or mobile machine.
    Type: Application
    Filed: September 20, 2017
    Publication date: March 29, 2018
    Inventors: Patrick S. Piemonte, Wolf Kienzle, Douglas Bowman, Shaun D. Budhram, Madhurani R. Sapre, Vyacheslav Leizerovich, Daniel De Rocha Rosario
  • Publication number: 20180046851
    Abstract: A first set of signals corresponding to a first signal modality (such as the direction of a gaze) during a time interval is collected from an individual. A second set of signals corresponding to a different signal modality (such as hand-pointing gestures made by the individual) is also collected. In response to a command, where the command does not identify a particular object to which the command is directed, the first and second sets of signals are used to identify candidate objects of interest, and an operation associated with a selected object from the candidates is performed.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 15, 2018
    Applicant: Apple Inc.
    Inventors: Wolf Kienzle, Douglas A. Bowman
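
One simple way to picture the two-modality candidate identification in the entry above is as an intersection of candidate sets, one derived from the gaze direction and one from the pointing gesture. The cone half-angles, the toy scene, and the fallback rule below are assumptions made for the example, not details from the application.

```python
import math

def candidates_in_cone(objects, origin, bearing_deg, half_angle_deg):
    """Return the names of objects whose bearing from the origin lies within a
    cone around the given direction (one candidate set per signal modality)."""
    def within(obj):
        b = math.degrees(math.atan2(obj["y"] - origin[1], obj["x"] - origin[0]))
        return abs((b - bearing_deg + 180) % 360 - 180) <= half_angle_deg
    return {obj["name"] for obj in objects if within(obj)}

def resolve_command_target(objects, origin, gaze_bearing_deg, point_bearing_deg):
    """Intersect the gaze candidates with the hand-pointing candidates; if the
    intersection is empty, fall back to the (usually tighter) gaze candidates."""
    gaze = candidates_in_cone(objects, origin, gaze_bearing_deg, half_angle_deg=15)
    point = candidates_in_cone(objects, origin, point_bearing_deg, half_angle_deg=30)
    joint = gaze & point
    return sorted(joint or gaze)

# Toy scene: the gaze and the pointing gesture both single out the speaker.
scene = [{"name": "lamp", "x": 5, "y": 1},
         {"name": "window", "x": 1, "y": 6},
         {"name": "speaker", "x": 6, "y": 5}]
print(resolve_command_target(scene, (0, 0), gaze_bearing_deg=40, point_bearing_deg=45))
```
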
  • Patent number: 9880620
    Abstract: The description relates to a smart ring. In one example, the smart ring can be configured to be worn on a first segment of a finger of a user. The example smart ring can include at least one flexion sensor secured to the smart ring in a manner that can detect a distance between the at least one flexion sensor and a second segment of the finger. The example smart ring can also include an input component configured to analyze signals from the at least one flexion sensor to detect a pose of the finger.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: January 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wolf Kienzle, Kenneth P. Hinckley
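
A toy reading of the smart-ring entries: the input component turns a flexion-sensor reading, the distance between the ring and the finger's second segment, into a coarse finger pose. The thresholds and the read_flexion_distance_mm callable below are hypothetical.

```python
def classify_finger_pose(distance_mm, extended_above_mm=12.0, curled_below_mm=5.0):
    """Map a single flexion-sensor distance reading to a coarse pose label."""
    if distance_mm >= extended_above_mm:
        return "extended"        # second segment far from the sensor: finger straight
    if distance_mm <= curled_below_mm:
        return "curled"          # second segment close to the sensor: finger bent
    return "partially flexed"

def detect_pose(read_flexion_distance_mm, samples=8):
    """Average several readings to smooth sensor noise before classifying."""
    readings = [read_flexion_distance_mm() for _ in range(samples)]
    return classify_finger_pose(sum(readings) / len(readings))

# Simulated sensor callables for the usage example.
print(detect_pose(lambda: 13.4))   # -> extended
print(detect_pose(lambda: 3.1))    # -> curled
```
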
  • Patent number: 9881224
    Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes. Those strokes are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: January 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wolf Kienzle, Kenneth Paul Hinckley, Mudit Agrawal
  • Patent number: 9881354
    Abstract: Described is a technology by which an image such as a stitched panorama is automatically cropped based upon predicted quality data with respect to filling missing pixels. The image may be completed, including by completing only those missing pixels that remain after cropping. Predicting the quality data may be based on restricted search spaces corresponding to the missing pixels. The crop is computed based upon the quality data and is biased towards including original pixels and excluding predicted low-quality pixels. Missing pixels are completed by using restricted search spaces to find replacement values, and histogram matching may be used for texture synthesis.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: January 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Johannes Peter Kopf, Sing Bing Kang, Wolf Kienzle, Steven M. Drucker
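
The crop computation described in the entry above, favoring original pixels while avoiding regions predicted to fill poorly, can be sketched as a search over candidate rectangles with a per-pixel score. The penalty weight, search step, and minimum-area constraint below are illustrative choices, not values from the patent.

```python
import numpy as np

def choose_crop(original_mask, predicted_quality, step=8, min_frac=0.5,
                low_quality_penalty=4.0):
    """Search axis-aligned crops and keep the one whose score favours original
    pixels and penalizes predicted pixels expected to be filled poorly.

    original_mask     : 2-D bool array, True where the panorama has real pixels.
    predicted_quality : 2-D float array in [0, 1]; only meaningful where
                        original_mask is False (expected hole-filling quality).
    """
    h, w = original_mask.shape
    # Per-pixel score: +1 for an original pixel, negative for a predicted pixel
    # in proportion to how badly its completion is expected to turn out.
    per_pixel = np.where(original_mask, 1.0,
                         -low_quality_penalty * (1.0 - predicted_quality))
    summed = per_pixel.cumsum(0).cumsum(1)   # 2-D prefix sums for fast rectangle sums

    def rect_sum(t, l, b, r):
        s = summed[b - 1, r - 1]
        if t > 0: s -= summed[t - 1, r - 1]
        if l > 0: s -= summed[b - 1, l - 1]
        if t > 0 and l > 0: s += summed[t - 1, l - 1]
        return s

    best, best_rect = -np.inf, (0, 0, h, w)
    for top in range(0, h // 2, step):
        for left in range(0, w // 2, step):
            for bottom in range(h, h // 2, -step):
                for right in range(w, w // 2, -step):
                    if (bottom - top) * (right - left) < min_frac * h * w:
                        continue  # keep the crop reasonably large
                    s = rect_sum(top, left, bottom, right)
                    if s > best:
                        best, best_rect = s, (top, left, bottom, right)
    return best_rect

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mask = np.ones((96, 128), dtype=bool)
    mask[:16, :] = False                         # missing band along the top edge
    quality = rng.uniform(0.0, 0.4, mask.shape)  # that band is predicted to fill badly
    print(choose_crop(mask, quality))            # expected to exclude the missing band
```
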
  • Patent number: 9846536
    Abstract: Technologies pertaining to composing, displaying, and/or transmitting handwritten content through utilization of a touch-sensitive display screen of a mobile computing device are described herein. A user of the mobile computing device can set forth strokes on the touch-sensitive display screen, one on top of another, wherein such strokes correspond to different handwritten characters. Stroke segmentation can be undertaken to determine which strokes correspond to which characters in a handwritten sequence of characters.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: December 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wolf Kienzle, Kenneth Paul Hinckley
  • Publication number: 20170359548
    Abstract: Various technologies described herein pertain to creation of an output hyper-lapse video from an input video. Values indicative of overlaps between pairs of frames in the input video are computed. A value indicative of an overlap between a pair of frames can be computed based on a sparse set of points from each of the frames in the pair. Moreover, a subset of the frames from the input video is selected based on the values of the overlaps between the pairs of the frames in the input video and a target frame speed-up rate. Further, the output hyper-lapse video is generated based on the subset of the frames. The output hyper-lapse video can be generated using only the subset of the frames, without the remainder of the frames of the input video.
    Type: Application
    Filed: August 2, 2017
    Publication date: December 14, 2017
    Inventors: Neel Suresh Joshi, Wolf Kienzle, Michael A. Toelle, Matthieu Uyttendaele, Michael F. Cohen
  • Patent number: 9762846
    Abstract: Various technologies described herein pertain to creation of an output hyper-lapse video from an input video. Values indicative of overlaps between pairs of frames in the input video are computed. A value indicative of an overlap between a pair of frames can be computed based on a sparse set of points from each of the frames in the pair. Moreover, a subset of the frames from the input video is selected based on the values of the overlaps between the pairs of the frames in the input video and a target frame speed-up rate. Further, the output hyper-lapse video is generated based on the subset of the frames. The output hyper-lapse video can be generated using only the subset of the frames, without the remainder of the frames of the input video.
    Type: Grant
    Filed: May 8, 2015
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Wolf Kienzle, Michael A. Toelle, Matthieu Uyttendaele, Michael F. Cohen
  • Patent number: 9582076
    Abstract: The description relates to a smart ring. In one example, the smart ring can be configured to be worn on a first segment of a finger of a user. The example smart ring can include at least one flexion sensor secured to the smart ring in a manner that can detect a distance between the at least one flexion sensor and a second segment of the finger. The example smart ring can also include an input component configured to analyze signals from the at least one flexion sensor to detect a pose of the finger.
    Type: Grant
    Filed: September 17, 2014
    Date of Patent: February 28, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wolf Kienzle, Kenneth P. Hinckley
  • Publication number: 20170024008
    Abstract: The description relates to a smart ring. In one example, the smart ring can be configured to be worn on a first segment of a finger of a user. The example smart ring can include at least one flexion sensor secured to the smart ring in a manner that can detect a distance between the at least one flexion sensor and a second segment of the finger. The example smart ring can also include an input component configured to analyze signals from the at least one flexion sensor to detect a pose of the finger.
    Type: Application
    Filed: October 6, 2016
    Publication date: January 26, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Wolf Kienzle, Kenneth P. Hinckley
  • Publication number: 20160330399
    Abstract: Various technologies described herein pertain to creation of an output hyper-lapse video from an input video. Values indicative of overlaps between pairs of frames in the input video are computed. A value indicative of an overlap between a pair of frames can be computed based on a sparse set of points from each of the frames in the pair. Moreover, a subset of the frames from the input video is selected based on the values of the overlaps between the pairs of the frames in the input video and a target frame speed-up rate. Further, the output hyper-lapse video is generated based on the subset of the frames. The output hyper-lapse video can be generated using only the subset of the frames, without the remainder of the frames of the input video.
    Type: Application
    Filed: May 8, 2015
    Publication date: November 10, 2016
    Inventors: Neel Suresh Joshi, Wolf Kienzle, Michael A. Toelle, Matthieu Uyttendaele, Michael F. Cohen