Patents by Inventor Relja Markovic

Relja Markovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10691216
    Abstract: Systems, methods and computer readable media are disclosed for gesture input that goes beyond skeletal tracking. A user's movement or body position is captured by a capture device of a system. In addition, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system combines the user-position data and the non-user-position data to determine one or more inputs the user made to the system.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Stephen Latta, Relja Markovic, Gregory N. Snook
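    Illustrative sketch: a minimal Python example of the general idea in this entry's abstract, combining skeletal (user-position) data with non-user-position data such as a controller trigger. All names, data structures, and thresholds below are assumptions for illustration, not the patented implementation.
      # Illustrative only: fuse skeletal joint positions with non-user-position
      # data (here, a controller trigger) to decide whether an input was made.
      from dataclasses import dataclass

      @dataclass
      class Joint:
          x: float
          y: float
          z: float

      def detect_throw_input(right_hand: Joint, right_shoulder: Joint,
                             trigger_pressed: bool) -> bool:
          """Return True if the hand is raised above the shoulder while the
          controller trigger is also held (a hypothetical combined gesture)."""
          hand_raised = right_hand.y > right_shoulder.y + 0.10  # 10 cm margin
          return hand_raised and trigger_pressed

      # Example frame: hand above shoulder and trigger held -> input fires.
      print(detect_throw_input(Joint(0.1, 1.6, 2.0), Joint(0.1, 1.4, 2.0), True))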
  • Patent number: 10460445
    Abstract: To digitize an object, a camera captures images of different sides of the object with color and depth data. At least two different sides of the object are identified from the images, and constructions of those sides are created from the images. Points at which the constructions connect to one another are determined and used to align the constructions, and the constructions are then merged to generate a rendition of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply a skeletal structure to the object, and further optimize the digitization. The rendition of the object can be provided for display as a digital representation of the object and potentially used in different applications (e.g., games, Web, etc.).
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: October 29, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Mike Scavezze, Joel Deaguero, Relja Markovic, Joe Sola, David Haley
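    Illustrative sketch: a minimal Python example of aligning and merging two point-cloud "constructions" of an object, as described in this entry's abstract. A real system would match specific connection points; the centroid alignment here is an illustrative assumption, not the patented implementation.
      # Illustrative only: align two point clouds by matching centroids,
      # then merge them into a single rendition of the object.
      import numpy as np

      def align_and_merge(front: np.ndarray, back: np.ndarray) -> np.ndarray:
          """front, back: (N, 3) arrays of 3D points from two sides."""
          offset = front.mean(axis=0) - back.mean(axis=0)
          back_aligned = back + offset          # shift one construction onto the other
          return np.vstack([front, back_aligned])

      front = np.random.rand(100, 3)
      back = np.random.rand(100, 3) + np.array([5.0, 0.0, 0.0])
      merged = align_and_merge(front, back)
      print(merged.shape)  # (200, 3)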
  • Patent number: 10223832
    Abstract: The technology provides contextual personal information through a mixed reality display device system worn by a user. The user inputs person selection criteria, and the display system sends a request for data identifying at least one person in the user's location who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person's place in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: March 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
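    Illustrative sketch: a minimal Python example of filtering people by selection criteria and reporting either an in-view identification or a positional indicator, loosely following this entry's abstract. The Person fields, field-of-view angle, and output strings are assumptions for illustration, not the patented implementation.
      # Illustrative only: apply person selection criteria, then report either
      # "in view" or an identifier plus a position indicator for each match.
      from dataclasses import dataclass

      @dataclass
      class Person:
          name: str
          interests: set
          bearing_deg: float  # direction from the wearer; 0 = straight ahead

      def find_people(people, required_interest, fov_half_angle_deg=30.0):
          results = []
          for p in people:
              if required_interest not in p.interests:
                  continue  # does not satisfy the selection criteria
              in_view = abs(p.bearing_deg) <= fov_half_angle_deg
              results.append((p.name, "in view" if in_view
                              else f"outside view, bearing {p.bearing_deg:+.0f} deg"))
          return results

      people = [Person("Ann", {"hiking"}, 10.0), Person("Bo", {"hiking"}, 120.0)]
      print(find_people(people, "hiking"))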
  • Publication number: 20180225829
    Abstract: To digitize an object, a camera captures images of different sides of the object with color and depth data. At least two different sides of the object are identified from the images, and constructions of those sides are created from the images. Points at which the constructions connect to one another are determined and used to align the constructions, and the constructions are then merged to generate a rendition of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply a skeletal structure to the object, and further optimize the digitization. The rendition of the object can be provided for display as a digital representation of the object and potentially used in different applications (e.g., games, Web, etc.).
    Type: Application
    Filed: April 4, 2018
    Publication date: August 9, 2018
    Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Mike Scavezze, Joel Deaguero, Relja Markovic, Joe Sola, David Haley
  • Patent number: 9953426
    Abstract: Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures images comprising color and depth data for the front and back of the object. For both the front and back images, the point closest to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in the depth data. The depth data is also used to construct point-cloud constructions of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply a skeletal structure to the object, and further optimize the digitization. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.).
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: April 24, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Mike Scavezze, Joel Deaguero, Relja Markovic, Joe Sola, David Haley
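    Illustrative sketch: a minimal Python example of two depth-data steps mentioned in this entry's abstract, finding the point closest to the camera and flagging edges where depth jumps sharply. The array sizes and the 5 cm jump threshold are assumptions for illustration, not the patented implementation.
      # Illustrative only: closest-point and depth-discontinuity edge detection.
      import numpy as np

      def closest_point(depth: np.ndarray):
          """depth: (H, W) distances in meters; 0 means 'no reading'."""
          valid = np.where(depth > 0, depth, np.inf)
          return np.unravel_index(np.argmin(valid), depth.shape)

      def edge_mask(depth: np.ndarray, jump=0.05):
          """Mark pixels whose right/down neighbor differs by more than `jump` m."""
          dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
          dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
          return (dx > jump) | (dy > jump)

      depth = np.full((4, 4), 1.0)
      depth[1:3, 1:3] = 0.6          # an object 40 cm in front of the background
      print(closest_point(depth))    # a pixel on the object
      print(edge_mask(depth).sum())  # count of boundary pixels flagged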
  • Patent number: 9910509
    Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: March 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
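    Illustrative sketch: a minimal Python example of a recognizer engine that runs frame data through a set of gesture filters and maps the best match to a perspective control, following the shape of this entry's abstract. The filter names, predicates, and threshold are assumptions for illustration, not the patented implementation.
      # Illustrative only: each filter corresponds to a gesture; the filter with
      # the highest confidence selects the perspective control to apply.
      class GestureFilter:
          def __init__(self, name, control, predicate):
              self.name, self.control, self.predicate = name, control, predicate

          def confidence(self, frame) -> float:
              return 1.0 if self.predicate(frame) else 0.0

      def choose_perspective(frame, filters, threshold=0.5):
          best = max(filters, key=lambda f: f.confidence(frame))
          return best.control if best.confidence(frame) >= threshold else None

      filters = [
          GestureFilter("lean_left", "pan_left", lambda f: f["lean"] < -0.2),
          GestureFilter("lean_right", "pan_right", lambda f: f["lean"] > 0.2),
      ]
      print(choose_perspective({"lean": 0.3}, filters))  # -> "pan_right"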
  • Patent number: 9848106
    Abstract: Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage medium with the respective scores assigned to the captured photographs.
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: December 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mike Scavezze, Arthur Tomlin, Relja Markovic, Stephen Latta, Kevin Geisner
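    Illustrative sketch: a minimal Python example of scoring captured photographs against an event-based scoring parameter, as described in this entry's abstract. The event names, point values, and data layout are assumptions for illustration, not the patented implementation.
      # Illustrative only: rank photos by how notable their associated event is.
      event_scores = {"goal_scored": 10, "high_jump": 7, "idle": 1}  # scoring parameter

      def score_photos(photos):
          """photos: list of dicts like {"file": ..., "event": ...}.
          Returns (file, score) pairs, highest-scoring photos first."""
          scored = [(p["file"], event_scores.get(p["event"], 0)) for p in photos]
          return sorted(scored, key=lambda pair: pair[1], reverse=True)

      photos = [{"file": "img1.png", "event": "idle"},
                {"file": "img2.png", "event": "goal_scored"}]
      print(score_photos(photos))  # img2.png ranked first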
  • Patent number: 9824480
    Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take-back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. Animations of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: November 21, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
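    Illustrative sketch: a minimal Python example of blending a pre-canned pose into a live, captured pose over a short transition window, in the spirit of the animation blending described in this entry's abstract. The joint names, blend length, and linear weighting are assumptions for illustration, not the patented implementation.
      # Illustrative only: linear blend from a pre-canned pose to a captured pose.
      def blend_pose(canned_pose, live_pose, t, blend_frames=10):
          """Each pose maps joint -> (x, y, z). t counts frames since the
          transition point; weight shifts from the canned pose to the live one."""
          w = min(t / blend_frames, 1.0)
          return {joint: tuple((1 - w) * c + w * l
                               for c, l in zip(canned_pose[joint], live_pose[joint]))
                  for joint in canned_pose}

      canned = {"hand_r": (0.0, 1.0, 0.0)}
      live = {"hand_r": (0.4, 1.6, 0.2)}
      for t in (0, 5, 10):
          print(t, blend_pose(canned, live, t))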
  • Patent number: 9821224
    Abstract: Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: November 21, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Rhett Mathis, Matthew Monson, David Gierok, William Paul Giese, Darrin Brown, Cam McRae, David Seymour, William Axel Olsen, Matthew Searcy
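    Illustrative sketch: a minimal Python example of turning virtual-skeleton joint positions into a driving-simulation input, as this entry's abstract describes. Using two hand joints as an imaginary steering wheel, and the clamp range, are assumptions for illustration, not the patented implementation.
      # Illustrative only: derive a steering angle from two hand joints.
      import math

      def steering_angle(left_hand, right_hand):
          """Hands are (x, y) positions; the tilt of the line between them
          (in degrees) becomes the steering input, clamped to +/- 45 degrees."""
          dx = right_hand[0] - left_hand[0]
          dy = right_hand[1] - left_hand[1]
          angle = math.degrees(math.atan2(dy, dx))
          return max(-45.0, min(45.0, angle))

      print(steering_angle((-0.3, 1.0), (0.3, 1.2)))  # tilted hands -> turn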
  • Patent number: 9761057
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
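    Illustrative sketch: a minimal Python example of providing positional indications for objects outside a user's field of view, following this entry's abstract. The field-of-view angle, object tuples, and hint strings are assumptions for illustration, not the patented implementation.
      # Illustrative only: for objects outside the field of view, produce a
      # simple indication of which way to turn and how far away they are.
      def out_of_view_indicators(objects, fov_half_angle_deg=35.0):
          """objects: list of (name, bearing_deg, distance_m); bearing 0 is
          straight ahead, positive to the right."""
          hints = []
          for name, bearing, distance in objects:
              if abs(bearing) <= fov_half_angle_deg:
                  continue  # already visible, no indicator needed
              side = "right" if bearing > 0 else "left"
              hints.append(f"{name}: turn {side} {abs(bearing):.0f} deg, {distance:.1f} m away")
          return hints

      print(out_of_view_indicators([("door", 120.0, 4.0), ("desk", 10.0, 1.0)]))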
  • Publication number: 20170123505
    Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
    Type: Application
    Filed: November 15, 2016
    Publication date: May 4, 2017
    Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
  • Publication number: 20170069143
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Application
    Filed: November 21, 2016
    Publication date: March 9, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
  • Publication number: 20170069125
    Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take-back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. Animations of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
    Type: Application
    Filed: September 16, 2016
    Publication date: March 9, 2017
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
  • Publication number: 20160378197
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
    Type: Application
    Filed: September 8, 2016
    Publication date: December 29, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
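    Illustrative sketch: a minimal Python example of translating the relative position of a hand joint into a gestured control for a three-dimensional virtual world, as this entry's abstract describes. The shoulder reference joint and dead-zone size are assumptions for illustration, not the patented implementation.
      # Illustrative only: hand position relative to the shoulder becomes a
      # (dx, dy, dz) control vector, with a small dead zone ignored.
      def hand_control(hand, shoulder, dead_zone=0.05):
          """hand, shoulder: (x, y, z) joint positions in meters."""
          offset = tuple(h - s for h, s in zip(hand, shoulder))
          return tuple(o if abs(o) > dead_zone else 0.0 for o in offset)

      # Hand 30 cm to the right of and 20 cm above the shoulder -> move right/up.
      print(hand_control((0.5, 1.6, 2.0), (0.2, 1.4, 2.0)))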
  • Patent number: 9524024
    Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
    Type: Grant
    Filed: January 21, 2014
    Date of Patent: December 20, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
  • Patent number: 9519828
    Abstract: A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of the captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternatively, the isolated aspect may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registration of a motion with a gesture filter.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: December 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gregory Nelson Snook, Relja Markovic, Stephen Gilchrist Latta, Kevin Geisner
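    Illustrative sketch: a minimal Python example of compensating a gesture filter for a user's natural lean so that only the isolated arm motion decides whether the gesture registers, in the spirit of this entry's abstract. The angle names and thresholds are assumptions for illustration, not the patented implementation.
      # Illustrative only: subtract the user's baseline lean, then test only the
      # isolated right-arm angle against the gesture filter.
      def registers_right_arm_raise(torso_lean_deg, arm_angle_deg,
                                    baseline_lean_deg, threshold_deg=60.0):
          corrected_lean = torso_lean_deg - baseline_lean_deg
          if abs(corrected_lean) > 20.0:        # a deliberate lean: excluded motion
              return False
          return arm_angle_deg >= threshold_deg  # isolated aspect: the arm alone

      # A user who always leans 8 degrees still registers the arm raise.
      print(registers_right_arm_raise(torso_lean_deg=8.0, arm_angle_deg=75.0,
                                      baseline_lean_deg=8.0))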
  • Patent number: 9501873
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Grant
    Filed: July 22, 2015
    Date of Patent: November 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
  • Patent number: 9498718
    Abstract: Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. The input may be input into a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed for detecting a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data for determining an intended gesture input for the user. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are input into the game or application and cause a view perspective within the display environment to be altered.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: November 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Gregory N. Snook, Justin McBride, Arthur Charles Tomlin, Peter Sarrett, Kevin Geisner, Relja Markovic, Christopher Vuchetich
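    Illustrative sketch: a minimal Python example of comparing a captured gesture to stored gesture data and applying the view-perspective change of the closest match, following this entry's abstract. The stored templates, distance measure, and view-change labels are assumptions for illustration, not the patented implementation.
      # Illustrative only: match a captured trajectory to stored gesture data.
      def closest_gesture(captured, stored):
          """captured: list of (x, y) samples; stored: dict name -> (samples,
          view change). Uses a naive summed point distance for comparison."""
          def distance(a, b):
              return sum(abs(ax - bx) + abs(ay - by)
                         for (ax, ay), (bx, by) in zip(a, b))
          name = min(stored, key=lambda n: distance(captured, stored[n][0]))
          return name, stored[name][1]

      stored = {"swipe_left": ([(0, 0), (-1, 0), (-2, 0)], "rotate view left"),
                "swipe_up": ([(0, 0), (0, 1), (0, 2)], "tilt view up")}
      print(closest_gesture([(0, 0), (-1, 0.1), (-2, 0.1)], stored))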
  • Patent number: 9489053
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: November 8, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
  • Patent number: 9478057
    Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take-back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. Animations of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
    Type: Grant
    Filed: February 9, 2015
    Date of Patent: October 25, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta