Patents by Inventor Relja Markovic

Relja Markovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9400559
    Abstract: Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system and is used as input to control the system. A system-recognized gesture may have both a full version and a shortcut version. When the system recognizes that either the full version or the shortcut of the gesture has been performed, it sends the corresponding application an indication that the gesture was observed. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version are recognized as the user performs the full version, the system recognizes only a single performance of the gesture and indicates this to the application.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: July 26, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Kevin Geisner, John Clavin, Kudo Tsunoda, Kathryn Stone Perez, Alex Kipman, Relja Markovic, Gregory N. Snook
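The deduplication behavior this abstract describes — reporting one gesture even when both the shortcut and the full version fire for the same movement — can be sketched as follows. This is an illustrative toy, not the patented implementation; the class, the frame-based cooldown, and all names are my own assumptions.

```python
# Illustrative sketch (not the patented implementation): report at most one
# gesture performance when a shortcut and its full version fire together.

class GestureRecognizer:
    """Recognizes a full gesture and its shortcut, indicating a single
    performance when the shortcut is a subset of the full gesture."""

    def __init__(self, cooldown_frames=30):
        self.cooldown_frames = cooldown_frames
        self._frames_since_report = cooldown_frames  # ready to report

    def update(self, shortcut_seen, full_seen):
        """Return True exactly once per performance, even if both the
        shortcut and the full gesture are recognized for the same motion."""
        self._frames_since_report += 1
        if (shortcut_seen or full_seen) and self._frames_since_report >= self.cooldown_frames:
            self._frames_since_report = 0
            return True  # one indication sent to the application
        return False

recognizer = GestureRecognizer(cooldown_frames=3)
# Frame 0: the shortcut completes; frame 1: the full gesture completes
# (same physical motion), so only the first frame reports a gesture.
events = [recognizer.update(True, False), recognizer.update(False, True)]
print(events)  # [True, False]
```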
  • Patent number: 9383823
    Abstract: Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: July 5, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Stephen Latta, Relja Markovic, Gregory N. Snook
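The fusion of user-position data with non-position data that this abstract describes might look like the toy below. The joints, the controller-button set, and the "throw" rule are illustrative assumptions, not claims from the patent.

```python
# Hypothetical sketch of combining body-position data with non-position
# data (here, a controller trigger) to determine the user's inputs.

def fuse_inputs(skeleton, controller_buttons):
    """Derive inputs from user-position data plus controller state.

    skeleton: dict of joint name -> (x, y) position
    controller_buttons: set of currently pressed button names
    """
    inputs = []
    # Position-only input: the right hand is raised above the head.
    if skeleton.get("right_hand", (0, 0))[1] > skeleton.get("head", (0, 0))[1]:
        inputs.append("hand_raised")
    # Combined input: a raised hand *while* the trigger is held means "throw".
    if "hand_raised" in inputs and "trigger" in controller_buttons:
        inputs.append("throw")
    return inputs

print(fuse_inputs({"head": (0, 1.7), "right_hand": (0.3, 1.9)}, {"trigger"}))
# ['hand_raised', 'throw']
```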
  • Patent number: 9377857
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
    Type: Grant
    Filed: May 1, 2009
    Date of Patent: June 28, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Darren Bennett
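The "deltas between the user's actual position and the ideal gesture position" mentioned in the abstract could be computed per joint as in this minimal sketch; the joint dictionary format is my own assumption.

```python
# Illustrative only: per-joint offsets between a user's observed pose and
# an ideal instructional pose, for driving corrective visual feedback.

def pose_deltas(actual, ideal):
    """Return joint -> (dx, dy) offsets from the ideal gesture position."""
    return {joint: (ideal[joint][0] - actual[joint][0],
                    ideal[joint][1] - actual[joint][1])
            for joint in ideal if joint in actual}

deltas = pose_deltas({"elbow": (0.2, 1.0)}, {"elbow": (0.5, 1.2)})
print(deltas)  # the elbow should move roughly +0.3 in x and +0.2 in y
```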
  • Patent number: 9369543
    Abstract: Synchronous and asynchronous communications between avatars are allowed. For synchronous communications, when multiple users are playing different games of the same game title and when the avatars of the multiple users are at the same location in their respective games they can communicate with one another, thus allowing the users of those avatars to communicate with one another. For asynchronous communications, an avatar of a particular user is left behind at a particular location in a game along with a recorded communication. When other users of other games are at that particular location, the avatar of that particular user is displayed and the recorded communication is presented to the other users.
    Type: Grant
    Filed: May 27, 2011
    Date of Patent: June 14, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian Scott Murphy, Stephen G. Latta, Darren Alexander Bennett, Pedro Perez, Shawn C. Wright, Relja Markovic, Joel B. Deaguero, Christopher H. Willoughby, Ryan Lucas Hastings, Kevin Geisner
  • Patent number: 9336625
    Abstract: Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. The object is recognized and digitized using color and depth data of the image. The user's client queries a server managing images uploaded by other users for virtual renditions of the object, as recognized in the other images. The virtual renditions from the other images are merged with the digitized version of the object in the image captured by the user to create a composite rendition of the object.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: May 10, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Joel Deaguero, Relja Markovic
  • Patent number: 9298263
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
    Type: Grant
    Filed: October 27, 2010
    Date of Patent: March 29, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Darren Bennett
  • Publication number: 20160086382
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in the user's location who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person's location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 24, 2016
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
  • Patent number: 9292083
    Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
    Type: Grant
    Filed: May 29, 2014
    Date of Patent: March 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
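The step of "mapping a physical space in front of the person to a screen space of a display device" is, at its simplest, a linear coordinate transform. The sketch below is my own minimal rendering of that idea; the region and screen dimensions are illustrative.

```python
# A minimal sketch (names and mapping are mine) of mapping a physical
# region in front of a person to screen-space pixel coordinates.

def map_to_screen(point, region_origin, region_size, screen_size):
    """Linearly map a physical (x, y) point inside a tracked region to pixels."""
    px = (point[0] - region_origin[0]) / region_size[0] * screen_size[0]
    py = (point[1] - region_origin[1]) / region_size[1] * screen_size[1]
    # Clamp so the avatar's hand cannot leave the screen.
    return (min(max(px, 0), screen_size[0]), min(max(py, 0), screen_size[1]))

# A hand at the center of a 1 m x 1 m region maps to the screen center.
print(map_to_screen((0.5, 0.5), (0.0, 0.0), (1.0, 1.0), (1920, 1080)))
# (960.0, 540.0)
```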
  • Patent number: 9288468
    Abstract: Techniques are provided for viewing windows for video streams. A video stream from a video capture device is accessed. Data that describes movement or position of a person is accessed. A viewing window is placed in the video stream based on the data that describes movement or position of the person. The viewing window is provided to a display device in accordance with the placement of the viewing window in the video stream. Motion sensors can detect motion of the person carrying the video capture device in order to dampen the motion such that the video on the remote display does not suffer from motion artifacts. Sensors can also track the eye gaze of either the person carrying the mobile video capture device or the remote display device to enable control of the spatial region of the video stream shown at the display device.
    Type: Grant
    Filed: June 29, 2011
    Date of Patent: March 15, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Benjamin I. Vaught, Alex Aben-Athar Kipman, Michael J. Scavezze, Arthur C. Tomlin, Relja Markovic, Darren Bennett, Stephen G. Latta
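The motion dampening the abstract mentions — keeping the remote display free of motion artifacts — is often done with a simple smoothing filter. This sketch uses exponential smoothing as one plausible approach; the filter constant and names are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: damping the motion of a viewing window placed in a video
# stream so it does not inherit the camera carrier's jitter.

def smooth_positions(raw, alpha=0.3):
    """Exponentially smooth a sequence of window x-positions.

    Lower alpha trusts the history more and damps jitter harder.
    """
    smoothed = []
    current = raw[0]
    for x in raw:
        current = alpha * x + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

jittery = [100, 130, 90, 120]
print(smooth_positions(jittery))  # spikes are pulled toward the running average
```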
  • Patent number: 9280203
    Abstract: Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture that may then be tuned by an application receiving information from the gesture recognizer so that the specific parameters of the gesture—such as an arm acceleration for a throwing gesture—may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.
    Type: Grant
    Filed: August 2, 2011
    Date of Patent: March 8, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Relja Markovic, Arthur Charles Tomlin, Gregory N. Snook
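The filter-per-gesture architecture in this abstract can be sketched as an engine that dispatches motion data to tunable filters, each returning a confidence level. Everything below — class names, the arm-speed parameter, the confidence formula — is my own illustrative construction, not the patented design.

```python
# Illustrative sketch of a recognizer engine feeding motion data to
# per-gesture filters with application-tunable parameters.

class ThrowFilter:
    def __init__(self, min_arm_speed=2.0):  # tunable per application
        self.min_arm_speed = min_arm_speed

    def evaluate(self, motion):
        """Return a 0..1 confidence that a throw gesture occurred."""
        speed = motion.get("arm_speed", 0.0)
        return min(speed / self.min_arm_speed, 1.0)

class RecognizerEngine:
    def __init__(self):
        self.filters = {}

    def register(self, name, gesture_filter):
        self.filters[name] = gesture_filter

    def process(self, motion):
        """Run all registered filters and report each one's confidence."""
        return {name: f.evaluate(motion) for name, f in self.filters.items()}

engine = RecognizerEngine()
engine.register("throw", ThrowFilter(min_arm_speed=3.0))  # app-tuned threshold
print(engine.process({"arm_speed": 1.5}))  # {'throw': 0.5}
```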
  • Patent number: 9256282
    Abstract: Systems, methods and computer readable media are disclosed for manipulating virtual objects. A user may utilize a controller, such as his hand, in physical space to associate with a cursor in a virtual environment. As the user manipulates the controller in physical space, this is captured by a depth camera. The image data from the depth camera is parsed to determine how the controller is manipulated, and a corresponding manipulation of the cursor is performed in virtual space. Where the cursor interacts with a virtual object in the virtual space, that virtual object is manipulated by the cursor.
    Type: Grant
    Filed: March 20, 2009
    Date of Patent: February 9, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Kevin Geisner, Relja Markovic, Darren Alexander Bennett, Arthur Charles Tomlin
  • Publication number: 20150379719
    Abstract: Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. For both front and back images, the closest point to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in depth data. The depth data is also used to construct point cloud constructions of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply skeletal structure to the object, and optimize the digitization further. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.).
    Type: Application
    Filed: September 3, 2015
    Publication date: December 31, 2015
    Inventors: Jeff Evertt, Justin Clark, Christopher Harley Willoughby, Mike Scavezze, Joel Deaguero, Relja Markovic, Joe Sola, David Haley
  • Patent number: 9208571
    Abstract: Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. For both front and back images, the closest point to the camera is determined by analyzing the depth data. From the closest points, edges of the object are found by noting large differences in depth data. The depth data is also used to construct point cloud constructions of the front and back of the object. Various techniques are applied to extrapolate edges, remove seams, extend color intelligently, filter noise, apply skeletal structure to the object, and optimize the digitization further. Eventually, a digital representation is presented to the user and potentially used in different applications (e.g., games, Web, etc.).
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: December 8, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Mike Scavezze, Joel Deaguero, Relja Markovic, Joe Sola, David Haley
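Two of the steps this abstract names — finding the point closest to the camera and locating object edges via large jumps in depth — can be illustrated on a single row of depth values. This is a toy of my own construction, far simpler than a real digitization pipeline.

```python
# Toy sketch: nearest-point and edge detection on one row of a depth map.

def closest_point(depth_row):
    """Index of the smallest depth value, i.e., the point nearest the camera."""
    return min(range(len(depth_row)), key=lambda i: depth_row[i])

def edges(depth_row, jump=0.5):
    """Indices where adjacent depths differ by more than `jump` meters,
    marking likely object boundaries."""
    return [i for i in range(1, len(depth_row))
            if abs(depth_row[i] - depth_row[i - 1]) > jump]

row = [3.0, 3.0, 1.2, 1.1, 1.2, 3.0]  # an object ~1.1 m away on a 3 m background
print(closest_point(row))  # 3
print(edges(row))          # [2, 5] -- the object's left and right boundaries
```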
  • Patent number: 9195305
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: November 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Jonathan T Steed, Darren A Bennett, Amos D Vance
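A probabilistic engagement measure combining location, stance, and movement, as the abstract describes, might be blended as a weighted score. The cues, weights, and scoring function below are entirely my own illustrative assumptions.

```python
# Hedged sketch: a single 0..1 engagement measure from three sensed cues.

def engagement_score(distance_from_center, facing_camera, moving_toward):
    """Blend location, stance, and movement into an intent-to-engage measure."""
    location = max(0.0, 1.0 - distance_from_center)  # closer to center is better
    stance = 1.0 if facing_camera else 0.2           # facing the camera signals intent
    movement = 1.0 if moving_toward else 0.5         # approaching beats standing still
    return 0.4 * location + 0.4 * stance + 0.2 * movement

# A centered person facing the camera and walking toward it scores highest;
# a distant person facing away scores low.
print(engagement_score(0.0, True, True))
print(engagement_score(0.9, False, False))
```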
  • Publication number: 20150325054
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Application
    Filed: July 22, 2015
    Publication date: November 12, 2015
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
  • Patent number: 9153195
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in the user's location who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person's location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: October 6, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
  • Patent number: 9129430
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: September 8, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
  • Patent number: 9098873
    Abstract: An on-screen shopping application which reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and the user's movements in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application.
    Type: Grant
    Filed: April 1, 2010
    Date of Patent: August 4, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
  • Publication number: 20150212585
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
    Type: Application
    Filed: February 26, 2015
    Publication date: July 30, 2015
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
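Translating "a relative position of a hand joint" into a gestured world control, as this abstract describes, could look like the sketch below. The hand-relative-to-shoulder mapping and the command names are illustrative assumptions of mine.

```python
# Minimal sketch: translate a hand joint's offset from the shoulder joint
# into a control command for a three-dimensional virtual world.

def gestured_control(hand, shoulder, threshold=0.2):
    """Map the hand's (x, y) offset from the shoulder to a world command."""
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    if dy > threshold:
        return "raise_camera"  # hand held high
    if dx > threshold:
        return "pan_right"     # hand extended to the side
    if dx < -threshold:
        return "pan_left"
    return "idle"

print(gestured_control(hand=(0.5, 1.4), shoulder=(0.1, 1.4)))  # pan_right
```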
  • Patent number: 9069381
    Abstract: A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score or changing an environmental condition of a video game.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: June 30, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
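The abstract's core loop — change a runtime condition when the sensed group performs a predetermined action — reduces to a simple check. The trigger action and the score change below are illustrative examples of my own, matching the abstract's "changing a score" case.

```python
# Illustrative sketch: a group of bystanders (not actively engaged) performs
# a predetermined action, which changes a runtime condition of the game.

def update_game(score, group_actions, predetermined="wave"):
    """Bump the score whenever the sensed group performs the trigger action."""
    if predetermined in group_actions:
        return score + 10  # example runtime change: adjust the score
    return score

print(update_game(0, {"cheer", "wave"}))  # 10: the crowd waved
print(update_game(0, {"cheer"}))          # 0: trigger action not observed
```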