Patents by Inventor Relja Markovic
Relja Markovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8782567
Abstract: Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture that may then be tuned by an application receiving information from the gesture recognizer, so that the specific parameters of the gesture, such as an arm acceleration for a throwing gesture, may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.
Type: Grant
Filed: November 4, 2011
Date of Patent: July 15, 2014
Assignee: Microsoft Corporation
Inventors: Stephen G. Latta, Relja Markovic, Arthur Charles Tomlin, Gregory N. Snook
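The filter-based recognizer architecture this abstract describes can be sketched roughly as follows. This is a minimal illustration, not code from the patent; the names `GestureFilter`, `RecognizerEngine`, and the `min_acceleration` parameter are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class GestureFilter:
    """One filter corresponds to one gesture; its parameters are
    tunable per application (hypothetical illustration)."""
    name: str
    params: dict = field(default_factory=dict)

    def evaluate(self, accelerations):
        """Return a confidence in [0, 1] that the gesture occurred."""
        threshold = self.params.get("min_acceleration", 1.0)
        peak = max(accelerations, default=0.0)
        return min(1.0, peak / threshold)

class RecognizerEngine:
    """Receives user motion data and fans it out to every filter."""
    def __init__(self):
        self.filters = []

    def register(self, gesture_filter):
        self.filters.append(gesture_filter)

    def process(self, accelerations):
        # Each filter reports its own confidence for the motion data.
        return {f.name: f.evaluate(accelerations) for f in self.filters}

engine = RecognizerEngine()
# Tuned for a "throw": require a peak arm acceleration of 5.0 units.
engine.register(GestureFilter("throw", {"min_acceleration": 5.0}))
confidences = engine.process([1.0, 2.5, 4.0])  # peak 4.0 -> confidence 0.8
```

Because each application registers its own filters with its own parameters, the same gesture can be tuned differently per application, matching the per-application tuning the abstract calls out.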
-
Publication number: 20140168075
Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
Type: Application
Filed: January 21, 2014
Publication date: June 19, 2014
Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
-
Patent number: 8749557
Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
Type: Grant
Filed: June 11, 2010
Date of Patent: June 10, 2014
Assignee: Microsoft Corporation
Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
-
Patent number: 8696461
Abstract: A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and, when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criterion.
Type: Grant
Filed: June 1, 2011
Date of Patent: April 15, 2014
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen Latta, Kevin Geisner, A. Dylan Vance, Brian Scott Murphy, Matt Coohill
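The selection rule in this abstract (prefer a candidate found via an observer's identity over a non-candidate, provided the candidate satisfies the matching criterion) can be sketched as below. The function and player names are hypothetical, and the predicate stands in for whatever matching criterion an implementation would use:

```python
def select_remote_participant(candidates, fallback_pool, matches):
    """Choose a remote participant, preferring candidates derived from
    an observer's identity over non-candidates (hypothetical sketch).
    `matches` is a predicate standing in for the matching criterion."""
    for candidate in candidates:
        if matches(candidate):
            return candidate
    # No candidate satisfied the criterion: fall back to anyone available.
    return fallback_pool[0] if fallback_pool else None

pick = select_remote_participant(
    candidates=["observer_friend_1", "observer_friend_2"],
    fallback_pool=["stranger_7"],
    matches=lambda player: player.endswith("_2"),
)  # -> "observer_friend_2"
```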
-
Patent number: 8578302
Abstract: Systems, methods and computer readable media are disclosed for a gesture recognizer system architecture. A recognizer engine is provided, which receives user motion data and provides that data to a plurality of filters. A filter corresponds to a gesture that may then be tuned by an application receiving information from the gesture recognizer, so that the specific parameters of the gesture, such as an arm acceleration for a throwing gesture, may be set on a per-application level, or multiple times within a single application. Each filter may output to an application using it a confidence level that the corresponding gesture occurred, as well as further details about the user motion data.
Type: Grant
Filed: June 6, 2011
Date of Patent: November 5, 2013
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Gregory N. Snook, Justin McBride
-
Patent number: 8487938
Abstract: Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value.
Type: Grant
Filed: February 23, 2009
Date of Patent: July 16, 2013
Assignee: Microsoft Corporation
Inventors: Stephen G. Latta, Kudo Tsunoda, Kevin Geisner, Relja Markovic, Darren Alexander Bennett
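The parameter interdependence described here (setting one base value propagates derived values to dependent parameters across the package) can be sketched as follows. `GesturePackage` and the throw/catch speed relationship are hypothetical examples, not taken from the patent:

```python
class GesturePackage:
    """Complementary gestures with interrelated parameters: setting one
    base value derives every dependent parameter from it (hypothetical)."""
    def __init__(self, dependencies):
        # dependencies: dependent parameter -> function of the base value
        self.dependencies = dependencies
        self.params = {}

    def set_base(self, name, value):
        self.params[name] = value
        # Propagate the first value to every parameter that depends on it.
        for dependent, derive in self.dependencies.items():
            self.params[dependent] = derive(value)

# Illustrative dependency: a faster "throw" implies a faster expected "catch".
package = GesturePackage({"catch_speed": lambda throw_speed: throw_speed * 0.5})
package.set_base("throw_speed", 10.0)
```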
-
Publication number: 20130177296
Abstract: A system and method for efficiently managing life experiences captured by one or more sensors (e.g., video or still camera, image sensors including RGB sensors and depth sensors). A "life recorder" is a recording device that continuously captures life experiences, including unanticipated life experiences, in image, video, and/or audio recordings. In some embodiments, video and/or audio recordings captured by a life recorder are automatically analyzed, tagged with a set of one or more metadata, indexed, and stored for future use. By tagging and indexing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later.
Type: Application
Filed: November 29, 2012
Publication date: July 11, 2013
Inventors: Kevin A. Geisner, Relja Markovic, Stephen G. Latta, Daniel McCulloch
-
Patent number: 8465108
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Grant
Filed: September 5, 2012
Date of Patent: June 18, 2013
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
-
Publication number: 20130135180
Abstract: Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.
Type: Application
Filed: November 30, 2011
Publication date: May 30, 2013
Inventors: Daniel McCulloch, Stephen Latta, Darren Bennett, Ryan Hastings, Jason Scott, Relja Markovic, Kevin Geisner, Jonathan Steed
-
Patent number: 8451278
Abstract: It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like.
Type: Grant
Filed: August 3, 2012
Date of Patent: May 28, 2013
Assignee: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
-
Patent number: 8448094
Abstract: Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.
Type: Grant
Filed: March 25, 2009
Date of Patent: May 21, 2013
Assignee: Microsoft Corporation
Inventors: Alex Kipman, R. Stephen Polzin, Kudo Tsunoda, Darren Bennett, Stephen Latta, Mark Finocchio, Gregory G. Snook, Relja Markovic
-
Publication number: 20130100119
Abstract: Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. The object is recognized and digitized using color and depth data of the image. The user's client queries a server managing images uploaded by other users for virtual renditions of the object, as recognized in the other images. The virtual renditions from the other images are merged with the digitized version of the object in the image captured by the user to create a composite rendition of the object.
Type: Application
Filed: October 25, 2011
Publication date: April 25, 2013
Applicant: Microsoft Corporation
Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Joel Deaguero, Relja Markovic
-
Patent number: 8385596
Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.
Type: Grant
Filed: December 21, 2010
Date of Patent: February 26, 2013
Assignee: Microsoft Corporation
Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Greg Snook, Christopher H. Willoughby, Peter Sarrett, Daniel Lee Osborn
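Translating a hand joint's relative position into an aiming vector, as this abstract describes, amounts to normalizing the offset between two joints. A minimal sketch, assuming the reference joint is the shoulder (the patent does not specify which joint the hand is measured against):

```python
import math

def aim_vector(reference_joint, hand_joint):
    """Turn the hand joint's position relative to a reference joint
    into a unit aiming vector (hypothetical sketch)."""
    dx, dy, dz = (h - r for h, r in zip(hand_joint, reference_joint))
    magnitude = math.sqrt(dx * dx + dy * dy + dz * dz)
    if magnitude == 0.0:
        return (0.0, 0.0, 0.0)  # degenerate pose: no aim
    return (dx / magnitude, dy / magnitude, dz / magnitude)

# Hand 3 units to the right of and 4 units in front of the reference joint:
direction = aim_vector((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))
```

"Aimed in proportion to" the control then just means the virtual weapon's orientation follows this unit vector frame to frame.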
-
Publication number: 20130044130
Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; an identifier and a position indicator of the person in the location is output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Application
Filed: January 30, 2012
Publication date: February 21, 2013
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Publication number: 20130022235
Abstract: Interactive secret sharing includes receiving video data from a source and interpreting the video data to track an observed path of a device. In addition, position information is received from the device, and the position information is interpreted to track a self-reported path of the device. If the observed path is within a threshold tolerance of the self-reported path, access is provided to a restricted resource.
Type: Application
Filed: July 22, 2011
Publication date: January 24, 2013
Applicant: Microsoft Corporation
Inventors: Bradley Robert Pettit, Eric Soldan, Relja Markovic
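The core check here (observed path within a threshold tolerance of the self-reported path) can be sketched as a per-sample distance comparison. This is one plausible reading of "within a threshold tolerance"; the publication does not pin down the distance measure, so pointwise Euclidean distance below is an assumption:

```python
import math

def paths_match(observed, self_reported, tolerance):
    """Access-control check: every self-reported 2-D sample must lie
    within `tolerance` of the corresponding observed sample
    (hypothetical sketch; pointwise Euclidean distance assumed)."""
    if len(observed) != len(self_reported):
        return False
    return all(
        math.hypot(ox - rx, oy - ry) <= tolerance
        for (ox, oy), (rx, ry) in zip(observed, self_reported)
    )

honest = paths_match([(0, 0), (1, 1)], [(0.1, 0.0), (1.0, 1.1)], tolerance=0.5)
spoofed = paths_match([(0, 0), (1, 1)], [(5.0, 5.0), (6.0, 6.0)], tolerance=0.5)
```

Only when the check passes would the system grant access to the restricted resource.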
-
Publication number: 20130013093
Abstract: One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected physical characteristics, two or more of the multiple users are identified to share an online experience (e.g., play a multi-player game).
Type: Application
Filed: September 14, 2012
Publication date: January 10, 2013
Applicant: Microsoft Corporation
Inventors: Brian Scott Murphy, Stephen G. Latta, Darren Alexander Bennett, Pedro Perez, Shawn C. Wright, Relja Markovic, Joel B. Deaguero, Christopher H. Willoughby, Ryan Lucas Hastings, Kevin Geisner
-
Publication number: 20130007013
Abstract: Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking.
Type: Application
Filed: June 30, 2011
Publication date: January 3, 2013
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Latta
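The ranking step this abstract describes (order other users by the magnitude of attribute differences, then return the most dissimilar ones) can be sketched as below. The attribute names and the summed absolute difference are illustrative assumptions; the publication only says the ranking is based on a magnitude of difference:

```python
def negative_matches(user, others, attributes, top_n=2):
    """Rank other users by total magnitude of attribute differences,
    most dissimilar first (hypothetical sketch of negative matching)."""
    def dissimilarity(other):
        return sum(abs(user[a] - other[a]) for a in attributes)
    ranked = sorted(others, key=dissimilarity, reverse=True)
    return ranked[:top_n]

me = {"name": "a", "skill": 10, "age": 30}
pool = [
    {"name": "b", "skill": 11, "age": 31},  # very similar to me
    {"name": "c", "skill": 90, "age": 70},  # very different
    {"name": "d", "skill": 50, "age": 55},  # moderately different
]
most_unlike = negative_matches(me, pool, ["skill", "age"])
```

Returning only the head of the reversed ranking is what excludes the more positively matched users, as the abstract requires.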
-
Publication number: 20130002813
Abstract: Techniques are provided for viewing windows for video streams. A video stream from a video capture device is accessed. Data that describes movement or position of a person is accessed. A viewing window is placed in the video stream based on the data that describes movement or position of the person. The viewing window is provided to a display device in accordance with the placement of the viewing window in the video stream. Motion sensors can detect motion of the person carrying the video capture device in order to dampen the motion such that the video on the remote display does not suffer from motion artifacts. Sensors can also track the eye gaze of either the person carrying the mobile video capture device or the remote display device to enable control of the spatial region of the video stream shown at the display device.
Type: Application
Filed: June 29, 2011
Publication date: January 3, 2013
Inventors: Benjamin I. Vaught, Alex Aben-Athar Kipman, Michael J. Scavezze, Arthur C. Tomlin, Relja Markovic, Darren Bennett, Stephen G. Latta
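One common way to dampen carrier motion so the remote display avoids motion artifacts, as described above, is to low-pass filter the viewing-window position. The exponential smoothing below is an assumption for illustration; the publication does not specify the filtering technique:

```python
def dampen(samples, alpha=0.2):
    """Exponentially smooth a sequence of 1-D viewing-window positions
    so the remote display does not inherit the carrier's jitter
    (hypothetical sketch; smoothing method assumed)."""
    smoothed = []
    current = samples[0]
    for sample in samples:
        # Low alpha -> heavy damping; high alpha -> responsive window.
        current = alpha * sample + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

positions = dampen([0.0, 10.0, 10.0])  # an abrupt jump to 10 is eased in
```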
-
Publication number: 20120326976
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Application
Filed: September 5, 2012
Publication date: December 27, 2012
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
-
Patent number: 8334842
Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
Type: Grant
Filed: January 15, 2010
Date of Patent: December 18, 2012
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Jonathan T Steed, Darren A Bennett, Amos D Vance