Patents by Inventor Dan Osborn
Dan Osborn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160379409
Abstract: A method is disclosed that may include, in a creating phase: receiving an instruction to generate a virtual place-located anchor at a virtual location that is world-locked; receiving data items from a target data source; linking a subset of the data items to the virtual place-located anchor; and receiving a permission from a first user specifying a condition under which a second user may view one or more holograms of the subset of data items. In a viewing phase, first display data may be transmitted to cause a first display device to display the holograms to the first user at the virtual place-located anchor; and if the condition is satisfied, second display data may be transmitted to cause a second display device to display the holograms to the second user at the virtual place-located anchor.
Type: Application
Filed: June 24, 2015
Publication date: December 29, 2016
Inventors: Anatolie Gavriliuc, Dan Osborn, Stephen Heijster, Hongwu Huai
-
Patent number: 9530426
Abstract: A conferencing system includes a near-eye display device that displays video received from a remote communication device of a communication partner. An audio stream is transmitted to the remote communication device. The audio stream includes real-world sounds produced by one or more real-world audio sources captured by a spatially-diverse microphone array and virtual sounds produced by one or more virtual audio sources. A relative volume of background sounds in the audio stream is selectively reduced based, at least in part, on real-world positioning of corresponding audio sources, including real-world and/or virtualized audio sources.
Type: Grant
Filed: June 24, 2015
Date of Patent: December 27, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shawn Crispin Wright, Dan Osborn, Joe Thompson, Hongwu Huai, Forest Woodcroft Gouin, Megan Saunders
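The position-based volume reduction described in this abstract can be illustrated with a toy distance model. This is a hypothetical sketch, not the patented method: `attenuate_background`, the `focus` point, and the `radius` cutoff are invented names, and a real spatial-audio pipeline would weigh many more factors.

```python
def attenuate_background(sources, focus, radius=1.5):
    """Toy model: keep sources within `radius` of the focus point at
    full volume; scale more distant (background) sources down in
    proportion to their distance."""
    attenuated = []
    for position, gain in sources:
        distance = sum((p - f) ** 2 for p, f in zip(position, focus)) ** 0.5
        attenuated.append(gain if distance <= radius else gain * radius / distance)
    return attenuated

# One nearby (foreground) source and one distant (background) source.
levels = attenuate_background(
    [((0.0, 0.0, 1.0), 1.0), ((0.0, 0.0, 3.0), 1.0)],
    focus=(0.0, 0.0, 0.0),
)
```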
-
Publication number: 20160371886
Abstract: A method for operating a computing device is described herein. The method includes determining a user's gaze direction based on a gaze input, determining an intersection between the user's gaze direction and an identified environmental surface in a 3-dimensional environment, and generating a drawing surface based on the intersection within a user interface on a display.
Type: Application
Filed: June 22, 2015
Publication date: December 22, 2016
Inventors: Joe Thompson, Dan Osborn, Tarek Hefny, Stephen G. Latta, Forest Woodcroft Gouin, James Nakashima, Megan Saunders, Anatolie Gavriliuc, Alberto E. Cerriteno, Shawn Crispin Wright
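The gaze-to-surface intersection step can be sketched as a standard ray-plane test, assuming the environmental surface is locally planar. The function name and parameters below are illustrative, not from the patent.

```python
import numpy as np

def gaze_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where a gaze ray meets a planar surface, or None
    if the ray is parallel to the plane or the surface is behind the viewer."""
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-6:
        return None  # gaze is parallel to the surface
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # surface is behind the viewer
    return origin + t * direction

# A wall at z = 2 facing the viewer; gaze straight ahead from the origin.
hit = gaze_plane_intersection(
    origin=np.array([0.0, 0.0, 0.0]),
    direction=np.array([0.0, 0.0, 1.0]),
    plane_point=np.array([0.0, 0.0, 2.0]),
    plane_normal=np.array([0.0, 0.0, -1.0]),
)
```

The returned point would then anchor the generated drawing surface in the user interface.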
-
Publication number: 20160371885
Abstract: Examples are disclosed herein that relate to sharing of depth-referenced markup in image data. One example provides, on a computing device, a method comprising receiving image data of a real world scene and depth data of the real world scene. The method further includes displaying the image data, receiving an input of a markup to the image data, and associating the markup with a three-dimensional location in the real world scene based on the depth data. The method further comprises sending the markup and the three-dimensional location associated with the markup to another device.
Type: Application
Filed: June 22, 2015
Publication date: December 22, 2016
Inventors: Anatolie Gavriliuc, Dan Osborn, Steve Heijster, Hongwu Huai, Albert Robles, Nicolas Gauvin
-
Publication number: 20160363767
Abstract: A method for displaying holograms may include displaying an initial hologram via a display device comprising an at least partially see-through display, the initial hologram located on a virtual surface at an initial virtual location. Subsequently, an instruction is received to display a subsequent hologram on the virtual surface at a subsequent virtual location. Collision detection is performed to determine that the subsequent hologram would collide with the initial hologram. In response, the subsequent hologram is displayed at an adjusted virtual location that is closer to the display device than the initial virtual location of the initial hologram.
Type: Application
Filed: June 10, 2015
Publication date: December 15, 2016
Inventors: Dan Osborn, Anatolie Gavriliuc, Stephen Heijster
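The collide-then-adjust behavior can be sketched with axis-aligned bounding boxes. Everything here is a minimal illustration, not the patented implementation: `boxes_collide`, `place_hologram`, and the fixed `step` are hypothetical choices, with the viewer assumed to be toward smaller z.

```python
def boxes_collide(a, b):
    """Axis-aligned bounding boxes given as (min_xyz, max_xyz) tuples;
    touching faces count as a collision here."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def place_hologram(new_box, existing_boxes, step=0.5):
    """Pull the new hologram toward the viewer (smaller z) until it no
    longer collides with any existing hologram."""
    minp, maxp = list(new_box[0]), list(new_box[1])
    while any(boxes_collide((minp, maxp), b) for b in existing_boxes):
        minp[2] -= step
        maxp[2] -= step
    return (tuple(minp), tuple(maxp))

# A new hologram requested at the same spot as an existing one.
existing = [((0.0, 0.0, 1.0), (1.0, 1.0, 1.5))]
placed = place_hologram(((0.0, 0.0, 1.0), (1.0, 1.0, 1.5)), existing)
```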
-
Patent number: 9520002
Abstract: A method is disclosed that may include, in a creating phase: receiving an instruction to generate a virtual place-located anchor at a virtual location that is world-locked; receiving data items from a target data source; linking a subset of the data items to the virtual place-located anchor; and receiving a permission from a first user specifying a condition under which a second user may view one or more holograms of the subset of data items. In a viewing phase, first display data may be transmitted to cause a first display device to display the holograms to the first user at the virtual place-located anchor; and if the condition is satisfied, second display data may be transmitted to cause a second display device to display the holograms to the second user at the virtual place-located anchor.
Type: Grant
Filed: June 24, 2015
Date of Patent: December 13, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anatolie Gavriliuc, Dan Osborn, Stephen Heijster, Hongwu Huai
-
Publication number: 20160357252
Abstract: One example provides, on a computing device comprising a display, a method of initiating and conducting voice communication with a contact. The method comprises displaying a user interface on the display, receiving a user input of a position signal for the user interface, and determining that the position signal satisfies a selection condition for a contact based on a location of the position signal on the user interface and a position of a proxy view of the contact on the user interface. The method further comprises, in response to determining that the position signal satisfies the selection condition, selecting the contact for communication, receiving voice input, and responsive to receiving the voice input while the contact is selected for communication, opening a voice communication channel with the contact and sending the voice input to the contact via the voice communication channel.
Type: Application
Filed: June 4, 2015
Publication date: December 8, 2016
Inventors: Anatolie Gavriliuc, Dan Osborn, Stephen Heijster
-
Publication number: 20160210784
Abstract: A wearable, head-mounted display system includes a near-eye display to display an augmented reality object perceivable at an apparent real world depth and an apparent real world location by a wearer of the head-mounted display system, and a controller to adjust the apparent real world location of the augmented reality object as a function of a field of view (FOV) of the wearer. The function is based on a bounding region of the augmented reality object and one or more overlap parameters between the bounding region of the augmented reality object and the FOV of the wearer.
Type: Application
Filed: June 19, 2015
Publication date: July 21, 2016
Inventors: Scott Ramsby, Joe Thompson, Dan Osborn, Shawn Crispin Wright, Brian Kramp, Megan Saunders, Forest Woodcroft Gouin
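One plausible reading of an "overlap parameter" is the fraction of the object's bounding region that falls inside the wearer's FOV. The sketch below is a made-up stand-in for the patented adjustment function, using 2D rectangles and invented names (`overlap_fraction`, `adjust_location`, `threshold`).

```python
def overlap_fraction(obj, fov):
    """Fraction of the object's 2D bounding rectangle (x0, y0, x1, y1)
    that lies inside the wearer's FOV rectangle."""
    ox0, oy0, ox1, oy1 = obj
    fx0, fy0, fx1, fy1 = fov
    w = max(0.0, min(ox1, fx1) - max(ox0, fx0))
    h = max(0.0, min(oy1, fy1) - max(oy0, fy0))
    area = (ox1 - ox0) * (oy1 - oy0)
    return (w * h) / area if area > 0 else 0.0

def adjust_location(obj, fov, threshold=0.5):
    """If too little of the object overlaps the FOV, recenter it
    within the FOV; otherwise leave it where it is."""
    if overlap_fraction(obj, fov) >= threshold:
        return obj
    width, height = obj[2] - obj[0], obj[3] - obj[1]
    cx = (fov[0] + fov[2]) / 2 - width / 2
    cy = (fov[1] + fov[3]) / 2 - height / 2
    return (cx, cy, cx + width, cy + height)

# An object three-quarters outside a unit FOV gets pulled back in.
fov = (0.0, 0.0, 1.0, 1.0)
obj = (0.75, 0.0, 1.75, 1.0)
frac = overlap_fraction(obj, fov)
adjusted = adjust_location(obj, fov)
```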
-
Publication number: 20160209917
Abstract: A method to provide visual feedback for gaze-based user-interface navigation includes presenting, on a display, a first image representing a digital object available for user interaction, recognizing a user gaze axis, and computing a point of intersection of the user gaze axis through the first image. An offset distance between the point of intersection and a reference position of the first image is then recognized, and a second image is presented on the display. The second image is presented displaced from the point of intersection by an amount dependent on the offset distance.
Type: Application
Filed: May 20, 2015
Publication date: July 21, 2016
Inventors: Alberto Cerriteno, Aaron Chandler Jeromin, Megan Saunders, Dan Osborn, Adam Christopher Heaney, Forest Woodcroft Gouin, James Nakashima, Patrick Ryan
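The offset-dependent displacement can be sketched in one small function. The linear `gain` and the function name are arbitrary illustrative choices; the patent only says the displacement depends on the offset distance.

```python
def feedback_position(intersection, reference, gain=0.5):
    """Displace the feedback (second) image from the gaze intersection
    point by an amount proportional to the gaze's offset from the first
    image's reference position."""
    dx = intersection[0] - reference[0]
    dy = intersection[1] - reference[1]
    return (intersection[0] + gain * dx, intersection[1] + gain * dy)

# Gaze hits 0.25 units to the right of the image's reference position.
pos = feedback_position(intersection=(0.25, 0.0), reference=(0.0, 0.0))
```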
-
Publication number: 20150254905
Abstract: An example wearable display system includes a controller, a left display to display a left-eye augmented reality image with a left-eye display size at left-eye display coordinates, and a right display to display a right-eye augmented reality image with a right-eye display size at right-eye display coordinates, the left-eye and right-eye augmented reality images collectively forming an augmented reality object perceivable at an apparent real world depth by a wearer of the display system. The controller sets the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of the augmented reality object. The function maintains an aspect of the left-eye and right-eye display sizes throughout a non-scaling range of apparent real world depths of the augmented reality object, and the function scales the left-eye and right-eye display sizes with changing apparent real world depth outside the non-scaling range.
Type: Application
Filed: May 20, 2015
Publication date: September 10, 2015
Inventors: Scott Ramsby, Dan Osborn, Shawn Wright, Anatolie Gavriliuc, Forest Woodcroft Gouin, Megan Saunders, Jesse Rapczak, Stephen Latta, Adam G. Poulos, Daniel McCulloch, Wei Zhang
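The two ingredients of this scheme — depth-dependent stereo offsets and a non-scaling size range — can be sketched separately. The numbers (`near`, `far`, `ipd`, `focal`) and both function names are assumptions for illustration, not values from the patent.

```python
def display_scale(depth, near=1.0, far=5.0):
    """Per-eye image scale: held constant inside the non-scaling depth
    range [near, far], scaled in proportion to depth outside it."""
    if depth < near:
        return near / depth
    if depth > far:
        return far / depth
    return 1.0

def eye_offsets(depth, ipd=0.064, focal=1.0):
    """Horizontal left/right image offsets (stereo disparity) derived
    from the object's apparent real-world depth."""
    disparity = focal * ipd / depth
    return (-disparity / 2, disparity / 2)

# Inside the non-scaling range the size is fixed; beyond it, it shrinks.
mid_scale = display_scale(3.0)
far_scale = display_scale(10.0)
offsets = eye_offsets(2.0)
```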
-
Publication number: 20150190716
Abstract: Systems, methods, and computer media for generating an avatar reflecting a player's current appearance. Data describing the player's current appearance is received. The data includes a visible spectrum image of the player, a depth image including both the player and a current background, and skeletal data for the player. The skeletal data indicates an outline of the player's skeleton. Based at least in part on the received data, one or more of the following are captured: a facial appearance of the player; a hair appearance of the player; a clothing appearance of the player; and a skin color of the player. A 3D avatar resembling the player is generated by combining the captured facial appearance, hair appearance, clothing appearance, and/or skin color with predetermined avatar features.
Type: Application
Filed: March 18, 2015
Publication date: July 9, 2015
Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Zachary Tyler Middleton, Matthew J. Puls, Mark Thomas Mihelich, Dan Osborn, Andrew R. Campbell, Charles Everett Martin, David M. Hill
-
Patent number: 9013489
Abstract: Systems, methods, and computer media for generating an avatar reflecting a player's current appearance. Data describing the player's current appearance is received. The data includes a visible spectrum image of the player, a depth image including both the player and a current background, and skeletal data for the player. The skeletal data indicates an outline of the player's skeleton. Based at least in part on the received data, one or more of the following are captured: a facial appearance of the player; a hair appearance of the player; a clothing appearance of the player; and a skin color of the player. A 3D avatar resembling the player is generated by combining the captured facial appearance, hair appearance, clothing appearance, and/or skin color with predetermined avatar features.
Type: Grant
Filed: November 16, 2011
Date of Patent: April 21, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Zachary Tyler Middleton, Matthew J. Puls, Mark Thomas Mihelich, Dan Osborn, Andrew R. Campbell, Charles Everett Martin, David M. Hill
-
Patent number: 8957858
Abstract: Systems and methods for multi-platform motion interactivity are provided. The system includes a motion-sensing subsystem, a display subsystem including a display, a logic subsystem, and a data-holding subsystem containing instructions executable by the logic subsystem. The system is configured to display a displayed scene on the display; receive a dynamically-changing motion input from the motion-sensing subsystem that is generated in response to movement of a tracked object; generate, in real time, a dynamically-changing 3D spatial model of the tracked object based on the motion input; and control motion within the displayed scene based on the movement of the tracked object, using the 3D spatial model. The system is further configured to receive, from a secondary computing system, a secondary input, and to control the displayed scene in response to the secondary input to visually represent interaction between the motion input and the secondary input.
Type: Grant
Filed: May 27, 2011
Date of Patent: February 17, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dan Osborn, Christopher Willoughby, Brian Mount, Vaibhav Goel, Tim Psiaki, Shawn C. Wright, Christopher Vuchetich
-
Publication number: 20120309520
Abstract: Systems, methods, and computer media for generating an avatar reflecting a player's current appearance. Data describing the player's current appearance is received. The data includes a visible spectrum image of the player, a depth image including both the player and a current background, and skeletal data for the player. The skeletal data indicates an outline of the player's skeleton. Based at least in part on the received data, one or more of the following are captured: a facial appearance of the player; a hair appearance of the player; a clothing appearance of the player; and a skin color of the player. A 3D avatar resembling the player is generated by combining the captured facial appearance, hair appearance, clothing appearance, and/or skin color with predetermined avatar features.
Type: Application
Filed: November 16, 2011
Publication date: December 6, 2012
Applicant: Microsoft Corporation
Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Zachary Tyler Middleton, Matthew J. Puls, Mark Thomas Mihelich, Dan Osborn, Andrew R. Campbell, Charles Everett Martin, David M. Hill
-
Publication number: 20120299827
Abstract: Systems and methods for multi-platform motion interactivity are provided. The system includes a motion-sensing subsystem, a display subsystem including a display, a logic subsystem, and a data-holding subsystem containing instructions executable by the logic subsystem. The system is configured to display a displayed scene on the display; receive a dynamically-changing motion input from the motion-sensing subsystem that is generated in response to movement of a tracked object; generate, in real time, a dynamically-changing 3D spatial model of the tracked object based on the motion input; and control motion within the displayed scene based on the movement of the tracked object, using the 3D spatial model. The system is further configured to receive, from a secondary computing system, a secondary input, and to control the displayed scene in response to the secondary input to visually represent interaction between the motion input and the secondary input.
Type: Application
Filed: May 27, 2011
Publication date: November 29, 2012
Applicant: Microsoft Corporation
Inventors: Dan Osborn, Christopher Willoughby, Brian Mount, Vaibhav Goel, Tim Psiaki, Shawn C. Wright, Christopher Vuchetich
-
Publication number: 20110221755
Abstract: A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.
Type: Application
Filed: March 12, 2010
Publication date: September 15, 2011
Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Brian James Mount, Zachary T. Middleton, Joel Deaguero, Christopher Willoughby, Dan Osborn, Darren Bennett, Gregory N. Snook
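The magnitude-proportional amplification reads naturally as a gain that grows with the size of the sensed motion. The sketch below is a hypothetical illustration of that idea; the gain constants and function name are not from the patent.

```python
def amplified_motion(delta, base_gain=1.0, gain_per_unit=2.0):
    """Amplify a sensed 2D motion by a factor proportional to its own
    magnitude: small gestures map nearly 1:1, large ones are exaggerated.
    The gain constants are arbitrary choices for illustration."""
    magnitude = (delta[0] ** 2 + delta[1] ** 2) ** 0.5
    factor = base_gain + gain_per_unit * magnitude
    return (delta[0] * factor, delta[1] * factor)

# A half-unit swipe is doubled on screen.
big = amplified_motion((0.5, 0.0))
```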