Patents by Inventor John Clavin
John Clavin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20150130689
Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.
Type: Application
Filed: January 22, 2015
Publication date: May 14, 2015
Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
-
Publication number: 20150100926
Abstract: Disclosed herein are techniques for scaling and translating gestures such that the applicable gestures for control may vary depending on the user's distance from a gesture-based system. The techniques for scaling and translation may take the varying distances from which a user interacts with components of the gesture-based system, such as a computing environment or capture device, into consideration with respect to defining and/or recognizing gestures. In an example embodiment, the physical space is divided into virtual zones of interaction, and the system may scale or translate a gesture based on the zones. A set of gesture data may be associated with each virtual zone such that gestures appropriate for controlling aspects of the gesture-based system may vary throughout the physical space.
Type: Application
Filed: August 18, 2014
Publication date: April 9, 2015
Inventors: Otto G. Berkes, Steven Bathiche, John Clavin, Ian LeGrow, Joseph Reginald Scott Molnar
-
Patent number: 8976986
Abstract: Volume adjustment based on listener position is disclosed. A position of one or more speakers is identified, and a position of a listener is tracked. For each of the one or more speakers, a changing distance between that speaker and the listener is assessed. A volume of that speaker is automatically adjusted in real-time based on a current distance between that speaker and the listener.
Type: Grant
Filed: September 21, 2009
Date of Patent: March 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Drew Angeloff, John Clavin, Robert Walker
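The per-speaker adjustment this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the linear inverse-distance compensation model, the clamping range, and all names (`speaker_volume`, `reference_distance`) are assumptions introduced here.

```python
import math

def speaker_volume(speaker_pos, listener_pos, reference_distance=1.0,
                   base_volume=1.0, max_volume=2.0):
    """Adjust one speaker's drive level from the listener's current distance.

    Compensates inverse-distance attenuation: the farther the listener is
    from this speaker, the louder it is driven, so perceived loudness stays
    roughly constant. The linear model and clamping range are illustrative
    assumptions, not details from the patent.
    """
    # Current 2-D distance between this speaker and the tracked listener.
    distance = math.hypot(speaker_pos[0] - listener_pos[0],
                          speaker_pos[1] - listener_pos[1])
    # Scale volume up in proportion to distance, then clamp to a safe range.
    volume = base_volume * (distance / reference_distance)
    return min(max(volume, 0.0), max_volume)
```

In a running system this function would be re-evaluated for each speaker every time the tracker reports a new listener position.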
-
Patent number: 8963805
Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.
Type: Grant
Filed: January 27, 2012
Date of Patent: February 24, 2015
Assignee: Microsoft Corporation
Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
-
Patent number: 8843857
Abstract: Disclosed herein are techniques for scaling and translating gestures such that the applicable gestures for control may vary depending on the user's distance from a gesture-based system. The techniques for scaling and translation may take the varying distances from which a user interacts with components of the gesture-based system, such as a computing environment or capture device, into consideration with respect to defining and/or recognizing gestures. In an example embodiment, the physical space is divided into virtual zones of interaction, and the system may scale or translate a gesture based on the zones. A set of gesture data may be associated with each virtual zone such that gestures appropriate for controlling aspects of the gesture-based system may vary throughout the physical space.
Type: Grant
Filed: November 19, 2009
Date of Patent: September 23, 2014
Assignee: Microsoft Corporation
Inventors: Otto G. Berkes, Steven Bathiche, John Clavin, Ian LeGrow, Joseph Reginald Scott Molnar
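The virtual-zone idea in this abstract can be sketched as a distance-indexed lookup plus a scale factor. The zone boundaries, gesture names, and the linear normalization are all illustrative assumptions; the patent does not fix any of these values.

```python
# Zone boundaries (metres from the capture device) and per-zone gesture
# sets are assumptions made up for this sketch, not values from the patent.
ZONES = [
    (1.5, {"tap", "pinch", "swipe"}),          # near zone: fine gestures
    (3.0, {"swipe", "wave", "push"}),          # mid zone
    (float("inf"), {"wave", "jump", "lean"}),  # far zone: gross gestures
]

def gestures_for_distance(distance_m):
    """Return the gesture set active for a user at the given distance."""
    for boundary, gesture_set in ZONES:
        if distance_m <= boundary:
            return gesture_set

def normalize_gesture_extent(extent_px, distance_m, reference_m=2.0):
    """Normalize a gesture's on-camera extent (pixels) to what it would
    measure at the reference distance, so one recognizer threshold works
    throughout the room (apparent size falls roughly linearly with range)."""
    return extent_px * (distance_m / reference_m)
```

A recognizer would first pick the active gesture set from the user's tracked distance, then compare normalized extents against its thresholds.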
-
Patent number: 8751524
Abstract: An application sequence may be composed in response to a user query. The application sequence may be based upon user data accessed by an application concierge service. Each application within the personalized chain of applications may request and receive data from the application concierge service about the other applications within the chain.
Type: Grant
Filed: October 29, 2010
Date of Patent: June 10, 2014
Assignee: Microsoft Corporation
Inventors: Joseph Futty, Miller T. Abel, Eric P. Gilmore, Kamran Rajabi Zargahi, John Clavin, Viswanath Vadlamani
-
Publication number: 20140126066
Abstract: A head-mounted display (HMD) provides an augmented view of advertisements to an HMD wearer. In some embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to provide additional information and/or to personalize the advertisement to the HMD wearer. In other embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to remove the advertisement from the HMD wearer's view or to replace the content of the advertisement with non-advertising content.
Type: Application
Filed: January 9, 2014
Publication date: May 8, 2014
Inventors: John Clavin, Megan Lesley Tedesco, Daniel John Wigdor
-
Publication number: 20140129394
Abstract: A virtual closet stores and presents virtual representations of physical items. The virtual representations may include virtual representations of items owned by a user, items the user would like to own, and/or memorabilia items for the user. The virtual closet may provide a number of functions. In some embodiments, the virtual closet may be used to facilitate selling items on online selling platforms by providing information (including virtual representations of the items) from the virtual closet to the online selling platforms to place the items for sale. The virtual closet may also be used in some embodiments to facilitate providing advertising targeted to the user based on information available to the virtual closet.
Type: Application
Filed: January 15, 2014
Publication date: May 8, 2014
Applicant: Microsoft Corporation
Inventors: Thomas C. Oliver, Megan Lesley Tedesco, John Clavin, Eyal Ofek, Doug Burger
-
Patent number: 8670183
Abstract: A head-mounted display (HMD) provides an augmented view of advertisements to an HMD wearer. In some embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to provide additional information and/or to personalize the advertisement to the HMD wearer. In other embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to remove the advertisement from the HMD wearer's view or to replace the content of the advertisement with non-advertising content.
Type: Grant
Filed: March 7, 2011
Date of Patent: March 11, 2014
Assignee: Microsoft Corporation
Inventors: John Clavin, Megan Lesley Tedesco, Daniel John Wigdor
-
Patent number: 8654152
Abstract: A system and method are disclosed for selectively focusing on certain areas of interest within an imaged scene to gain more image detail within those areas. In general, the present system identifies areas of interest from received image data, which may for example be detected areas of movement within the scene. The system then focuses on those areas by providing more detail in the area of interest. This may be accomplished by a number of methods, including zooming in on the image, increasing pixel density of the image, and increasing the amount of light incident on the object in the image.
Type: Grant
Filed: June 21, 2010
Date of Patent: February 18, 2014
Assignee: Microsoft Corporation
Inventors: Scott McEldowney, John A. Tardif, John Clavin, David Cohen, Giora Yahav
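The first step the abstract describes, identifying an area of interest from detected movement, can be sketched with a simple frame-difference bounding box. This is a deliberate simplification under stated assumptions: frames are plain 2-D grayscale arrays, and a real system would use depth data and connected-component analysis rather than one global box.

```python
def motion_region(prev_frame, curr_frame, threshold=25):
    """Return the bounding box (x0, y0, x1, y1) of pixels whose intensity
    changed by more than `threshold` between two frames, or None if nothing
    moved. Frames are 2-D lists of grayscale values (illustrative model).
    """
    moved = [
        (x, y)
        for y, (prow, crow) in enumerate(zip(prev_frame, curr_frame))
        for x, (p, c) in enumerate(zip(prow, crow))
        if abs(c - p) > threshold
    ]
    if not moved:
        return None
    xs = [x for x, _ in moved]
    ys = [y for _, y in moved]
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

def crop(frame, box):
    """Crop a frame to the region of interest for higher-detail processing
    (the stand-in here for zooming or raising pixel density on that area)."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]
```

Once a region is found, the system would direct its detail-enhancing resources (zoom, pixel density, illumination) at that box rather than the whole scene.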
-
Patent number: 8645230
Abstract: A virtual closet stores and presents virtual representations of physical items. The virtual representations may include virtual representations of items owned by a user, items the user would like to own, and memorabilia items for the user. The virtual closet may provide a number of functions. In some embodiments, the virtual closet may be used to facilitate selling items on online selling platforms by providing information (including virtual representations of the items) from the virtual closet to the online selling platforms to place the items for sale. The virtual closet may also be used in some embodiments to facilitate providing advertising targeted to the user based on information available to the virtual closet.
Type: Grant
Filed: March 18, 2011
Date of Patent: February 4, 2014
Assignee: Microsoft Corporation
Inventors: Thomas C. Oliver, Megan Lesley Tedesco, John Clavin, Eyal Ofek, Doug Burger
-
Publication number: 20130296682
Abstract: Embodiments are disclosed that relate to the integration of pre-surgical images and surgical images. For example, one disclosed embodiment provides, on a computing system, a method including receiving a pre-surgical image of a patient, receiving a depth image of the patient during surgery, and comparing the depth image of the patient to the pre-surgical image of the patient. The method further comprises providing an output based upon a result of comparing the depth image of the patient to the pre-surgical image of the patient.
Type: Application
Filed: May 4, 2012
Publication date: November 7, 2013
Applicant: Microsoft Corporation
Inventors: John Clavin, Jaron Lanier
-
Publication number: 20130293468
Abstract: A see-through, near-eye, mixed reality display device and system for collaboration amongst various users of other such devices and personal audio/visual devices of more limited capabilities. One or more wearers of a see-through head mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data are rendered in the field of view of the wearer and other device users. The wearer defines which persons in the wearer's field of view are included in the collaboration environment and entitled to share information in it. If allowed, input from other users in the collaboration environment on the virtual object may be received and allowed to manipulate a change in the virtual object.
Type: Application
Filed: May 4, 2012
Publication date: November 7, 2013
Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
-
Publication number: 20130293530
Abstract: An augmented reality system that provides augmented product and environment information to a wearer of a see-through head mounted display. The augmentation information may include advertising, inventory, pricing and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home.
Type: Application
Filed: May 4, 2012
Publication date: November 7, 2013
Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
-
Publication number: 20130293577
Abstract: A see-through, near-eye, mixed reality display apparatus for providing translations of real world data for a user. The wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and is selected by reference to the gaze of the wearer. The input data is translated for the user relative to user profile information bearing on the accuracy of a translation, determining from the input data whether a linguistic translation, knowledge-addition translation or context translation is useful.
Type: Application
Filed: May 4, 2012
Publication date: November 7, 2013
Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
-
Publication number: 20130252216
Abstract: Embodiments related to an interactive physical therapy experience are disclosed. One embodiment provides a computing device configured to receive, from an administrator client, an assigned exercise list comprising one or more assigned exercises to be performed by a user. The computing device is further configured to send, to a user client, one or more exercise modules, each of the exercise modules representing one of the assigned exercises. The computing device is further configured to receive prescription tracking data representing performance of the one or more assigned exercises by the user, and provide feedback to the administrator client based on the prescription tracking data.
Type: Application
Filed: March 20, 2012
Publication date: September 26, 2013
Applicant: Microsoft Corporation
Inventors: John Clavin, Jaron Lanier
-
Patent number: 8523667
Abstract: In a motion capture system having a depth camera, access to an electronic media device such as a personal computer or a game console with Internet connectivity is controlled. The age group of a person in a field of view of the camera can be determined based on metrics of a 3-D body model. The metrics can relate to, e.g., a relative size of a head of the body, a ratio of arm length to body height, a ratio of body height to head height, and/or a ratio of head width to shoulder width. The metrics are particularly indicative of age group. Based on the age group, a profile of the user is automatically updated with various parental control settings which control access to the electronic media device. Also, currently output content can be replaced by substitute content when a person in a lower age group enters the field of view.
Type: Grant
Filed: March 29, 2010
Date of Patent: September 3, 2013
Assignee: Microsoft Corporation
Inventors: John Clavin, Gaelle Vialle, Aaron Kornblum
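One of the metrics the abstract names, the ratio of body height to head height, lends itself to a compact sketch: adults measure roughly 7-8 head-heights tall while young children measure closer to 4-5. The thresholds, group labels, and rating policy below are illustrative assumptions for this sketch only; the patent does not specify them.

```python
def estimate_age_group(body_height_m, head_height_m):
    """Classify a tracked skeleton into a coarse age group from the
    body-height-to-head-height ratio. Threshold values are assumptions
    chosen for illustration, not taken from the patent.
    """
    heads_tall = body_height_m / head_height_m
    if heads_tall < 5.5:
        return "child"
    if heads_tall < 7.0:
        return "teen"
    return "adult"

def allowed_content(age_group, rating):
    """Apply a parental-control policy: gate content by the estimated age
    group of whoever is currently in the camera's field of view. The
    rating scheme here is a made-up stand-in."""
    limits = {"child": {"E"}, "teen": {"E", "T"}, "adult": {"E", "T", "M"}}
    return rating in limits[age_group]
```

The abstract's substitute-content behavior would follow naturally: on each frame, re-estimate the lowest age group in view and swap the output if `allowed_content` now returns false for the current program.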
-
Publication number: 20130194389
Abstract: A method for assessing a wearer's attentiveness to visual stimuli received through a head-mounted display device. The method employs first and second detectors arranged in the head-mounted display device. An ocular state of the wearer of the head-mounted display device is detected with the first detector while the wearer is receiving a visual stimulus. With the second detector, the visual stimulus received by the wearer is detected. The ocular state is then correlated to the wearer's attentiveness to the visual stimulus.
Type: Application
Filed: January 31, 2012
Publication date: August 1, 2013
Inventors: Ben Vaught, Ben Sugden, Stephen Latta, John Clavin
-
Publication number: 20130194164
Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.
Type: Application
Filed: January 27, 2012
Publication date: August 1, 2013
Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
-
Publication number: 20130154958
Abstract: A controller for a content presentation and interaction system which includes a primary content presentation device. The controller includes a tactile control input and a touch screen control input. The tactile control input is responsive to the inputs of a first user and communicatively coupled to the content presentation device; it includes a plurality of tactile input mechanisms and provides a first set of the plurality of control inputs for manipulating content. The touch screen control input is likewise responsive to the inputs of the first user and communicatively coupled to the content presentation device. It is proximate the tactile control input and provides a second set of the plurality of control inputs, which includes alternative inputs for at least some of the controls and additional inputs not available using the tactile input mechanisms.
Type: Application
Filed: December 20, 2011
Publication date: June 20, 2013
Applicant: Microsoft Corporation
Inventors: John Clavin, Kenneth A. Lobb, Christopher M. Novak, Kevin Geisner, Christian Klein