Patents by Inventor Jay Kapur

Jay Kapur is named as an inventor on the patent filings listed below. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11099637
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: August 24, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
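The distance-to-scale mapping this abstract describes can be sketched as a simple lookup over a depth frame. The breakpoints, scale factors, and the `adjust_ui` helper below are illustrative assumptions, not values or names from the patent.

```python
def ui_scale_for_distance(distance_m):
    """Map user-to-display distance (meters) to a UI scale factor.
    Farther users get larger text and controls; the breakpoints
    are illustrative, not taken from the patent."""
    if distance_m < 1.0:    # close: near-touch range
        return 1.0
    if distance_m < 2.5:    # mid-range: lean-back viewing
        return 1.5
    return 2.0              # far: across-the-room viewing

def adjust_ui(depth_frame, user_pixel):
    """Look up the located user's distance in a depth frame (a 2-D list
    of per-pixel depths in meters) and return the UI scale to apply."""
    row, col = user_pixel
    return ui_scale_for_distance(depth_frame[row][col])
```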
  • Publication number: 20190377408
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Application
    Filed: August 23, 2019
    Publication date: December 12, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
  • Patent number: 10394314
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: August 27, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
  • Patent number: 9971491
    Abstract: A method to decode natural user input from a human subject. The method includes detection of a gesture and concurrent grip state of the subject. If the grip state is closed during the gesture, then a user-interface (UI) canvas of the computer system is transformed based on the gesture. If the grip state is open during the gesture, then a UI object arranged on the UI canvas is activated based on the gesture.
    Type: Grant
    Filed: January 9, 2014
    Date of Patent: May 15, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Emily Yang, Jay Kapur, Sergio Paolantonio, Christian Klein, Oscar Murillo
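The grip-state rule in this abstract amounts to a two-way dispatch. The action strings below are illustrative placeholders, not terminology from the patent.

```python
def decode_gesture(gesture, grip_closed):
    """Dispatch a gesture per the grip-state rule in the abstract:
    a closed grip transforms the UI canvas as a whole, while an open
    grip activates a UI object arranged on the canvas."""
    if grip_closed:
        return "canvas:" + gesture   # e.g. pan or zoom the whole canvas
    return "object:" + gesture       # e.g. press the targeted button
```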
  • Patent number: 9696427
    Abstract: Embodiments for a depth sensing camera with a wide field of view are disclosed. In one example, a depth sensing camera comprises an illumination light projection subsystem, an image detection subsystem configured to acquire image data having a wide angle field of view, a logic subsystem configured to execute instructions, and a data-holding subsystem comprising stored instructions executable by the logic subsystem to control projection of illumination light and to determine depth values from image data acquired via the image sensor. The image detection subsystem comprises an image sensor and one or more lenses.
    Type: Grant
    Filed: August 14, 2012
    Date of Patent: July 4, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew Wilson, Hrvoje Benko, Jay Kapur, Stephen Edward Hodges
  • Patent number: 9646340
    Abstract: A method to help a user visualize how a wearable article will look on the user's body. Enacted on a computing system, the method includes receiving an image of the user's body from an image-capture component. Based on the image, a posable, three-dimensional, virtual avatar is constructed to substantially resemble the user. In this example method, data is obtained that identifies the wearable article as being selected for the user. This data includes a plurality of metrics that at least partly define the wearable article. Then, a virtualized form of the wearable article is attached to the avatar, and the dressed avatar is provided to a display component for the user to review.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: May 9, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jay Kapur, Sheridan Jones, Kudo Tsunoda
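The attach step this abstract describes, matching an article's defining metrics against the avatar, can be sketched with two small records. The field names (`chest_cm`) and the fit tolerance are illustrative assumptions; the patent does not specify them.

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    """Posable avatar built from a captured body image (simplified)."""
    height_cm: float
    chest_cm: float
    attached: list = field(default_factory=list)

@dataclass
class WearableArticle:
    """A wearable article partly defined by sizing metrics."""
    name: str
    chest_cm: float

def attach(avatar, article):
    """Attach a virtualized article to the avatar; returns False when
    the article's metrics clearly cannot fit (illustrative tolerance)."""
    if abs(article.chest_cm - avatar.chest_cm) > 10.0:
        return False
    avatar.attached.append(article.name)
    return True
```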
  • Publication number: 20160342203
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Application
    Filed: August 3, 2016
    Publication date: November 24, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
  • Patent number: 9423939
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Grant
    Filed: November 12, 2012
    Date of Patent: August 23, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
  • Patent number: 9383894
    Abstract: Embodiments are disclosed that relate to providing feedback for a level of completion of a user gesture via a cursor displayed on a user interface. One disclosed embodiment provides a method comprising displaying a cursor having a visual property and moving a screen-space position of the cursor responsive to the user gesture. The method further comprises changing the visual property of the cursor in proportion to a level of completion of the user gesture. In this way, the level of completion of the user gesture may be presented to the user in a location to which the attention of the user is directed during performance of the gesture.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: July 5, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Schwesinger, Emily Yang, Jay Kapur, Christian Klein, Oscar Murillo, Sergio Paolantonio
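Changing a cursor's visual property in proportion to gesture completion, as this abstract describes, reduces to a clamped linear mapping. The property names and the particular mappings below are illustrative, not drawn from the patent.

```python
def cursor_visual(level):
    """Map gesture completion (0..1) to cursor visual properties.
    Out-of-range values from noisy tracking are clamped first."""
    level = max(0.0, min(1.0, level))
    return {
        "fill_fraction": level,        # radial fill grows with progress
        "opacity": 0.5 + 0.5 * level,  # cursor fades in as gesture completes
    }
```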
  • Publication number: 20160092088
    Abstract: Techniques and constructs can provide user communication with reduced bandwidth load. An indication of an item of user-linked content can be received. A confirmation view can be displayed and a confirmation received. An available one of a predetermined number of content slots can be determined. An association between the item of user-linked content and the available content slot can then be recorded in a computer storage medium. An example apparatus can include an insertion module responsive to a user-operable input device to insert a user-linked content item corresponding to a visual representation into one of the content slots. An example apparatus can include a summary module configured to present a summary visual representation of a plurality of items of user-linked content via the display. At least one of the items can include a gameplay video or an in-game collateral item.
    Type: Application
    Filed: September 30, 2014
    Publication date: March 31, 2016
    Inventors: John Doyle, Jay Kapur, Hansen Liou, Jorge Lopez De Luna, Randy Santossio, Steven Trombetta
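The slot mechanics in this abstract, finding an available slot among a predetermined number and recording the association, can be sketched as below. The slot count and class shape are illustrative; the patent application does not prescribe them.

```python
class ContentSlots:
    """A predetermined number of slots holding user-linked content items
    (e.g. a gameplay video or an in-game collateral item)."""
    def __init__(self, count=4):
        self.slots = [None] * count

    def first_available(self):
        """Return the index of an open slot, or None when all are full."""
        for i, item in enumerate(self.slots):
            if item is None:
                return i
        return None

    def insert(self, item):
        """Record the item in the first available slot; return its index."""
        i = self.first_available()
        if i is None:
            raise ValueError("all content slots are occupied")
        self.slots[i] = item
        return i
```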
  • Publication number: 20150193124
    Abstract: Embodiments are disclosed that relate to providing feedback for a level of completion of a user gesture via a cursor displayed on a user interface. One disclosed embodiment provides a method comprising displaying a cursor having a visual property and moving a screen-space position of the cursor responsive to the user gesture. The method further comprises changing the visual property of the cursor in proportion to a level of completion of the user gesture. In this way, the level of completion of the user gesture may be presented to the user in a location to which the attention of the user is directed during performance of the gesture.
    Type: Application
    Filed: January 8, 2014
    Publication date: July 9, 2015
    Applicant: Microsoft Corporation
    Inventors: Mark Schwesinger, Emily Yang, Jay Kapur, Christian Klein, Oscar Murillo, Sergio Paolantonio
  • Publication number: 20150193107
    Abstract: A method to decode natural user input from a human subject. The method includes detection of a gesture and concurrent grip state of the subject. If the grip state is closed during the gesture, then a user-interface (UI) canvas of the computer system is transformed based on the gesture. If the grip state is open during the gesture, then a UI object arranged on the UI canvas is activated based on the gesture.
    Type: Application
    Filed: January 9, 2014
    Publication date: July 9, 2015
    Applicant: Microsoft Corporation
    Inventors: Mark Schwesinger, Emily Yang, Jay Kapur, Sergio Paolantonio, Christian Klein, Oscar Murillo
  • Publication number: 20150123901
    Abstract: Embodiments are disclosed that relate to controlling a computing device based upon gesture input. In one embodiment, orientation information of the human subject is received, wherein the orientation information includes information regarding an orientation of a first body part and an orientation of a second body part. A gesture performed by the first body part is identified based on the orientation information, and an orientation of the second body part is identified based on the orientation information. A mapping of the gesture to an action performed by the computing device is determined based on the orientation of the second body part.
    Type: Application
    Filed: November 4, 2013
    Publication date: May 7, 2015
    Applicant: Microsoft Corporation
    Inventors: Mark Schwesinger, Emily Yang, Jay Kapur, Sergio Paolantonio, Christian Klein
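The mapping this abstract describes, choosing an action for a first-body-part gesture based on a second body part's orientation, can be sketched as a lookup table. The specific gestures, orientations, and actions below are invented for illustration.

```python
def map_gesture(gesture, second_part_orientation):
    """Determine the action for a gesture performed by a first body
    part, conditioned on the orientation of a second body part."""
    table = {
        ("swipe", "palm_up"):   "scroll",
        ("swipe", "palm_down"): "dismiss",
        ("push",  "palm_up"):   "select",
        ("push",  "palm_down"): "mute",
    }
    return table.get((gesture, second_part_orientation), "ignore")
```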
  • Publication number: 20150123890
    Abstract: Embodiments are disclosed which relate to two hand natural user input. For example, one disclosed embodiment provides a method comprising receiving first hand tracking data regarding a first hand of a user and second hand tracking data regarding a second hand of the user from a sensor system. The first hand tracking data and the second hand tracking data temporally overlap. A gesture is then detected based on the first hand tracking data and the second hand tracking data, and one or more aspects of the computing device are controlled based on the gesture detected.
    Type: Application
    Filed: November 4, 2013
    Publication date: May 7, 2015
    Applicant: Microsoft Corporation
    Inventors: Jay Kapur, Mark Schwesinger, Emily Yang, Sergio Paolantonio, Christian Klein
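The temporal-overlap requirement in this abstract can be sketched over two lists of timestamped hand positions. The "stretch"/"squeeze" gesture and the 1-D position model are illustrative stand-ins, not the patent application's gestures.

```python
def detect_two_hand_gesture(first_track, second_track):
    """Detect a gesture from two hands' tracking data, where each track
    is a list of (timestamp, x_position) samples. Requires the tracks
    to overlap in time, per the abstract."""
    if not first_track or not second_track:
        return None
    # Temporal overlap: each track starts before the other ends.
    overlap = (first_track[0][0] <= second_track[-1][0]
               and second_track[0][0] <= first_track[-1][0])
    if not overlap:
        return None
    spread_start = abs(first_track[0][1] - second_track[0][1])
    spread_end = abs(first_track[-1][1] - second_track[-1][1])
    return "stretch" if spread_end > spread_start else "squeeze"
```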
  • Publication number: 20150097766
    Abstract: An NUI system for mediating input from a computer-system user. The NUI system includes a logic machine and an instruction-storage machine. The instruction-storage machine holds instructions that cause the logic machine to receive data tracking a change in conformation of the user including at least a hand trajectory of the user. If the data show increasing separation between two hands of the user, the NUI system causes a foreground process of the computer system to be displayed in greater detail on the display. If the data show decreasing separation between the two hands of the user, the NUI system causes the foreground process to be represented in lesser detail.
    Type: Application
    Filed: October 4, 2013
    Publication date: April 9, 2015
    Applicant: Microsoft Corporation
    Inventors: Jay Kapur, Mark Schwesinger, Emily Yang, Sergio Paolantonio, Federico Schliemann, Christian Klein
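The separation-to-detail rule in this abstract can be sketched as a stepped adjustment. The integer detail levels and the clamp range are illustrative assumptions.

```python
def detail_level(prev_separation, separation, level):
    """Adjust the foreground process's detail level from the change in
    hand separation: increasing separation shows more detail,
    decreasing separation shows less."""
    if separation > prev_separation:
        level += 1
    elif separation < prev_separation:
        level -= 1
    return max(0, min(3, level))   # clamp to an assumed 4-level range
```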
  • Patent number: 9001118
    Abstract: A method for constructing an avatar of a human subject includes acquiring a depth map of the subject, obtaining a virtual skeleton of the subject based on the depth map, and harvesting from the virtual skeleton a set of characteristic metrics. Such metrics correspond to distances between predetermined points of the virtual skeleton. In this example method, the characteristic metrics are provided as input to an algorithm trained using machine learning. The algorithm may be trained using a human model in a range of poses, and a range of human models in a single pose, to output a virtual body mesh as a function of the characteristic metrics. The method also includes constructing a virtual head mesh distinct from the virtual body mesh, with facial features resembling those of the subject, and connecting the virtual body mesh to the virtual head mesh.
    Type: Grant
    Filed: August 14, 2012
    Date of Patent: April 7, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Molyneaux, Xin Tong, Zicheng Liu, Eric Chang, Fan Yang, Jay Kapur, Emily Yang, Yang Liu, Hsiang-Tao Wu
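The metric-harvesting step of this abstract, distances between predetermined points of the virtual skeleton, can be sketched directly; the trained-model stage is omitted. The joint names and pairs below are illustrative, not the patent's chosen landmarks.

```python
import math

def characteristic_metrics(skeleton):
    """Harvest distances between predetermined joint pairs of a virtual
    skeleton. Each joint maps to an (x, y, z) position in meters; the
    resulting metrics would feed the trained body-mesh model."""
    pairs = [("shoulder_l", "shoulder_r"),
             ("hip_l", "hip_r"),
             ("shoulder_l", "hip_l")]
    return [math.dist(skeleton[a], skeleton[b]) for a, b in pairs]
```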
  • Patent number: 8913809
    Abstract: Embodiments related to monitoring physical body changes over time are disclosed. One embodiment provides a computing device configured to receive a depth image representing an observed scene comprising a user and detect a representation of the user in the depth image. The computing device is further configured to determine an adjusted body model based on a comparison between an initial body model and the representation of the user, and output to a display device a representation of the adjusted body model.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: December 16, 2014
    Assignee: Microsoft Corporation
    Inventors: Jay Kapur, Todd Ferkingstad
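The adjusted-body-model step in this abstract, reconciling an initial model with the representation observed in a depth image, can be sketched as a per-measurement blend. The measurement keys and the smoothing rate are illustrative choices.

```python
def adjusted_body_model(initial, observed, rate=0.3):
    """Blend an initial body model toward measurements observed in a
    depth image, yielding the adjusted model to display. Partial
    blending damps frame-to-frame measurement noise."""
    return {k: initial[k] + rate * (observed[k] - initial[k]) for k in initial}
```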
  • Publication number: 20140132499
    Abstract: Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance.
    Type: Application
    Filed: November 12, 2012
    Publication date: May 15, 2014
    Applicant: Microsoft Corporation
    Inventors: Mark Schwesinger, Oscar Murillo, Emily Yang, Richard Bailey, Jay Kapur
  • Publication number: 20140122086
    Abstract: Embodiments related to the use of depth imaging to augment speech recognition are disclosed. For example, one disclosed embodiment provides, on a computing device, a method including receiving depth information of a physical space from a depth camera, receiving audio information from one or more microphones, identifying a set of one or more possible spoken words from the audio information, determining a speech input for the computing device based upon comparing the set of one or more possible spoken words from the audio information and the depth information, and taking an action on the computing device based upon the speech input determined.
    Type: Application
    Filed: October 26, 2012
    Publication date: May 1, 2014
    Applicant: Microsoft Corporation
    Inventors: Jay Kapur, Ivan Tashev, Mike Seltzer, Stephen Edward Hodges
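One way to sketch this abstract's comparison of audio hypotheses against depth information is as a depth-conditioned confidence gate: speech is only accepted when the depth data shows a user present, and a stricter threshold applies when the user is not facing the display. The cues and thresholds are illustrative, not the patent application's method.

```python
def resolve_speech(candidates, user_present, user_facing):
    """Pick a spoken-word hypothesis using depth-derived context.
    `candidates` is a list of (word, confidence) pairs from the
    recognizer; depth information supplies the presence/facing cues."""
    if not candidates or not user_present:
        return None   # no user located in the physical space
    best_word, confidence = max(candidates, key=lambda c: c[1])
    # Demand more recognizer confidence when the user faces away.
    threshold = 0.4 if user_facing else 0.7
    return best_word if confidence >= threshold else None
```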
  • Publication number: 20140049609
    Abstract: Embodiments for a depth sensing camera with a wide field of view are disclosed. In one example, a depth sensing camera comprises an illumination light projection subsystem, an image detection subsystem configured to acquire image data having a wide angle field of view, a logic subsystem configured to execute instructions, and a data-holding subsystem comprising stored instructions executable by the logic subsystem to control projection of illumination light and to determine depth values from image data acquired via the image sensor. The image detection subsystem comprises an image sensor and one or more lenses.
    Type: Application
    Filed: August 14, 2012
    Publication date: February 20, 2014
    Applicant: Microsoft Corporation
    Inventors: Andrew Wilson, Hrvoje Benko, Jay Kapur, Stephen Edward Hodges