Patents by Inventor Jeffrey Neil Margolis
Jeffrey Neil Margolis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160335806
Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
Type: Application
Filed: July 28, 2016
Publication date: November 17, 2016
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
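The predict/pre-render/correct loop in this abstract can be sketched roughly as follows. This is an illustrative approximation, not the patented implementation: the yaw-only pose model, the linear degrees-to-pixels conversion, and all function names are assumptions for illustration.

```python
# Sketch of late-stage reprojection: render at a predicted pose, then
# correct the pre-rendered image with a pixel offset derived from a
# fresher (updated) pose reading, just before display.

def predict_pose(current_yaw_deg, angular_velocity_dps, latency_s):
    """Extrapolate head yaw to the expected display time."""
    return current_yaw_deg + angular_velocity_dps * latency_s

def pixel_offset(predicted_yaw_deg, updated_yaw_deg, pixels_per_degree):
    """Convert the pose error into a horizontal pixel shift."""
    return round((updated_yaw_deg - predicted_yaw_deg) * pixels_per_degree)

def shift_row(row, offset, fill=0):
    """Shift one scanline of the pre-rendered image by `offset` pixels."""
    if offset == 0:
        return list(row)
    if offset > 0:
        return [fill] * offset + list(row[:-offset])
    return list(row[-offset:]) + [fill] * (-offset)

# Pre-render at the predicted pose, then correct with the updated pose.
predicted = predict_pose(current_yaw_deg=10.0, angular_velocity_dps=90.0,
                         latency_s=0.02)
updated = 12.0  # fresher sensor reading taken after rendering
offset = pixel_offset(predicted, updated, pixels_per_degree=20)
corrected = shift_row([1, 2, 3, 4, 5], offset)
```

A full homographic transformation would warp the whole image plane; the per-row pixel shift above corresponds to the simpler "pixel offset adjustment" path the abstract mentions.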
-
Patent number: 9443355
Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
Type: Grant
Filed: June 28, 2013
Date of Patent: September 13, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
-
Patent number: 9417692
Abstract: Techniques are provided for rendering, in a see-through, near-eye mixed reality display, a virtual object within a virtual hole, window or cutout. The virtual hole, window or cutout may appear to be within some real-world physical object such as a book, table, etc. The virtual object may appear to be just below the surface of the physical object. In a sense, the virtual world could be considered a virtual container that provides developers with additional locations for presenting virtual objects. For example, rather than rendering a virtual object, such as a lamp, in a mixed reality display such that it appears to sit on top of a real-world desk, the virtual object is rendered such that it appears to be located below the surface of the desk.
Type: Grant
Filed: June 29, 2012
Date of Patent: August 16, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Jeffrey Neil Margolis
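The cutout idea above can be pictured as a compositing rule: the virtual object is drawn only where the viewing ray passes through the cutout opening, so it appears to sit beneath the real surface. The rectangle test and function names below are invented for illustration and are not the patent's method.

```python
# Sketch of cutout compositing on a see-through display: draw the
# virtual object only inside the cutout region; everywhere else,
# contribute nothing so the real surface remains visible.

def visible_through_cutout(pixel, cutout_rect):
    """Is this screen pixel inside the rectangular cutout opening?"""
    (x, y) = pixel
    (left, top, right, bottom) = cutout_rect
    return left <= x <= right and top <= y <= bottom

def composite_pixel(pixel, cutout_rect, virtual_color, passthrough=None):
    """Show the virtual object only through the cutout; elsewhere the
    see-through display leaves the real surface visible (None here)."""
    if visible_through_cutout(pixel, cutout_rect):
        return virtual_color
    return passthrough
```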
-
Publication number: 20160086382
Abstract: The technology provides contextual personal information via a mixed reality display device system worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person, in a location of the user, who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. Otherwise, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Application
Filed: September 25, 2015
Publication date: March 24, 2016
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
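The request/response flow above can be sketched as two small steps: a cloud-side match of nearby profiles against the wearer's criteria, and a display-side choice between in-view identification and an identifier-plus-position fallback. The profile fields and criteria format are assumptions for illustration only.

```python
# Sketch of the contextual-person lookup: match profiles by location
# and selection criteria, then present each match according to whether
# the person is currently in the wearer's field of view.

def find_matches(profiles, location, criteria):
    """Cloud side: people in the wearer's location whose profile
    satisfies every key/value pair of the selection criteria."""
    return [p for p in profiles
            if p["location"] == location
            and all(p.get(k) == v for k, v in criteria.items())]

def present(person, field_of_view):
    """Display side: identify the person if in view; otherwise output
    an identifier and a position indicator."""
    if person["id"] in field_of_view:
        return {"identify": person["id"]}
    return {"identifier": person["id"], "position": person["position"]}
```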
-
Patent number: 9230473
Abstract: A head-mounted display (HMD) device is provided with reduced motion blur by reducing the row duty cycle of an organic light-emitting diode (OLED) panel as a function of a detected movement of a user's head. Further, a panel duty cycle of the panel is increased in concert with the decrease in the row duty cycle to maintain a constant brightness. The technique is applicable, e.g., to scenarios in which an augmented reality image is displayed in a specific location in world coordinates. A sensor such as an accelerometer or gyroscope can be used to obtain an angular velocity of the user's head. The angular velocity indicates a number of pixels subtended in a frame period according to the angular resolution of the OLED panel. The duty cycles can be set, e.g., once per frame, based on the angular velocity or the number of pixels subtended in a frame period.
Type: Grant
Filed: June 24, 2013
Date of Patent: January 5, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jeffrey Neil Margolis, Barry Corlett
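The duty-cycle trade described above can be sketched numerically: compute the pixels the image sweeps across in one frame at the current head speed, shrink the row duty cycle as that figure grows, and raise the panel duty cycle to compensate brightness. The specific formulas, constants, and clamps below are illustrative assumptions, not the patented control law.

```python
# Sketch of motion-adaptive duty cycles for an OLED panel on an HMD:
# faster head motion -> shorter row duty cycle (less blur), with the
# panel duty cycle raised to hold row_duty * panel_duty brightness.

def pixels_subtended(angular_velocity_dps, frame_period_s, pixels_per_degree):
    """Pixels the image sweeps across in one frame at this head speed."""
    return angular_velocity_dps * frame_period_s * pixels_per_degree

def duty_cycles(angular_velocity_dps, frame_period_s=1 / 60,
                pixels_per_degree=20, base_row_duty=0.8, min_row_duty=0.1):
    blur_px = pixels_subtended(angular_velocity_dps, frame_period_s,
                               pixels_per_degree)
    # Shrink the row duty cycle as blur grows; clamp to a floor.
    row_duty = max(min_row_duty, base_row_duty / (1.0 + blur_px))
    # Compensate so the row/panel product (brightness) stays constant,
    # capped at 100% panel duty.
    panel_duty = min(1.0, base_row_duty * 0.5 / row_duty)
    return row_duty, panel_duty
```

Set once per frame, a rule like this keeps the static-head case at full brightness while trimming persistence whenever the gyroscope reports rapid motion.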
-
Publication number: 20150370528
Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
Type: Application
Filed: August 31, 2015
Publication date: December 24, 2015
Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
-
Patent number: 9153195
Abstract: The technology provides contextual personal information via a mixed reality display device system worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person, in a location of the user, who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. Otherwise, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Grant
Filed: January 30, 2012
Date of Patent: October 6, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Patent number: 9123316
Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
Type: Grant
Filed: December 27, 2010
Date of Patent: September 1, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
-
Publication number: 20150029218
Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
Type: Application
Filed: July 25, 2013
Publication date: January 29, 2015
Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
-
Publication number: 20150002542
Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
Type: Application
Filed: June 28, 2013
Publication date: January 1, 2015
Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
-
Publication number: 20140375679
Abstract: A head-mounted display (HMD) device is provided with reduced motion blur by reducing the row duty cycle of an organic light-emitting diode (OLED) panel as a function of a detected movement of a user's head. Further, a panel duty cycle of the panel is increased in concert with the decrease in the row duty cycle to maintain a constant brightness. The technique is applicable, e.g., to scenarios in which an augmented reality image is displayed in a specific location in world coordinates. A sensor such as an accelerometer or gyroscope can be used to obtain an angular velocity of the user's head. The angular velocity indicates a number of pixels subtended in a frame period according to the angular resolution of the OLED panel. The duty cycles can be set, e.g., once per frame, based on the angular velocity or the number of pixels subtended in a frame period.
Type: Application
Filed: June 24, 2013
Publication date: December 25, 2014
Inventors: Jeffrey Neil Margolis, Barry Corlett
-
Patent number: 8884984
Abstract: A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The system creates a volumetric model of a space, segments the model into objects, identifies one or more of the objects including a first object, and displays a virtual image over the first object on a display (of the head mounted display) that allows actual direct viewing of at least a portion of the space through the display.
Type: Grant
Filed: October 15, 2010
Date of Patent: November 11, 2014
Assignee: Microsoft Corporation
Inventors: Jason Flaks, Avi Bar-Zeev, Jeffrey Neil Margolis, Chris Miles, Alex Aben-Athar Kipman, Andrew John Fuller, Bob Crocco, Jr.
-
Publication number: 20140002491
Abstract: Techniques are provided for rendering, in a see-through, near-eye mixed reality display, a virtual object within a virtual hole, window or cutout. The virtual hole, window or cutout may appear to be within some real-world physical object such as a book, table, etc. The virtual object may appear to be just below the surface of the physical object. In a sense, the virtual world could be considered a virtual container that provides developers with additional locations for presenting virtual objects. For example, rather than rendering a virtual object, such as a lamp, in a mixed reality display such that it appears to sit on top of a real-world desk, the virtual object is rendered such that it appears to be located below the surface of the desk.
Type: Application
Filed: June 29, 2012
Publication date: January 2, 2014
Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Jeffrey Neil Margolis
-
Publication number: 20130044130
Abstract: The technology provides contextual personal information via a mixed reality display device system worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person, in a location of the user, who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. Otherwise, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Application
Filed: January 30, 2012
Publication date: February 21, 2013
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Publication number: 20120165964
Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
Type: Application
Filed: December 27, 2010
Publication date: June 28, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
-
Publication number: 20120092328
Abstract: A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The system creates a volumetric model of a space, segments the model into objects, identifies one or more of the objects including a first object, and displays a virtual image over the first object on a display (of the head mounted display) that allows actual direct viewing of at least a portion of the space through the display.
Type: Application
Filed: October 15, 2010
Publication date: April 19, 2012
Inventors: Jason Flaks, Avi Bar-Zeev, Jeffrey Neil Margolis, Chris Miles, Alex Aben-Athar Kipman, Andrew John Fuller, Bob Crocco, Jr.
-
Patent number: 7779367
Abstract: A toolbar displays dynamically configured controls based on the size of the window in which an application is running and on the media type presented. A large set of controls may be available for the toolbar; however, the application window may not be able to display all of the available controls in such a way that a user can comfortably use them. Accordingly, the controls may be scaled, filtered, and interchanged according to the “real estate” available in the application window, as well as other contextual aspects of the application, to provide a more user-friendly experience.
Type: Grant
Filed: February 8, 2007
Date of Patent: August 17, 2010
Assignee: Microsoft Corporation
Inventors: Marc Seiji Oshiro, William Hong Vong, Jeffrey Neil Margolis, Veronica Law
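The scale/filter/interchange behavior above can be sketched as a simple selection rule: filter controls by the presented media type, then add them by priority while they fit the window width. The control names, widths, and priorities below are invented for illustration and are not from the patent.

```python
# Sketch of a dynamically configured toolbar: choose which controls to
# show from the available "real estate" (window width) and media type.

CONTROLS = [
    {"name": "play", "width": 40, "priority": 0, "media": {"audio", "video"}},
    {"name": "seek", "width": 120, "priority": 1, "media": {"audio", "video"}},
    {"name": "volume", "width": 60, "priority": 2, "media": {"audio", "video"}},
    {"name": "captions", "width": 50, "priority": 3, "media": {"video"}},
]

def fit_controls(window_width, media_type):
    """Filter by media type, then add controls by priority while they
    fit; a wide control that does not fit is skipped in favor of a
    narrower, lower-priority one (the "interchange" behavior)."""
    eligible = sorted((c for c in CONTROLS if media_type in c["media"]),
                      key=lambda c: c["priority"])
    shown, used = [], 0
    for control in eligible:
        if used + control["width"] <= window_width:
            shown.append(control["name"])
            used += control["width"]
    return shown
```

A fuller implementation would also scale control sizes before filtering; the greedy fit above captures only the filter-and-interchange part of the behavior.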
-
Publication number: 20080195951
Abstract: A toolbar displays dynamically configured controls based on the size of the window in which an application is running and on the media type presented. A large set of controls may be available for the toolbar; however, the application window may not be able to display all of the available controls in such a way that a user can comfortably use them. Accordingly, the controls may be scaled, filtered, and interchanged according to the “real estate” available in the application window, as well as other contextual aspects of the application, to provide a more user-friendly experience.
Type: Application
Filed: February 8, 2007
Publication date: August 14, 2008
Applicant: Microsoft Corporation
Inventors: Marc Seiji Oshiro, William Hong Vong, Jeffrey Neil Margolis, Veronica Law