Patents by Inventor Jeffrey Neil Margolis

Jeffrey Neil Margolis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160335806
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
    Type: Application
    Filed: July 28, 2016
    Publication date: November 17, 2016
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
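The abstract above describes a render-then-correct loop: render a frame for a predicted head pose, then warp that pre-rendered image against a fresher pose just before it is shown. Below is a minimal Python sketch of that loop, not the patented implementation; the `hmd` and `renderer` objects and all of their methods are hypothetical placeholders, and the homographic transform is approximated here by a simple pixel offset, which the abstract names as an alternative.

```python
import numpy as np

def predict_pose(pose, angular_velocity, latency_s):
    # Extrapolate the head pose (assumed here as yaw/pitch/roll in radians)
    # forward by the expected rendering latency.
    return pose + angular_velocity * latency_s

def reproject(pre_rendered, predicted_pose, updated_pose, pixels_per_radian):
    # For small rotations, a pixel offset of the pre-rendered image is a common
    # approximation to the full homographic correction; used here for brevity.
    yaw_err, pitch_err = (updated_pose - predicted_pose)[:2]
    dx = int(round(yaw_err * pixels_per_radian))
    dy = int(round(pitch_err * pixels_per_radian))
    return np.roll(np.roll(pre_rendered, dy, axis=0), dx, axis=1)

def display_frame(renderer, hmd, pixels_per_radian, render_latency_s):
    predicted = predict_pose(hmd.pose(), hmd.angular_velocity(), render_latency_s)
    image = renderer.render(predicted)             # slow path: full scene render
    for _ in range(hmd.updates_per_rendered_frame()):
        updated = hmd.pose()                       # fast path: latest sensor pose
        hmd.show(reproject(image, predicted, updated, pixels_per_radian))
```

The point of the structure is that the inner loop runs at the display rate while the expensive render runs only once per predicted pose, which is how the display frame rate can exceed the rendering frame rate.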
  • Patent number: 9443355
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: September 13, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
  • Patent number: 9417692
    Abstract: Techniques are provided for rendering, in a see-through, near-eye mixed reality display, a virtual object within a virtual hole, window or cutout. The virtual hole, window or cutout may appear to be within some real world physical object such as a book, table, etc. The virtual object may appear to be just below the surface of the physical object. In a sense, the virtual world could be considered to be a virtual container that provides developers with additional locations for presenting virtual objects. For example, rather than rendering a virtual object, such as a lamp, in a mixed reality display such that it appears to sit on top of a real world desk, the virtual object is rendered such that it appears to be located below the surface of the desk.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: August 16, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Jeffrey Neil Margolis
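The cutout rendering described above reduces to a visibility rule: the virtual object shows through only where the cutout is, and only where the object lies deeper than the real surface. Here is a minimal per-pixel compositing sketch, assuming NumPy image and depth buffers; the function and argument names are illustrative and not taken from the patent.

```python
import numpy as np

def composite_cutout(background, virtual_rgba, virtual_depth, surface_depth, cutout_mask):
    """background: HxWx3, virtual_rgba: HxWx4 with alpha in [0, 1],
    virtual_depth / surface_depth: HxW (larger = farther), cutout_mask: HxW bool."""
    # The virtual object is visible only through the cutout and only where it
    # lies behind (below) the real surface, so it reads as sunk into the object.
    visible = cutout_mask & (virtual_depth > surface_depth)
    alpha = virtual_rgba[..., 3:] * visible[..., None]
    return virtual_rgba[..., :3] * alpha + background * (1.0 - alpha)
```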
  • Publication number: 20160086382
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 24, 2016
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
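The flow described in the abstract above is a round trip: send selection criteria to a cloud service that holds user profile data, receive matches for the current location, then label visible people in-view and show an identifier plus a direction indicator for everyone else. The sketch below is a rough illustration under assumptions; the `cloud_service` and `display` objects and their methods are placeholders, not an actual API.

```python
from dataclasses import dataclass

@dataclass
class PersonMatch:
    person_id: str
    display_name: str
    position: tuple          # (x, y, z) in the location's coordinate frame

def annotate_people(display, cloud_service, criteria, location_id):
    # Ask a cloud application (with access to multi-user profile data) for the
    # people in this location who satisfy the selection criteria.
    matches = cloud_service.find_people(location=location_id, criteria=criteria)
    for match in matches:                      # assumed to be PersonMatch records
        if display.in_field_of_view(match.position):
            # Person is visible: label them directly in the wearer's view.
            display.label(match.position, match.display_name)
        else:
            # Person is present but out of view: show an identifier and a
            # directional position indicator instead.
            display.show_indicator(match.display_name,
                                   direction=display.direction_to(match.position))
```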
  • Patent number: 9230473
    Abstract: A head-mounted display (HMD) device is provided with reduced motion blur by reducing row duty cycle for an organic light-emitting diode (OLED) panel as a function of a detected movement of a user's head. Further, a panel duty cycle of the panel is increased in concert with the decrease in the row duty cycle to maintain a constant brightness. The technique is applicable, e.g., to scenarios in which an augmented reality image is displayed in a specific location in world coordinates. A sensor such as an accelerometer or gyroscope can be used to obtain an angular velocity of a user's head. The angular velocity indicates a number of pixels subtended in a frame period according to an angular resolution of the OLED panel. The duty cycles can be set, e.g., once per frame, based on the angular velocity or the number of pixels subtended in a frame period.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: January 5, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Neil Margolis, Barry Corlett
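The adjustment above can be read as a small once-per-frame calculation: convert head angular velocity into pixels swept per frame, shorten the row duty cycle as that number grows, and lengthen the panel duty cycle to keep brightness roughly constant. The sketch below shows one plausible mapping; the constants and the specific formula are assumptions, not the patented tuning.

```python
def pixels_subtended(angular_velocity_dps, frame_period_s, deg_per_pixel):
    # Number of display pixels the scene sweeps across during one frame,
    # given the panel's angular resolution (degrees per pixel).
    return angular_velocity_dps * frame_period_s / deg_per_pixel

def duty_cycles(angular_velocity_dps, frame_period_s=1 / 60, deg_per_pixel=0.02,
                target_brightness=0.5, min_row_duty=0.25):
    blur_px = pixels_subtended(angular_velocity_dps, frame_period_s, deg_per_pixel)
    # Faster head motion -> shorter row duty cycle, so each row is lit more
    # briefly and the perceived smear shrinks.
    row_duty = max(min_row_duty, 1.0 / (1.0 + blur_px))
    # Brightness scales roughly with row_duty * panel_duty, so raise the panel
    # duty cycle in concert to hold overall brightness roughly constant.
    panel_duty = min(1.0, target_brightness / row_duty)
    return row_duty, panel_duty
```

For example, at 60 Hz and 0.02 degrees per pixel, a 120 degrees-per-second head turn sweeps about 100 pixels per frame, so this mapping would drop the row duty cycle to its floor and raise the panel duty cycle to compensate.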
  • Publication number: 20150370528
    Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
    Type: Application
    Filed: August 31, 2015
    Publication date: December 24, 2015
    Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
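The detection logic above is an entry-triggered toggle: each collision volume maps to an audio stem, and the frame on which a tracked body part first enters a volume flips that stem into or out of the mix over the base track. The following is a minimal per-frame sketch, assuming placeholder `mixer`, `sensor`, and volume objects.

```python
def update_mix(mixer, sensor, volume_to_stem, active_stems, inside_last_frame):
    """Called once per sensor frame. volume_to_stem maps collision volumes to
    audio stems; active_stems and inside_last_frame are mutable sets of state."""
    user_points = sensor.tracked_joint_positions()   # e.g. from a depth camera
    for volume, stem in volume_to_stem.items():
        inside = any(volume.contains(p) for p in user_points)
        if inside and volume not in inside_last_frame:
            # Entry event: toggle the stem into or out of the base mix.
            if stem in active_stems:
                mixer.remove_stem(stem)
                active_stems.discard(stem)
            else:
                mixer.add_stem(stem)
                active_stems.add(stem)
        if inside:
            inside_last_frame.add(volume)
        else:
            inside_last_frame.discard(volume)
```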
  • Patent number: 9153195
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: October 6, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
  • Patent number: 9123316
    Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
    Type: Grant
    Filed: December 27, 2010
    Date of Patent: September 1, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
  • Publication number: 20150029218
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Application
    Filed: July 25, 2013
    Publication date: January 29, 2015
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
  • Publication number: 20150002542
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image by circuitry within the display.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Calvin Chan, Jeffrey Neil Margolis, Andrew Pearson, Martin Shetter, Ashraf Ayman Michail, Barry Corlett
  • Publication number: 20140375679
    Abstract: A head-mounted display (HMD) device is provided with reduced motion blur by reducing row duty cycle for an organic light-emitting diode (OLED) panel as a function of a detected movement of a user's head. Further, a panel duty cycle of the panel is increased in concert with the decrease in the row duty cycle to maintain a constant brightness. The technique is applicable, e.g., to scenarios in which an augmented reality image is displayed in a specific location in world coordinates. A sensor such as an accelerometer or gyroscope can be used to obtain an angular velocity of a user's head. The angular velocity indicates a number of pixels subtended in a frame period according to an angular resolution of the OLED panel. The duty cycles can be set, e.g., once per frame, based on the angular velocity or the number of pixels subtended in a frame period.
    Type: Application
    Filed: June 24, 2013
    Publication date: December 25, 2014
    Inventors: Jeffrey Neil Margolis, Barry Corlett
  • Patent number: 8884984
    Abstract: A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The system creates a volumetric model of a space, segments the model into objects, identifies one or more of the objects including a first object, and displays a virtual image over the first object on a display (of the head mounted display) that allows actual direct viewing of at least a portion of the space through the display.
    Type: Grant
    Filed: October 15, 2010
    Date of Patent: November 11, 2014
    Assignee: Microsoft Corporation
    Inventors: Jason Flaks, Avi Bar-Zeev, Jeffrey Neil Margolis, Chris Miles, Alex Aben-Athar Kipman, Andrew John Fuller, Bob Crocco, Jr.
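The pipeline in the abstract above has four steps: build a volumetric model of the space, segment it into objects, identify a target object, and draw the virtual image over that object on the see-through display. Below is a skeletal sketch of those steps; `hub`, `hmd`, and their methods are assumed placeholders rather than a real API.

```python
def fuse_virtual_content(hub, hmd, target_label, virtual_image):
    volume = hub.build_volumetric_model()      # e.g. from fused depth frames
    objects = hub.segment(volume)              # split the model into objects
    target = next(obj for obj in objects if hub.identify(obj) == target_label)
    # Project the identified object's bounds into the wearer's current view and
    # draw the virtual image there; the rest of the see-through display stays
    # clear, so the real space remains directly visible around the overlay.
    region = hmd.project(target.bounds, hmd.pose())
    hmd.draw(virtual_image, region)
```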
  • Publication number: 20140002491
    Abstract: Techniques are provided for rendering, in a see-through, near-eye mixed reality display, a virtual object within a virtual hole, window or cutout. The virtual hole, window or cutout may appear to be within some real world physical object such as a book, table, etc. The virtual object may appear to be just below the surface of the physical object. In a sense, the virtual world could be considered to be a virtual container that provides developers with additional locations for presenting virtual objects. For example, rather than rendering a virtual object, such as a lamp, in a mixed reality display such that it appears to sit on top of a real world desk, the virtual object is rendered such that it appears to be located below the surface of the desk.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Jeffrey Neil Margolis
  • Publication number: 20130044130
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Application
    Filed: January 30, 2012
    Publication date: February 21, 2013
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
  • Publication number: 20120165964
    Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
    Type: Application
    Filed: December 27, 2010
    Publication date: June 28, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
  • Publication number: 20120092328
    Abstract: A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The system creates a volumetric model of a space, segments the model into objects, identifies one or more of the objects including a first object, and displays a virtual image over the first object on a display (of the head mounted display) that allows actual direct viewing of at least a portion of the space through the display.
    Type: Application
    Filed: October 15, 2010
    Publication date: April 19, 2012
    Inventors: Jason Flaks, Avi Bar-Zeev, Jeffrey Neil Margolis, Chris Miles, Alex Aben-Athar Kipman, Andrew John Fuller, Bob Crocco, Jr.
  • Patent number: 7779367
    Abstract: A toolbar displays dynamically configured controls based on the size of the window in which an application is running and on the media type presented. A large set of controls may be available for the toolbar; however, the application window may not be large enough to display all of the available controls in such a way that they can be comfortably used. Accordingly, the controls may be scaled, filtered, and interchanged according to the "real estate" available in the application window, as well as other contextual aspects of the application, to provide a more user-friendly experience.
    Type: Grant
    Filed: February 8, 2007
    Date of Patent: August 17, 2010
    Assignee: Microsoft Corporation
    Inventors: Marc Seiji Oshiro, William Hong Vong, Jeffrey Neil Margolis, Veronica Law
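The behavior described above amounts to a fitting problem: given the window width and the media type, choose which controls to show and whether to show them at full or compact size. Below is a simple greedy sketch; the `Control` fields (width, compact width, priority, applicable media types) are hypothetical, and the actual selection logic in the patent may differ.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    width: int            # pixels at full size
    compact_width: int    # pixels when scaled down
    priority: int         # lower value = more important
    media_types: set      # media types the control applies to

def configure_toolbar(controls, window_width, media_type):
    # Keep only controls relevant to the current media type, then fit the most
    # important ones into the available width, scaling down where necessary.
    relevant = [c for c in controls if media_type in c.media_types]
    chosen, used = [], 0
    for control in sorted(relevant, key=lambda c: c.priority):
        if used + control.width <= window_width:
            chosen.append((control.name, "full"))
            used += control.width
        elif used + control.compact_width <= window_width:
            chosen.append((control.name, "compact"))
            used += control.compact_width
        # otherwise the control is filtered out at this window size
    return chosen
```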
  • Publication number: 20080195951
    Abstract: A toolbar displays dynamically configured controls based on the size of the window in which an application is running and on the media type presented. A large set of controls may be available for the toolbar; however, the application window may not be large enough to display all of the available controls in such a way that they can be comfortably used. Accordingly, the controls may be scaled, filtered, and interchanged according to the "real estate" available in the application window, as well as other contextual aspects of the application, to provide a more user-friendly experience.
    Type: Application
    Filed: February 8, 2007
    Publication date: August 14, 2008
    Applicant: Microsoft Corporation
    Inventors: Marc Seiji Oshiro, William Hong Vong, Jeffrey Neil Margolis, Veronica Law