Patents by Inventor Jaron Lanier

Jaron Lanier is named as an inventor on the following patent filings. This listing includes patent applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9013564
    Abstract: An autostereoscopic 3D display system includes a display having a plurality of pixels, wherein each pixel is configured to display light rays representing a left-eye view and a right-eye view of an image. The autostereoscopic 3D display system further includes an optical-deflection system configured to control the light rays representing the left-eye view and the right-eye view. The optical-deflection system includes a separately controllable lenslet associated with each pixel, where the lenslet is configured to steer the light ray representing the left-eye view corresponding to the pixel, and steer the light ray representing the right-eye view corresponding to the pixel.
    Type: Grant
    Filed: May 7, 2013
    Date of Patent: April 21, 2015
    Assignee: Elwha LLC
    Inventors: Steven Bathiche, Alistair K. Chan, William Gates, Roderick A. Hyde, Edward K. Y. Jung, Jordin T. Kare, Jaron Lanier, John L. Manferdelli, Clarence T. Tegreene, David B. Tuckerman, Charles Whitmer, Lowell L. Wood, Jr.
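
As a rough illustration of the geometry behind the entry above (and not of the patented mechanism itself), the Python sketch below computes the horizontal deflection each per-pixel lenslet would need so that its left-eye ray reaches the viewer's left eye and its right-eye ray reaches the right eye. The function name, coordinate convention, and example figures are all assumptions made for this sketch.

```python
import math
from typing import List, Tuple

def lenslet_steering_angles(
    pixel_positions: List[Tuple[float, float]],   # (x, z) of each pixel on the panel, in metres
    left_eye: Tuple[float, float],                # (x, z) of the viewer's left eye
    right_eye: Tuple[float, float],               # (x, z) of the viewer's right eye
) -> List[Tuple[float, float]]:
    """For every pixel, return (left_angle, right_angle): the horizontal deflection,
    in radians from the panel normal, needed to aim the left-eye ray at the left eye
    and the right-eye ray at the right eye."""
    angles = []
    for px, pz in pixel_positions:
        left_angle = math.atan2(left_eye[0] - px, left_eye[1] - pz)
        right_angle = math.atan2(right_eye[0] - px, right_eye[1] - pz)
        angles.append((left_angle, right_angle))
    return angles

# Example: a 3-pixel panel at z = 0 and a viewer 0.6 m away with a 63 mm interpupillary distance.
pixels = [(-0.1, 0.0), (0.0, 0.0), (0.1, 0.0)]
print(lenslet_steering_angles(pixels, left_eye=(-0.0315, 0.6), right_eye=(0.0315, 0.6)))
```
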
  • Publication number: 20140376785
    Abstract: A system for enhancing a facial expression includes a processing circuit configured to receive video of a user, generate facial data corresponding to a face of the user, analyze the facial data to identify a facial expression, enhance the facial data based on the facial expression, and output modified video including the enhanced facial data.
    Type: Application
    Filed: June 20, 2013
    Publication date: December 25, 2014
    Inventors: Steven Bathiche, Alistair K. Chan, William David Duncan, William Gates, Roderick A. Hyde, Edward K.Y. Jung, Jordin T. Kare, Jaron Lanier, John L. Manferdelli, Clarence T. Tegreene, Adrian Travis, Charles Whitmer, Victoria Y.H. Wood, Lowell L. Wood, Jr.
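
The application above describes a pipeline that derives facial data from video, identifies an expression, and enhances it. The Python sketch below shows a toy version of that flow on hand-made landmark points; the landmark names, the smile heuristic, and the "enhancement" (exaggerating mouth-corner offsets) are invented for illustration and are not the method described in the filing.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Point = Tuple[float, float]

@dataclass
class FacialData:
    """Toy stand-in for facial data: a few named landmark points in image coordinates."""
    landmarks: Dict[str, Point]

def identify_expression(face: FacialData) -> str:
    """Call the expression a smile when the mouth corners sit above the mouth centre
    (a deliberately crude rule; real classification is far more involved)."""
    left, right = face.landmarks["mouth_left"], face.landmarks["mouth_right"]
    centre = face.landmarks["mouth_centre"]
    return "smile" if (left[1] + right[1]) / 2 < centre[1] else "neutral"   # image y grows downward

def enhance(face: FacialData, gain: float = 1.5) -> FacialData:
    """Exaggerate a detected smile by scaling the mouth corners' offsets from the mouth centre."""
    if identify_expression(face) != "smile":
        return face
    cx, cy = face.landmarks["mouth_centre"]
    out = dict(face.landmarks)
    for key in ("mouth_left", "mouth_right"):
        x, y = out[key]
        out[key] = (cx + gain * (x - cx), cy + gain * (y - cy))
    return FacialData(out)

face = FacialData({"mouth_left": (40.0, 98.0), "mouth_right": (60.0, 98.0),
                   "mouth_centre": (50.0, 100.0)})
print(enhance(face).landmarks)   # mouth corners pushed outward and upward
```
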
  • Publication number: 20140267228
    Abstract: An augmented reality (AR) experience is mapped to various environments. A three-dimensional data model that describes a scene of an environment, and a description of the AR experience, are input. The AR experience description includes a set of digital content that is to be mapped into the scene, and a set of constraints that defines attributes of the digital content when it is mapped into the scene. The 3D data model is analyzed to detect affordances in the scene, where this analysis generates a list of detected affordances. The list of detected affordances and the set of constraints are used to solve for a mapping of the set of digital content into the scene that substantially satisfies the set of constraints. The AR experience is also mapped to changing environments.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Eyal Ofek, Ran Gal, Douglas Burger, Jaron Lanier
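
The application above frames AR placement as solving for a mapping from digital content to detected affordances under a set of constraints. The sketch below shows only the shape of that problem using a naive greedy matcher; the Affordance and ContentItem fields, the constraint types, and the tie-breaking rule are assumptions for illustration, not the solver described in the filing.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Affordance:
    """A scene feature detected in the 3D model, e.g. a wall or table surface."""
    name: str
    kind: str          # e.g. "vertical_surface", "horizontal_surface"
    area_m2: float

@dataclass
class ContentItem:
    """A piece of digital content with the constraints its placement must satisfy."""
    name: str
    required_kind: str
    min_area_m2: float

def map_experience(content: List[ContentItem],
                   affordances: List[Affordance]) -> Optional[Dict[str, str]]:
    """Greedily assign each content item to an unused affordance that satisfies
    its constraints; return None if no complete mapping is found."""
    mapping: Dict[str, str] = {}
    used: set = set()
    for item in sorted(content, key=lambda c: -c.min_area_m2):   # hardest-to-place first
        candidates = [a for a in affordances
                      if a.name not in used
                      and a.kind == item.required_kind
                      and a.area_m2 >= item.min_area_m2]
        if not candidates:
            return None
        best = min(candidates, key=lambda a: a.area_m2)          # tightest fit
        mapping[item.name] = best.name
        used.add(best.name)
    return mapping

# Example: place a virtual screen on a wall and a game board on a table.
scene = [Affordance("wall_1", "vertical_surface", 4.0),
         Affordance("table_1", "horizontal_surface", 1.2)]
todo = [ContentItem("virtual_screen", "vertical_surface", 1.5),
        ContentItem("game_board", "horizontal_surface", 0.5)]
print(map_experience(todo, scene))   # {'virtual_screen': 'wall_1', 'game_board': 'table_1'}
```
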
  • Publication number: 20140268277
    Abstract: A display device includes a waveguide, a reconfigurable phase mask, and a controller. The controller dynamically reconfigures the reconfigurable phase mask so as to modulate display light in accordance with a detected position of an eye and/or a parameter for a shape of the waveguide. The waveguide transmits the modulated display light.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Inventors: Andreas Georgiou, Joel Kollin, Adrian Travis, Stephen Heil, Jaron Lanier, Doug Burger
  • Patent number: 8839358
    Abstract: Progressive authentication is generally employed to establish the authenticity of a user, such as a user of a computing device, or a user who wants to access a proprietary data item, software application or on-line service. This can entail inputting authentication factors, each of which corresponds to one or multiple attributes associated with the user, or historical patterns of one or more attributes associated with the user, or both, along with a confidence level that estimates the reliability of the factor. Sensor readings captured by one or more sensors are also input. Each sensor senses a user attribute, and the readings are used to quantify each authentication factor's confidence level. An overall confidence level is established based at least in part on a combination of the individual confidence levels. A user is then designated as being authentic whenever the established overall confidence level exceeds a prescribed authentication level.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: September 16, 2014
    Assignee: Microsoft Corporation
    Inventors: Karin Strauss, Oriana Riva, Douglas Burger, Jaron Lanier
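
The granted patent above combines per-factor confidence levels into an overall confidence that is compared against a prescribed authentication level. The abstract does not say how the combination is done; the sketch below uses a simple weighted average purely as an illustration, and the factor names and weights are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AuthFactor:
    """One authentication factor: a user attribute (or historical pattern) plus
    the confidence, in [0, 1], that current sensor readings assign to it."""
    name: str
    confidence: float
    weight: float = 1.0

def overall_confidence(factors: List[AuthFactor]) -> float:
    """Combine per-factor confidences into one score via a weighted average
    (one simple choice; the patent leaves the combination open)."""
    total_weight = sum(f.weight for f in factors)
    if total_weight == 0:
        return 0.0
    return sum(f.confidence * f.weight for f in factors) / total_weight

def is_authentic(factors: List[AuthFactor], required_level: float) -> bool:
    """Designate the user as authentic when the overall confidence exceeds
    the prescribed authentication level, as in the abstract."""
    return overall_confidence(factors) > required_level

# Example: face match is strong, device-location history is weaker.
factors = [AuthFactor("face_match", 0.9, weight=2.0),
           AuthFactor("location_pattern", 0.6, weight=1.0)]
print(is_authentic(factors, required_level=0.7))   # True: overall confidence is 0.8
```
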
  • Publication number: 20140200079
    Abstract: A method of displaying visual information to different viewer-eyes includes receiving eye strength data indicative of a deficiency of a weak viewer-eye with respect to a dominant viewer-eye. The method further includes causing a 3D-display system to display a first perspective of an image to the weak viewer-eye and causing the 3D-display system to display a second perspective of the image to the dominant viewer-eye.
    Type: Application
    Filed: January 16, 2013
    Publication date: July 17, 2014
    Applicant: Elwha LLC
    Inventors: Steven Bathiche, Alistair K. Chan, William Gates, Roderick A. Hyde, Muriel Y. Ishikawa, Edward K.Y. Jung, Jordin T. Kare, Jaron Lanier, John L. Manferdelli, Clarence T. Tegreene, Adrian Travis, Charles Whitmer, Lowell L. Wood, Jr.
  • Publication number: 20140198297
    Abstract: A method for treating a weak viewer-eye includes the steps of receiving eye-strength data indicative of an eye-strength of the weak viewer-eye and causing a 3D display system to vary, in accordance with the eye-strength of the weak viewer-eye, display characteristics of a perspective that the 3D display system displays.
    Type: Application
    Filed: January 16, 2013
    Publication date: July 17, 2014
    Applicant: Elwha LLC
    Inventors: Steven Bathiche, Alistair K. Chan, William Gates, Roderick A. Hyde, Edward K.Y. Jung, Jordin T. Kare, Jaron Lanier, John L. Manferdelli, Clarence T. Tegreene, Adrian Travis, David B. Tuckerman, Charles Whitmer, Lowell L. Wood, Jr., Victoria Y.H. Wood
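
The two related applications above route different perspectives to the weak and dominant eyes and vary display characteristics according to eye strength. The sketch below illustrates only the second idea with an invented rule that boosts the weaker eye's view in proportion to the strength gap; the 0-1 scale, the boost formula, and the cap are assumptions, not anything specified in the filings.

```python
from dataclasses import dataclass

@dataclass
class EyeStrength:
    """Relative strength of each eye on an arbitrary 0-1 scale (hypothetical units)."""
    left: float
    right: float

def per_eye_boost(strength: EyeStrength, max_boost: float = 0.5) -> dict:
    """Return an illustrative per-eye emphasis factor: the weaker eye's view is
    boosted in proportion to how far it lags the stronger eye, up to a cap."""
    gap = abs(strength.left - strength.right)
    boost = min(max_boost, gap)
    if strength.left < strength.right:
        return {"left": 1.0 + boost, "right": 1.0}
    return {"left": 1.0, "right": 1.0 + boost}

print(per_eye_boost(EyeStrength(left=0.4, right=0.9)))   # {'left': 1.5, 'right': 1.0}
```
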
  • Patent number: 8754831
    Abstract: Embodiments that relate to facilitating the viewing of images on a mobile device are disclosed. For example, one disclosed embodiment provides a mobile device including a display screen and an image display system configured to selectively switch between a first viewing mode in which an image comprising a first amount of visual information is displayed at a first apparent distance from the display screen and a second viewing mode in which an image comprising a second, different amount of visual information is displayed at a second apparent distance from the display screen. The mobile device further includes a controller in communication with the image display system, wherein the controller is configured to switch between the first viewing mode and the second viewing mode.
    Type: Grant
    Filed: August 2, 2011
    Date of Patent: June 17, 2014
    Assignee: Microsoft Corporation
    Inventors: Joel S. Kollin, Jaron Lanier
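
The patent above describes a controller that switches a mobile display between two viewing modes that differ in how much visual information is shown and at what apparent distance. The sketch below is a minimal stand-in for that controller; the mode names, distances, and toggle behavior are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ViewingMode:
    """One viewing mode: how much content is shown and how far the virtual
    image appears to sit from the physical screen (figures are made up)."""
    name: str
    visual_detail: str
    apparent_distance_m: float

NEAR_MODE = ViewingMode("near", "summary view, less information", 0.3)
FAR_MODE = ViewingMode("far", "expanded view, more information", 2.0)

class DisplayController:
    """Toggle between the two modes, standing in for the controller the
    abstract describes as switching the image display system."""
    def __init__(self) -> None:
        self.mode = NEAR_MODE

    def switch(self) -> ViewingMode:
        self.mode = FAR_MODE if self.mode is NEAR_MODE else NEAR_MODE
        return self.mode

controller = DisplayController()
print(controller.switch().name)   # "far"
print(controller.switch().name)   # "near"
```
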
  • Publication number: 20130296682
    Abstract: Embodiments are disclosed that relate to the integration of pre-surgical images and surgical images. For example, one disclosed embodiment provides, on a computing system, a method including receiving a pre-surgical image of a patient, receiving a depth image of the patient during surgery, and comparing the depth image of the patient to the pre-surgical image of the patient. The method further comprises providing an output based upon a result of comparing the depth image of the patient to the pre-surgical image of the patient.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: John Clavin, Jaron Lanier
  • Publication number: 20130252216
    Abstract: Embodiments related to an interactive physical therapy experience are disclosed. One embodiment provides a computing device configured to receive, from an administrator client, an assigned exercise list comprising one or more assigned exercises to be performed by a user. The computing device is further configured to send, to a user client, one or more exercise modules, each of the exercise modules representing one of the assigned exercises. The computing device is further configured to receive prescription tracking data representing performance of the one or more assigned exercises by the user, and provide feedback to the administrator client based on the prescription tracking data.
    Type: Application
    Filed: March 20, 2012
    Publication date: September 26, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: John Clavin, Jaron Lanier
  • Publication number: 20130055348
    Abstract: Progressive authentication is generally employed to establish the authenticity of a user, such as a user of a computing device, or a user who wants to access a proprietary data item, software application or on-line service. This can entail inputting authentication factors, each of which corresponds to one or multiple attributes associated with the user, or historical patterns of one or more attributes associated with the user, or both, along with a confidence level that estimates the reliability of the factor. Sensor readings captured by one or more sensors are also input. Each sensor senses a user attribute, and the readings are used to quantify each authentication factor's confidence level. An overall confidence level is established based at least in part on a combination of the individual confidence levels. A user is then designated as being authentic whenever the established overall confidence level exceeds a prescribed authentication level.
    Type: Application
    Filed: August 31, 2011
    Publication date: February 28, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Karin Strauss, Oriana Riva, Douglas Burger, Jaron Lanier
  • Publication number: 20130033485
    Abstract: Embodiments that relate to facilitating the viewing of images on a mobile device are disclosed. For example, one disclosed embodiment provides a mobile device including a display screen and an image display system configured to selectively switch between a first viewing mode in which an image comprising a first amount of visual information is displayed at a first apparent distance from the display screen and a second viewing mode in which an image comprising a second, different amount of visual information is displayed at a second apparent distance from the display screen. The mobile device further includes a controller in communication with the image display system, wherein the controller is configured to switch between the first viewing mode and the second viewing mode.
    Type: Application
    Filed: August 2, 2011
    Publication date: February 7, 2013
    Applicant: Microsoft Corporation
    Inventors: Joel S. Kollin, Jaron Lanier
  • Publication number: 20120233198
    Abstract: Virtual worlds are generated from pre-existing data structures containing non-geometric data. An existing data structure containing non-geometric data is accessed and queried to identify parameters and the dependency structure of data in the data structure. Geometric objects are designed based on the identified parameters and dependency structure, and a virtual world is created from the geometric objects.
    Type: Application
    Filed: March 10, 2011
    Publication date: September 13, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Jaron Lanier, Eyal Ofek, John Clavin
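
The application above generates a virtual world by querying a non-geometric data structure, identifying its parameters and dependency structure, and designing geometric objects from them. The sketch below shows one deliberately simple reading of that idea, laying out one box per node according to its dependency depth; the input format, the layout rule, and the assumption of an acyclic structure are mine, not the filing's.

```python
from typing import Dict, List, Tuple

def layout_world(dependencies: Dict[str, List[str]],
                 spacing: float = 2.0) -> List[Tuple[str, float, float, float]]:
    """Turn a dependency structure (node -> list of dependencies, assumed acyclic)
    into simple geometric objects: one named box per node, positioned so that
    siblings spread along x and deeper dependency levels stack along z."""
    depth_cache: Dict[str, int] = {}

    def depth(node: str) -> int:
        if node not in depth_cache:
            deps = dependencies.get(node, [])
            depth_cache[node] = 0 if not deps else 1 + max(depth(d) for d in deps)
        return depth_cache[node]

    rows: Dict[int, int] = {}
    objects = []
    for node in sorted(dependencies):
        d = depth(node)
        column = rows.get(d, 0)
        rows[d] = column + 1
        objects.append((node, column * spacing, 0.0, d * spacing))   # (name, x, y, z)
    return objects

# Example: a tiny table-like structure where 'orders' depends on 'customers' and 'products'.
print(layout_world({"customers": [], "products": [], "orders": ["customers", "products"]}))
```
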
  • Publication number: 20100039380
    Abstract: A system that includes a desk top assembly of a display and sensors mounted on a robotic arm. The arm moves the assembly so that it remains within position and orientation tolerances relative to the user's head as the user looks around. Near-field speaker arrays supply audio and a microphone array senses the user's voice. Filters are applied to head motion to reduce latency in the arm's tracking of the head. The system is full duplex with other systems, allowing immersive collaboration. Lighting and sound generation take place close to the user's head. A haptic interface device allows the user to grab the display/sensor array and move it about. Motion acts as a planar selection device for 3D data. Planar force feedback allows a user to “feel” the data. Users see not only each other through display windows, but can also see the positions and orientations of each other's planar selections of shared 3D models or data.
    Type: Application
    Filed: October 22, 2009
    Publication date: February 18, 2010
    Applicant: GRAPHICS PROPERTIES HOLDINGS, INC.
    Inventor: Jaron Lanier
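
The abstract above (shared by the two related entries that follow) notes that filters are applied to head motion to reduce latency in the arm's tracking of the head, without saying which filters. The sketch below is one generic possibility, exponential smoothing plus a short velocity-based look-ahead; the smoothing constant, look-ahead interval, and class name are assumptions, not details from the patent.

```python
from typing import Optional, Tuple

class HeadMotionPredictor:
    """Exponentially smooth head-position samples and extrapolate slightly ahead,
    a generic way to compensate for tracking and actuation delay."""

    def __init__(self, smoothing: float = 0.6, lookahead_s: float = 0.05) -> None:
        self.smoothing = smoothing          # weight given to the newest sample
        self.lookahead_s = lookahead_s      # how far ahead (seconds) to extrapolate
        self._pos: Optional[Tuple[float, float, float]] = None
        self._vel = (0.0, 0.0, 0.0)

    def update(self, measured: Tuple[float, float, float], dt: float) -> Tuple[float, ...]:
        """Feed one tracker sample; return the position the arm should aim for."""
        if self._pos is None:
            self._pos = measured
        else:
            new_pos = tuple(self.smoothing * m + (1 - self.smoothing) * p
                            for m, p in zip(measured, self._pos))
            self._vel = tuple((n - p) / dt for n, p in zip(new_pos, self._pos))
            self._pos = new_pos
        # Extrapolate so the arm leads the head slightly instead of lagging it.
        return tuple(p + v * self.lookahead_s for p, v in zip(self._pos, self._vel))

predictor = HeadMotionPredictor()
print(predictor.update((0.0, 0.0, 0.5), dt=0.01))
print(predictor.update((0.01, 0.0, 0.5), dt=0.01))
```
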
  • Patent number: 7626569
    Abstract: A system that includes a desk top assembly of a display and sensors mounted on a robotic arm. The arm moves the assembly so that it remains within position and orientation tolerances relative to the user's head as the user looks around. Near-field speaker arrays supply audio and a microphone array senses the user's voice. Filters are applied to head motion to reduce latency in the arm's tracking of the head. The system is full duplex with other systems, allowing immersive collaboration. Lighting and sound generation take place close to the user's head. A haptic interface device allows the user to grab the display/sensor array and move it about. Motion acts as a planar selection device for 3D data. Planar force feedback allows a user to “feel” the data. Users see not only each other through display windows, but can also see the positions and orientations of each other's planar selections of shared 3D models or data.
    Type: Grant
    Filed: October 24, 2005
    Date of Patent: December 1, 2009
    Assignee: Graphics Properties Holdings, Inc.
    Inventor: Jaron Lanier
  • Publication number: 20060119572
    Abstract: A system that includes a desk top assembly of a display and sensors mounted on a robotic arm. The arm moves the assembly so that it remains within position and orientation tolerances relative to the user's head as the user looks around. Near-field speaker arrays supply audio and a microphone array senses the user's voice. Filters are applied to head motion to reduce latency in the arm's tracking of the head. The system is full duplex with other systems, allowing immersive collaboration. Lighting and sound generation take place close to the user's head. A haptic interface device allows the user to grab the display/sensor array and move it about. Motion acts as a planar selection device for 3D data. Planar force feedback allows a user to “feel” the data. Users see not only each other through display windows, but can also see the positions and orientations of each other's planar selections of shared 3D models or data.
    Type: Application
    Filed: October 24, 2005
    Publication date: June 8, 2006
    Inventor: Jaron Lanier
  • Patent number: 6400374
    Abstract: A graphic image system comprising a video camera producing a first video signal defining a first image including a foreground object and a background, the foreground object preferably including an image of a human subject having a head with a face; an image position estimating system for identifying a position with respect to said foreground object, e.g., the head, the foreground object having features in constant physical relation to the position; and a computer, responsive to the position estimating system, for defining a mask region separating the foreground object from said background. The computer generates a second video signal including a portion corresponding to the mask region, responsive to said position estimating system, which preferably includes a character having a mask outline.
    Type: Grant
    Filed: September 18, 1996
    Date of Patent: June 4, 2002
    Assignee: Eyematic Interfaces, Inc.
    Inventor: Jaron Lanier
  • Publication number: 20020018070
    Abstract: A graphic image system comprising a video camera producing a first video signal defining a first image including a foreground object and a background, the foreground object preferably including an image of a human subject having a head with a face; an image position estimating system for identifying a position with respect to said foreground object, e.g., the head, the foreground object having features in constant physical relation to the position; and a computer, responsive to the position estimating system, for defining a mask region separating the foreground object from said background. The computer generates a second video signal including a portion corresponding to the mask region, responsive to said position estimating system, which preferably includes a character having a mask outline.
    Type: Application
    Filed: September 18, 1996
    Publication date: February 14, 2002
    Inventor: Jaron Lanier