Patents by Inventor James MacKenzie

James MacKenzie has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11887263
    Abstract: In one embodiment, a computing device may determine a virtual content to be displayed with a scene of a real-world environment. The device may generate an image depicting the virtual content. Using one or more sensors, the device may detect characteristics of the scene of the real-world environment. Based on the image and the characteristics of the scene, the device may determine that a visual enhancement is to be applied to the virtual content depicted in the image to enhance a contrast between the depicted virtual content and the scene. The device may generate a visually-enhanced image depicting the virtual content by applying the visual enhancement to the virtual content depicted in the image. The device may display the visually-enhanced image of the virtual content on a display of the computing device, wherein the scene of the real-world environment is visible through the display.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: January 30, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Charlene Mary Atlas, Romain Bachy, Kevin James MacKenzie, Nathan Matsuda, Thomas Scott Murdison, Ocean Quigley, Jasmine Soria Sears
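A minimal Python sketch of the contrast-driven enhancement described in patent 11887263 above, assuming the detected scene characteristic is the luminance behind the rendered content and that the enhancement is a simple luminance gain; the function names, the Michelson-style contrast measure, and the 0.2 threshold are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def relative_luminance(rgb: np.ndarray) -> np.ndarray:
    """Rec. 709 luma approximation for an (H, W, 3) float image in [0, 1]."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def enhance_if_low_contrast(virtual_rgba: np.ndarray,
                            scene_rgb: np.ndarray,
                            min_contrast: float = 0.2) -> np.ndarray:
    """Brighten (or darken) the virtual layer when it would blend into the scene.

    virtual_rgba: (H, W, 4) rendered virtual content with alpha coverage.
    scene_rgb:    (H, W, 3) sensor-derived estimate of the scene behind the display.
    """
    alpha = virtual_rgba[..., 3]
    covered = alpha > 0.01                      # pixels where virtual content is drawn
    if not covered.any():
        return virtual_rgba

    content_luma = relative_luminance(virtual_rgba[..., :3])[covered].mean()
    scene_luma = relative_luminance(scene_rgb)[covered].mean()

    # Michelson-style contrast between the virtual content and the scene behind it.
    contrast = abs(content_luma - scene_luma) / max(content_luma + scene_luma, 1e-6)
    if contrast >= min_contrast:
        return virtual_rgba                     # already distinguishable, no change

    # Simple visual enhancement: push the content's luminance away from the scene's.
    enhanced = virtual_rgba.copy()
    gain = 1.5 if content_luma >= scene_luma else 0.6
    enhanced[..., :3] = np.clip(enhanced[..., :3] * gain, 0.0, 1.0)
    return enhanced
```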
  • Patent number: 11710467
    Abstract: In one embodiment, a computing system may access an image to be displayed by a display. The system may determine one or more first characteristics associated with a content of the image. The one or more first characteristics may include a contrast level of the content of the image with respect to a background of the image. The system may determine a first display persistence time period for the display to display the image based on the one or more first characteristics associated with the content of the image. The system may configure the display to display the image using the first display persistence time period.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: July 25, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Goettker, Thomas Scott Murdison, Kevin James MacKenzie, Larry Seiler
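As a rough illustration of patent 11710467 above, the sketch below maps a frame's content contrast to one of two persistence times; the percentile-based contrast estimate and the specific millisecond values are assumptions for the example, not values from the patent.

```python
import numpy as np

def choose_persistence_ms(frame: np.ndarray,
                          contrast_threshold: float = 0.3,
                          high_contrast_persistence_ms: float = 0.5,
                          low_contrast_persistence_ms: float = 2.0) -> float:
    """Pick a display persistence time from the frame's content contrast.

    frame: (H, W) grayscale image in [0, 1]. High-contrast content is more
    prone to perceived blur during head or eye motion, so it gets the shorter
    persistence in this toy policy.
    """
    lo, hi = np.percentile(frame, [5, 95])
    contrast = (hi - lo) / max(hi + lo, 1e-6)   # robust Michelson-style contrast
    if contrast > contrast_threshold:
        return high_contrast_persistence_ms
    return low_contrast_persistence_ms
```

The display would then be configured with the returned duration each frame, or whenever the content changes substantially.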
  • Patent number: 11650421
    Abstract: A method may include identifying, by one or more processors, an object in a field of view of a wearable display, where the object is identified for a presbyopic compensation. The presbyopic compensation is performed by the one or more processors on image data of the object to generate compensated image data of the object. The one or more processors render an image in response to the compensated image data of the object on a display of the wearable display.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: May 16, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Ian Erkelens, Larry Richard Moore, Jr., Kevin James MacKenzie
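One way to picture the presbyopic compensation in patent 11650421 above is as a digital sharpening applied only to the identified object's pixels; the unsharp-mask filter, the mask-based region selection, and the parameter values below are illustrative stand-ins, since the patent does not specify a particular compensation filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compensate_object_region(image: np.ndarray,
                             object_mask: np.ndarray,
                             strength: float = 1.5,
                             blur_sigma: float = 2.0) -> np.ndarray:
    """Apply an unsharp-mask boost to the pixels of an object flagged for compensation.

    image:       (H, W) grayscale frame in [0, 1].
    object_mask: (H, W) boolean mask covering the identified object.
    """
    blurred = gaussian_filter(image, sigma=blur_sigma)
    sharpened = np.clip(image + strength * (image - blurred), 0.0, 1.0)
    out = image.copy()
    out[object_mask] = sharpened[object_mask]   # compensate only the identified object
    return out
```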
  • Patent number: 11579689
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: February 14, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Mark Terrano, Ian Erkelens, Kevin James MacKenzie
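A hedged sketch of the selective-rendering idea in patent 11579689 above: one plausible prediction is that the user is mid-saccade, and therefore briefly unable to process fine detail, estimated from eye movement speed. The velocity threshold and the `render_low_cost` / `render_full` renderer methods are hypothetical.

```python
from dataclasses import dataclass

SACCADE_VELOCITY_DEG_PER_S = 120.0   # illustrative threshold, not from the patent

@dataclass
class GazeSample:
    timestamp_s: float
    azimuth_deg: float
    elevation_deg: float

def likely_suppressed(prev: GazeSample, curr: GazeSample) -> bool:
    """Predict that the user is mid-saccade and visually suppressed right now."""
    dt = max(curr.timestamp_s - prev.timestamp_s, 1e-6)
    speed = ((curr.azimuth_deg - prev.azimuth_deg) ** 2 +
             (curr.elevation_deg - prev.elevation_deg) ** 2) ** 0.5 / dt
    return speed > SACCADE_VELOCITY_DEG_PER_S

def render_ar_element(element, prev: GazeSample, curr: GazeSample, renderer) -> None:
    """Render cheaply (or not at all) while the element is unlikely to be perceived."""
    if likely_suppressed(prev, curr):
        renderer.render_low_cost(element)   # hypothetical renderer API
    else:
        renderer.render_full(element)
```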
  • Publication number: 20230037329
    Abstract: Head-mounted display systems may include an eye-tracking subsystem and a fixation distance prediction subsystem. The eye-tracking subsystem may be configured to determine at least a gaze direction of a user's eyes and an eye movement speed of the user's eyes. The fixation distance prediction subsystem may be configured to predict, based on the eye movement speed and the gaze direction of the user's eyes, a fixation distance at which the user's eyes will become fixated prior to the user's eyes reaching a fixation state associated with the predicted fixation distance. Additional methods, systems, and devices are also disclosed.
    Type: Application
    Filed: July 7, 2022
    Publication date: February 9, 2023
    Inventors: Ian Erkelens, Thomas Scott Murdison, Kevin James MacKenzie
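To illustrate publication 20230037329 above, the sketch below extrapolates the eyes' vergence angle forward in time and converts the predicted vergence to a fixation distance; the fixed interpupillary distance, the linear extrapolation, and the assumed settle time are simplifications, not the prediction model described in the application.

```python
import math

IPD_M = 0.063   # assumed interpupillary distance in meters

def vergence_to_distance_m(vergence_rad: float) -> float:
    """Fixation distance implied by a vergence angle, for the assumed IPD."""
    return (IPD_M / 2.0) / math.tan(max(vergence_rad, 1e-4) / 2.0)

def predict_fixation_distance_m(current_vergence_rad: float,
                                vergence_velocity_rad_s: float,
                                settle_time_s: float = 0.15) -> float:
    """Estimate where the eyes will converge before they actually settle there."""
    predicted_vergence = current_vergence_rad + vergence_velocity_rad_s * settle_time_s
    return vergence_to_distance_m(predicted_vergence)
```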
  • Publication number: 20220366873
    Abstract: In one embodiment, a computing system may access an image to be displayed by a display. The system may determine one or more first characteristics associated with a content of the image. The one or more first characteristics may include a contrast level of the content of the image with respect to a background of the image. The system may determine a first display persistence time period for the display to display the image based on the one or more first characteristics associated with the content of the image. The system may configure the display to display the image using the first display persistence time period.
    Type: Application
    Filed: March 14, 2022
    Publication date: November 17, 2022
    Inventors: Alexander Goettker, Thomas Scott Murdison, Kevin James MacKenzie, Larry Seiler
  • Patent number: 11423621
    Abstract: In one embodiment, a computing device may determine a virtual content to be displayed with a scene of a real-world environment. The device may generate an image depicting the virtual content. Using one or more sensors, the device may detect characteristics of the scene of the real-world environment. Based on the image and the characteristics of the scene, the device may determine that a visual enhancement is to be applied to the virtual content depicted in the image to enhance a contrast between the depicted virtual content and the scene. The device may generate a visually-enhanced image depicting the virtual content by applying the visual enhancement to the virtual content depicted in the image. The device may display the visually-enhanced image of the virtual content on a display of the computing device, wherein the scene of the real-world environment is visible through the display.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: August 23, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Charlene Mary Atlas, Romain Bachy, Kevin James MacKenzie, Nathan Matsuda, Thomas Scott Murdison, Ocean Quigley, Jasmine Soria Sears
  • Patent number: 11402635
    Abstract: A method may include displaying, to a user, a first color in a first area and a second color in a second area, where (1) the second color has a longer wavelength than the first color and (2) the first and second color have an expected longitudinal chromatic aberration for a human eye. The method may also include receiving, from the user, an indication of whether the user perceives (1) the first area as being clearer than the second area or (2) the second area as being clearer than the first area. The method may further include determining, based on the indication of the user and the expected longitudinal chromatic aberration, information about a refractive error of the user's vision. Various other methods, systems, and devices are also disclosed.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: August 2, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Marina Zannoli, Kristen Bowles, Ryan Michael Ebert, Douglas Robert Lanman, Kevin James MacKenzie
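Patent 11402635 above reads like a duochrome-style test; as a loose illustration, the sketch below interprets the user's report of which color field is clearer, assuming roughly 0.5 D of longitudinal chromatic aberration. The interpretation follows the conventional duochrome heuristic and is not taken verbatim from the patent.

```python
def interpret_duochrome_response(red_clearer: bool,
                                 lca_diopters: float = 0.5) -> str:
    """Interpret which color field the user reports as clearer.

    Longer wavelengths focus farther back in the eye, so the red field landing
    closer to the retina suggests the eye's focus is too far forward (a myopic
    tendency); the reverse holds when the shorter-wavelength field is clearer.
    """
    step = lca_diopters / 2.0
    if red_clearer:
        return f"myopic tendency: add about {step:.2f} D of minus and retest"
    return f"hyperopic tendency: add about {step:.2f} D of plus and retest"
```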
  • Patent number: 11308920
    Abstract: In one embodiment, a computing system may access an image to be displayed by a display. The system may determine one or more characteristics associated with a content of the image. The one or more characteristics may include a spatial frequency of the content in a spatial frequency domain. The system may determine a display persistence time period for the display to display the image based on the one or more characteristics associated with the content of the image. The system may configure the display to display the image using the display persistence time period.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: April 19, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Alexander Goettker, Thomas Scott Murdison, Kevin James MacKenzie, Larry Seiler
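Patent 11308920 above is the spatial-frequency counterpart of the persistence patents listed earlier; a minimal sketch, assuming an FFT-based estimate of the frame's dominant spatial frequency and an illustrative cutoff, is shown below.

```python
import numpy as np

def dominant_spatial_frequency(frame: np.ndarray) -> float:
    """Energy-weighted mean spatial frequency (cycles/pixel) of a grayscale frame."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
    h, w = frame.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)),
                         indexing="ij")
    spectrum[h // 2, w // 2] = 0.0              # drop the DC term
    radius = np.hypot(fx, fy)                   # radial spatial frequency per bin
    return float((radius * spectrum).sum() / max(spectrum.sum(), 1e-9))

def persistence_from_spatial_frequency(frame: np.ndarray,
                                       cutoff_cycles_per_pixel: float = 0.1,
                                       short_ms: float = 0.5,
                                       long_ms: float = 2.0) -> float:
    """Give frames with more high-spatial-frequency content a shorter persistence."""
    if dominant_spatial_frequency(frame) > cutoff_cycles_per_pixel:
        return short_ms
    return long_ms
```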
  • Patent number: 11221266
    Abstract: Systems, methods, and computer readable media are provided for automatically resetting a zero-offset calibration coefficient for a pressure transducer. Ambient pressure measurements from a first pressure sensor and a second pressure sensor can be received by a computing device and compared. Based on determining a difference in the received ambient pressure measurements, an updated zero-offset calibration coefficient can be generated. The updated zero-offset calibration coefficient can be transmitted to the first pressure sensor, which, once received, causes the first pressure sensor to update a previously determined zero-offset calibration coefficient with the updated zero-offset calibration coefficient.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: January 11, 2022
    Assignee: BAKER HUGHES OILFIELD OPERATIONS LLC
    Inventors: Colin James Mackenzie, Thomas John Piggin
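A minimal sketch of the zero-offset reset in patent 11221266 above, assuming both sensors are read at the same ambient conditions and that the new coefficient simply absorbs the measured disagreement; the tolerance value is an assumption.

```python
def updated_zero_offset(primary_ambient_psi: float,
                        reference_ambient_psi: float,
                        current_zero_offset_psi: float,
                        tolerance_psi: float = 0.05) -> float | None:
    """Return an updated zero-offset coefficient for the primary sensor, or None.

    If the primary sensor disagrees with the reference sensor by more than the
    tolerance, the difference is folded into the zero-offset coefficient that
    would then be transmitted back to the primary sensor.
    """
    drift = primary_ambient_psi - reference_ambient_psi
    if abs(drift) <= tolerance_psi:
        return None                  # within tolerance: keep the existing coefficient
    return current_zero_offset_psi - drift
```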
  • Patent number: 11179887
    Abstract: A method (300) of fabricating an object by additive manufacturing comprises providing (310) a layer of polymeric material (100), said polymeric material (100) being in particulate form, and comprising linear polymer chains, selectively depositing (320) a reactive liquid (200) onto the layer of particulate polymeric material (100), said reactive liquid (200) comprising reactive units (210a) which are monomeric units, linear oligomeric units, linear polymeric units, or combinations thereof, wherein said reactive units (210a) have two or fewer reactive groups, and allowing (330) linear polymeric chains in said layer of polymeric material (100) to react with reactive units in said reactive liquid (200) so as to form extended polymeric chains that are linear, so as to provide a shaped layer of linear polymer. These steps (310, 320, 330) are repeated as required to form the object from successive shaped layers of linear polymer.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: November 23, 2021
    Assignee: The University of Nottingham
    Inventors: Christopher John Tuck, Belén Begines Ruiz, Yinfeng He, Ricky Darren Wildman, Richard James Mackenzie Hague
  • Publication number: 20210223861
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
    Type: Application
    Filed: January 25, 2021
    Publication date: July 22, 2021
    Inventors: Mark Terrano, Ian Erkelens, Kevin James MacKenzie
  • Patent number: 10983354
    Abstract: A multiplanar head mounted display (HMD) includes two or more artificial display planes for each eye located at optical distances that can be dynamically adjusted based on a location within a scene presented by the HMD that the user views. For example, a scene is presented on two or more electronic display elements (e.g., screens) of the HMD. A focal length of an optics block that directs image light from the electronic display elements towards the eyes of a user is adjusted using a varifocal system (e.g., an element that mechanically changes a distance between a lens system in the optics block and the electronic display element, an element that changes shape of one or more lenses in the lens system in the optics block, etc.) based on a location or object within the scene where the user is looking.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: April 20, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, William Aaron Nicholls, Marina Zannoli, Kevin James MacKenzie, James Hillis, Yusufu Njoni Bamaxam Sulai, Olivier Mercier
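As a rough illustration of the varifocal adjustment in patent 10983354 above, the sketch below uses a thin-lens approximation to choose the lens-to-display spacing that places the virtual image at the gazed-at depth; the 40 mm focal length and the `move_display_to` actuator call are assumptions for the example.

```python
def display_spacing_m(virtual_image_distance_m: float,
                      focal_length_m: float = 0.040) -> float:
    """Lens-to-display spacing that places the virtual image at the target depth.

    Thin-lens relation 1/s_o + 1/s_i = 1/f with the virtual image on the viewer's
    side of the lens (s_i = -virtual_image_distance_m).
    """
    return 1.0 / (1.0 / focal_length_m + 1.0 / max(virtual_image_distance_m, 0.1))

def update_varifocal(gaze_depth_m: float, actuator) -> None:
    """Drive a (hypothetical) actuator so focus follows where the user is looking."""
    actuator.move_display_to(display_spacing_m(gaze_depth_m))
```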
  • Patent number: 10901502
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: January 26, 2021
    Assignee: FACEBOOK, INC.
    Inventors: Mark Terrano, Ian Erkelens, Kevin James MacKenzie
  • Publication number: 20200409457
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for selectively rendering augmented reality content based on predictions regarding a user's ability to visually process the augmented reality content. For instance, the disclosed systems can identify eye tracking information for a user at an initial time. Moreover, the disclosed systems can predict a change in an ability of the user to visually process an augmented reality element at a future time based on the eye tracking information. Additionally, the disclosed systems can selectively render the augmented reality element at the future time based on the predicted change in the ability of the user to visually process the augmented reality element.
    Type: Application
    Filed: June 27, 2019
    Publication date: December 31, 2020
    Inventors: Mark Terrano, Ian Erkelens, Kevin James MacKenzie
  • Patent number: 10866418
    Abstract: A multiplanar head mounted display (HMD) includes two or more artificial display planes for each eye located at optical distances that can be dynamically adjusted based on a location within a scene presented by the HMD that the user views. For example, a scene is presented on two or more electronic display elements (e.g., screens) of the HMD. A focal length of an optics block that directs image light from the electronic display elements towards the eyes of a user is adjusted using a varifocal system (e.g., an element that mechanically changes a distance between a lens system in the optics block and the electronic display element, an element that changes shape of one or more lenses in the lens system in the optics block, etc.) based on a location or object within the scene where the user is looking.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: December 15, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, William Aaron Nicholls, Marina Zannoli, Kevin James MacKenzie, James Hillis, Yusufu Njoni Bamaxam Sulai, Olivier Mercier
  • Publication number: 20200370980
    Abstract: Systems, methods, and computer readable media are provided for automatically resetting a zero-offset calibration coefficient for a pressure transducer. Ambient pressure measurements from a first pressure sensor and a second pressure sensor can be received by a computing device and compared. Based on determining a difference in the received ambient pressure measurements, an updated zero-offset calibration coefficient can be generated. The updated zero-offset calibration coefficient can be transmitted to the first pressure sensor, which, once received, causes the first pressure sensor to update a previously determined zero-offset calibration coefficient with the updated zero-offset calibration coefficient.
    Type: Application
    Filed: April 16, 2020
    Publication date: November 26, 2020
    Inventors: Colin James Mackenzie, Thomas John Piggin
  • Patent number: 10789782
    Abstract: A near-eye display (NED) has an orientation detection device and a display block. The orientation detection device collects orientation data that describe an orientation of the NED. The display block has a display assembly, a focusing assembly, and a controller. The controller determines an orientation vector of the NED based in part on the orientation data and computes an angular difference between the orientation vector of the NED and a gravity vector. After comparing the angular difference to a threshold value, the controller generates multifocal instructions that adjust the optical element to display an augmented scene at the selected image plane corresponding to the multifocal instructions.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: September 29, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Lu Lu, Ji Luo, Nicholas Daniel Trail, Kevin James MacKenzie, Pasi Saarikko, Andrew John Ouderkirk, Scott Charles McEldowney
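The controller logic in patent 10789782 above can be pictured as a small pose test: compute the angle between the headset's orientation vector and gravity, then pick an image plane. The downward-gaze-means-near-work heuristic, the 75° threshold, and the two-plane choice below are illustrative assumptions.

```python
import math
import numpy as np

def angle_to_gravity_deg(orientation_vector: np.ndarray,
                         gravity_vector: np.ndarray | None = None) -> float:
    """Angle between the headset's forward (orientation) vector and gravity."""
    gravity = np.array([0.0, 0.0, -1.0]) if gravity_vector is None else gravity_vector
    cos_theta = float(np.dot(orientation_vector, gravity) /
                      (np.linalg.norm(orientation_vector) * np.linalg.norm(gravity)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

def select_image_plane(orientation_vector: np.ndarray,
                       threshold_deg: float = 75.0) -> str:
    """Pick a near or far image plane from head pose: looking down suggests near work."""
    if angle_to_gravity_deg(orientation_vector) < threshold_deg:
        return "near_plane"
    return "far_plane"
```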
  • Patent number: 10778954
    Abstract: A multifocal test system is described herein. The system includes a plurality of displays located at different focal distances. Each display includes a plurality of pixels with pixel intensity values. The system includes an eye tracking system that determines eye tracking information about a position of an eye relative to the displays. A controller is configured to determine pixel intensity values based on decomposition of a scene across the plurality of displays, and the position of the eye.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: September 15, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Olivier Mercier, Yusufu Njoni Bamaxam Sulai, Kevin James MacKenzie, Marina Zannoli, James Hillis, Derek Nowrouzezahrai, Douglas Robert Lanman
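One simple way to picture the scene decomposition in patent 10778954 above is depth-weighted blending: each scene pixel is split between the two focal planes that bracket its depth, linearly in diopters. The eye-position term, which would shift per-plane alignment, is omitted here, and the blending scheme itself is an assumption rather than the decomposition specified in the patent.

```python
import numpy as np

def plane_blend_weights(depth_m: np.ndarray,
                        plane_depths_m: list[float]) -> list[np.ndarray]:
    """Per-plane weights that split each pixel between the planes bracketing its depth."""
    d = 1.0 / np.clip(depth_m, 0.1, 100.0)                             # depth in diopters
    plane_d = sorted((1.0 / p for p in plane_depths_m), reverse=True)  # nearest plane first
    weights = [np.zeros_like(d) for _ in plane_d]
    for i in range(len(plane_d) - 1):
        near_d, far_d = plane_d[i], plane_d[i + 1]
        in_span = (d <= near_d) & (d >= far_d)
        t = (near_d - d) / max(near_d - far_d, 1e-6)   # 0 at the near plane, 1 at the far
        weights[i][in_span] = (1.0 - t)[in_span]
        weights[i + 1][in_span] = t[in_span]
    weights[0][d > plane_d[0]] = 1.0        # content closer than the nearest plane
    weights[-1][d < plane_d[-1]] = 1.0      # content farther than the farthest plane
    return weights
```

Multiplying the scene image by each weight map would give the per-plane pixel intensities in this simplified model.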
  • Patent number: 10694166
    Abstract: Systems and methods for displaying an image across a plurality of displays are described herein. Pixel intensity values in the multifocal display are determined using correlation values and numerical iterations. An eye tracking system measures eye tracking information about a position of a user's eye, and the pixel intensity values are modified based on the eye tracking information. An image is displayed on the plurality of displays based on the determined pixel intensity values. The plurality of displays may be within an HMD, and address vergence accommodation conflict by simulating retinal defocus blur.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: June 23, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Olivier Mercier, Yusufu Njoni Bamaxam Sulai, Kevin James MacKenzie, Marina Zannoli, James Hillis, Derek Nowrouzezahrai, Douglas Robert Lanman
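A minimal sketch of the iterative determination described in patent 10694166 above, assuming each plane's light reaches the retina through a Gaussian defocus blur whose width comes from the tracked accommodation state; the plain residual-feedback iteration, the Gaussian model, and the step size are assumptions rather than the optimization in the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_across_planes(target: np.ndarray,
                            plane_blur_sigmas: list[float],
                            iterations: int = 50,
                            step: float = 0.5) -> list[np.ndarray]:
    """Iteratively split a target retinal image across several display planes.

    target:            (H, W) image in [0, 1] the viewer should perceive.
    plane_blur_sigmas: per-plane defocus blur (pixels) for the current eye focus,
                       e.g. from an eye tracker's accommodation estimate.
    """
    planes = [np.full_like(target, target.mean() / len(plane_blur_sigmas))
              for _ in plane_blur_sigmas]
    for _ in range(iterations):
        retinal = sum(gaussian_filter(p, s) for p, s in zip(planes, plane_blur_sigmas))
        error = target - retinal
        for i, sigma in enumerate(plane_blur_sigmas):
            # Feed the residual back through the same blur (a Gaussian is its own
            # adjoint) and keep pixel intensities physically valid.
            planes[i] = np.clip(planes[i] + step * gaussian_filter(error, sigma),
                                0.0, 1.0)
    return planes
```

For example, `decompose_across_planes(image, plane_blur_sigmas=[0.5, 2.0])` would split an image across a sharply focused near plane and a defocused far plane for an eye accommodated near.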