Patents by Inventor Alan McKay Moss

Alan McKay Moss has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200186779
    Abstract: Methods and apparatus for using selective resolution reduction on images to be transmitted and/or used by a playback device are described. Prior to transmission, one or more images of an environment are captured. Based on image content, motion detection and/or user input, a resolution reduction operation is selected and performed. The reduced resolution image is communicated to a playback device along with information indicating a UV map corresponding to the selected resolution allocation that should be used by the playback device for rendering the communicated image. By changing the resolution allocation used and which UV map is used by the playback device, different resolution allocations can be made with respect to different portions of the environment while allowing the number of pixels in transmitted images to remain constant. The playback device renders the individual images with the UV map corresponding to the resolution allocation used to generate the individual images.
    Type: Application
    Filed: December 9, 2019
    Publication date: June 11, 2020
    Inventors: David Cole, Alan McKay Moss, Hector M Medina
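    A minimal Python sketch of the idea in the abstract above, assuming the playback device keeps one UV map per resolution allocation and picks the map matching the allocation id signalled with each frame; the map contents, allocation ids, and function names are hypothetical, not taken from the patent.

      import numpy as np

      # Hypothetical UV maps: one (u, v) pair per mesh vertex, keyed by allocation id.
      UV_MAPS = {
          "uniform":        np.array([[0.25, 0.5], [0.75, 0.5]]),
          "front_weighted": np.array([[0.10, 0.5], [0.90, 0.5]]),
      }

      def render_vertex_samples(frame, allocation_id):
          # Sample the decoded frame at the UV coordinates of the map that matches
          # the resolution allocation used when the frame was produced.
          uv = UV_MAPS[allocation_id]
          h, w = frame.shape[:2]
          cols = (uv[:, 0] * (w - 1)).astype(int)
          rows = (uv[:, 1] * (h - 1)).astype(int)
          return frame[rows, cols]

      frame = np.arange(12, dtype=np.uint8).reshape(3, 4)    # stand-in decoded image
      print(render_vertex_samples(frame, "front_weighted"))  # per-vertex texture samples
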
  • Patent number: 10681333
    Abstract: Methods and apparatus relating to encoding and decoding stereoscopic (3D) image data, e.g., left and right eye images, are described. Various pre-encoding and post-decoding operations are described in conjunction with difference based encoding and decoding techniques. In some embodiments left and right eye image data is subject to scaling, transform operation(s) and cropping prior to encoding. In addition, in some embodiments decoded left and right eye image data is subject to scaling, transform operation(s) and filling operations prior to being output to a display device. Transform information and/or scaling information may be included in a bitstream communicating encoded left and right eye images. The amount of scaling can be the same for an entire scene and/or program.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: June 9, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
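    A minimal Python sketch of difference-based stereo encoding with a per-scene scale factor, as described in the entry above; the transform and cropping steps are omitted, and the function names and data layout are assumptions rather than the patented method.

      import numpy as np

      def pre_encode(left, right, scale):
          # Scale both eye images, then express the right eye as a difference from
          # the left so only the residual needs to be coded.
          left_s = (left.astype(np.int16) * scale).astype(np.int16)
          right_s = (right.astype(np.int16) * scale).astype(np.int16)
          return {"left": left_s, "residual": right_s - left_s, "scale": scale}

      def post_decode(stream):
          # Reconstruct the right eye from the residual, then undo the scaling
          # signalled in the bitstream.
          right_s = stream["left"] + stream["residual"]
          inv = 1.0 / stream["scale"]
          return (stream["left"] * inv).astype(np.uint8), (right_s * inv).astype(np.uint8)

      left = np.full((2, 2), 100, dtype=np.uint8)
      right = np.full((2, 2), 104, dtype=np.uint8)
      print(post_decode(pre_encode(left, right, scale=0.5)))
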
  • Patent number: 10574962
    Abstract: Methods and apparatus for receiving content including images of surfaces of an environment visible from a default viewing position and images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are received in content streams that can be in a variety of stream formats. In one stream format non-occluded image content is packed into a frame with occluded image content with the occluded image content normally occupying a small portion of the frame. In other embodiments occluded image portions are received in an auxiliary data stream which is multiplexed with a data stream providing frames of non-occluded image content. UV maps which are used to map received image content to segments of an environmental model are also supplied with the UV maps corresponding to the format of the frames which are used to provide the images that serve as textures.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: February 25, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
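    A minimal Python sketch of the single-frame packing format mentioned above, in which occluded content occupies a small strip of the frame; the strip size and layout are illustrative assumptions, and the UV map that would address the strip is not shown.

      import numpy as np

      def pack_frame(non_occluded, occluded, strip_rows):
          # Crop the main (non-occluded) image to leave room, then stack a small
          # strip of occluded content under it so one encoded frame carries both;
          # a matching UV map would point hidden-surface mesh segments into the strip.
          main_part = non_occluded[: non_occluded.shape[0] - strip_rows]
          return np.vstack([main_part, occluded[:strip_rows]])

      main = np.full((8, 4), 200, dtype=np.uint8)    # content visible from the default position
      hidden = np.full((8, 4), 50, dtype=np.uint8)   # content for occluded surfaces
      print(pack_frame(main, hidden, strip_rows=2).shape)  # (8, 4): frame size unchanged
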
  • Patent number: 10567733
    Abstract: Methods and apparatus for packing images into a frame and/or including additional content and/or graphics are described. A composite image is generated including at least one image in addition to another image and/or additional image content. A playback device receives an encoded frame including a captured image of a portion of an environment and the additional image content. The additional image content is combined with or used to replace a portion of the image of the environment during rendering. Alpha value mask information is communicated to the playback device to provide alpha values for use in image combining. Alpha values are communicated as pixel values in the encoded frame or as additional information. One or more mesh models and/or information on how to map image content to the one or more mesh models is communicated to the playback device for use in rendering image content recovered from a frame.
    Type: Grant
    Filed: March 9, 2018
    Date of Patent: February 18, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss, Hector M Medina, Ryan Michael Sheridan
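    A minimal Python sketch of the alpha-based combining step described above, assuming the alpha mask arrives as per-pixel values in [0, 1]; carrying alpha as pixel values inside the encoded frame is not shown, and the names are hypothetical.

      import numpy as np

      def composite(environment, overlay, alpha):
          # Blend the additional image content over the captured environment image:
          # out = alpha * overlay + (1 - alpha) * environment, per pixel.
          return (alpha * overlay + (1.0 - alpha) * environment).astype(environment.dtype)

      env = np.full((2, 2), 100, dtype=np.uint8)    # captured environment portion
      logo = np.full((2, 2), 255, dtype=np.uint8)   # additional image content
      mask = np.array([[0.0, 0.5], [1.0, 1.0]])     # alpha mask from the content stream
      print(composite(env, logo, mask))
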
  • Publication number: 20200053341
    Abstract: Methods and apparatus for collecting user feedback information from viewers of content are described. Feedback information is received from viewers of content. The feedback indicates, based on head tracking information in some embodiments, where users are looking in a simulated environment during different times of a content presentation, e.g., different frame times. The feedback information is used to prioritize different portions of an environment represented by the captured image content. Resolution allocation is performed based on the feedback information and the content is re-encoded based on the resolution allocation. The resolution allocation may, and normally does, change as the priority of different portions of the environment changes.
    Type: Application
    Filed: July 23, 2019
    Publication date: February 13, 2020
    Inventors: David Cole, Alan McKay Moss, Hector M. Medina
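    A minimal Python sketch of feedback-driven prioritization as described above: head-tracking reports are binned per environment region, and the most viewed regions receive the highest re-encoding priority. The report format and region names are assumptions.

      from collections import Counter

      def prioritize_regions(view_reports):
          # view_reports: iterable of (frame_time, region) tuples reported by
          # playback devices. Returns regions from most to least viewed, i.e. the
          # order in which resolution should be allocated when re-encoding.
          counts = Counter(region for _, region in view_reports)
          return [region for region, _ in counts.most_common()]

      reports = [(0, "front"), (0, "front"), (0, "left"), (1, "front"), (1, "rear")]
      print(prioritize_regions(reports))   # ['front', 'left', 'rear']
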
  • Patent number: 10531071
    Abstract: A camera rig including one or more stereoscopic camera pairs and/or one or more light field cameras is described. Images are captured by the light field cameras and the stereoscopic camera pairs at the same time. The light field images are used to generate an environmental depth map which accurately reflects the environment in which the stereoscopic images are captured at the time of image capture. In addition to providing depth information, images captured by the light field camera or cameras are combined with or used in place of stereoscopic image data to allow viewing and/or display of portions of a scene not captured by a stereoscopic camera pair.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: January 7, 2020
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss, Hector M. Medina
  • Patent number: 10523920
    Abstract: Stereoscopic image processing methods and apparatus are described. Left and right eye images of a stereoscopic frame are examined to determine whether the luminance and/or chrominance differences between the left and right frames are within a range used to trigger one or more difference reduction operations designed to reduce those differences. A difference reduction operation may involve assigning portions of the left and right frames to different depth regions and/or other region categories. A decision on whether or not to perform a difference reduction operation is then made on a per-region basis, with the difference between the left and right eye portions of at least one region being reduced when a difference reduction operation is to be performed. The difference reduction process may be, and in some embodiments is, performed in a precoder which processes left and right eye images of stereoscopic frames prior to stereoscopic encoding.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: December 31, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
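    A minimal Python sketch of a per-region difference reduction pass as described above, using mean luminance only; assignment of regions by depth and the chrominance case are omitted, and the trigger range values are illustrative assumptions.

      import numpy as np

      def reduce_region_difference(left, right, lo=1.0, hi=12.0):
          # If the mean luminance difference for a region falls inside the trigger
          # range [lo, hi], shift each eye halfway toward the common mean before
          # stereoscopic encoding; otherwise leave the region untouched.
          diff = float(left.mean() - right.mean())
          if lo <= abs(diff) <= hi:
              left = left - diff / 2.0
              right = right + diff / 2.0
          return left, right

      l = np.full((2, 2), 120.0)
      r = np.full((2, 2), 114.0)
      print(reduce_region_difference(l, r))   # both regions move to a mean of 117
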
  • Patent number: 10506215
    Abstract: Methods and apparatus for using selective resolution reduction on images to be transmitted and/or used by a playback device are described. Prior to transmission, one or more images of an environment are captured. Based on image content, motion detection and/or user input, a resolution reduction operation is selected and performed. The reduced resolution image is communicated to a playback device along with information indicating a UV map corresponding to the selected resolution allocation that should be used by the playback device for rendering the communicated image. By changing the resolution allocation used and which UV map is used by the playback device, different resolution allocations can be made with respect to different portions of the environment while allowing the number of pixels in transmitted images to remain constant. The playback device renders the individual images with the UV map corresponding to the resolution allocation used to generate the individual images.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: December 10, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss, Hector M Medina
  • Publication number: 20190346933
    Abstract: Methods and apparatus for controlling, implementing and supporting trick play in an augmented reality (AR) device are described. Changes in AR device orientation and/or AR device position are detected and used in controlling temporal playback operations.
    Type: Application
    Filed: May 2, 2019
    Publication date: November 14, 2019
    Inventors: Hector Medina, Alan McKay Moss, David Cole
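    A minimal Python sketch of orientation-driven trick play as described above, mapping a yaw change of the AR device to a forward or backward jump on the content timeline; the dead zone and scale factor are illustrative assumptions.

      def playback_offset_seconds(yaw_change_deg, dead_zone_deg=5.0, seconds_per_degree=0.5):
          # Turning the head past a small dead zone scrubs the timeline; positive yaw
          # (to the right) moves forward in time, negative yaw moves backward.
          if abs(yaw_change_deg) <= dead_zone_deg:
              return 0.0
          direction = 1.0 if yaw_change_deg > 0 else -1.0
          return direction * (abs(yaw_change_deg) - dead_zone_deg) * seconds_per_degree

      print(playback_offset_seconds(25.0))   # 10.0: jump 10 seconds forward
      print(playback_offset_seconds(-3.0))   # 0.0: inside the dead zone, no change
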
  • Patent number: 10447994
    Abstract: Camera related methods and apparatus which are well suited for use in capturing stereoscopic image data, e.g., pairs of left and right eye images, are described. Various features relate to a camera rig which can be used to mount multiple cameras at the same time. In some embodiments the camera rig includes 3 mounting locations corresponding to 3 different directions 120 degrees apart. One or more of the mounting locations may be used at a given time. When a single camera pair is used the rig can be rotated to capture images corresponding to the locations where a camera pair is not mounted. Static images from those locations can then be combined with images corresponding to the forward direction to generate a 360 degree view. Alternatively camera pairs or individual cameras can be included in each of the mounting locations to capture video in multiple directions.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: October 15, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Publication number: 20190313075
    Abstract: An unobstructed image portion of a captured image from a first camera of a camera pair, e.g., a stereoscopic camera pair including fisheye lenses, is combined with a scaled extracted image portion generated from a captured image from a second camera in the camera pair. An unobstructed image portion of a captured image from the second camera of the camera pair is combined with a scaled extracted image portion generated from a captured image from the first camera in the camera pair. As part of the combining, obstructed image portions which were obstructed by part of the adjacent camera are replaced in some embodiments. In some embodiments, the obstructions are due to the adjacent fisheye lens. In various embodiments fisheye lenses which have been cut to be flat on one side are used for the left and right cameras, with the spacing between the optical axes approximating the spacing between the optical axes of a human's eyes.
    Type: Application
    Filed: June 20, 2018
    Publication date: October 10, 2019
    Inventors: Alan McKay Moss, Hector M. Medina, Ryan Michael Sheridan, Matthew Yaeger, David Ibbitson, David Cole
  • Patent number: 10438369
    Abstract: Methods and apparatus for determining the location of objects surrounding a user of a 3D rendering and display system and indicating the objects to the user while the user views a simulated environment, e.g., on a head-mounted display, are described. A sensor, e.g., a camera, captures images of, or otherwise senses, the physical environment where the user of the system is located. One or more objects in the physical environment are identified, e.g., by recognizing predetermined symbols on the objects and based on stored information indicating a mapping between different symbols and objects. The location of the objects relative to the user's location in the physical environment is determined. A simulated environment, including content corresponding to a scene and visual representations of the one or more objects, is displayed. In some embodiments visual representations are displayed in the simulated environment at locations determined based on the location of the objects relative to the user.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: October 8, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Publication number: 20190306487
    Abstract: Custom wide angle lenses and methods and apparatus for using such lenses in individual cameras as well as pairs of cameras intended for stereoscopic image capture are described. The lenses are used in combination with sensors to capture different portions of an environment at different resolutions. In some embodiments the ground is captured at a lower resolution than the sky, which is captured at a lower resolution than a horizontal area of interest. Various asymmetries in lenses and/or lens and sensor placement are described which are particularly well suited for stereoscopic camera pairs where the proximity of one camera to the adjacent camera may interfere with the field of view of the cameras.
    Type: Application
    Filed: February 4, 2019
    Publication date: October 3, 2019
    Inventors: David Cole, Alan McKay Moss, Michael P. Straub
  • Patent number: 10432910
    Abstract: Methods and apparatus for allowing a user to switch viewing positions and/or perspective while viewing an environment, e.g., as part of a 3D playback/viewing experience, are described. In various embodiments images of the environment are captured using cameras placed at multiple camera positions. During viewing a user can select which camera position he/she would like to experience the environment from. While experiencing the environment from the perspective of a first camera position the user may switch from the first to a second camera position by looking at the second position. A visual indication is provided to the user to indicate that the user can select the other camera position as his/her viewing position. If a user input indicates a desired viewing position change, a switch to the alternate viewing position is made and the user is presented with images captured from the perspective of the user selected alternative viewing position.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: October 1, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 10412382
    Abstract: Camera and/or lens calibration information is generated as part of a calibration process in video systems including 3-dimensional (3D) immersive content systems. The calibration information can be used to correct for distortions associated with the source camera and/or lens. A calibration profile can include information sufficient to allow the system to correct for camera and/or lens distortion/variation. This can be accomplished by capturing a calibration image of a physical 3D object corresponding to the simulated 3D environment, and creating the calibration profile by processing the calibration image. The calibration profile can then be used to project the source content directly into the 3D viewing space while also accounting for distortion/variation, and without first translating into an intermediate space (e.g., a rectilinear space) to account for lens distortion.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: September 10, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss, Hector M. Medina
  • Patent number: 10397538
    Abstract: Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and capturing images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which may be viewed if the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data or sent in frames of a separate, e.g., auxiliary content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: August 27, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 10397543
    Abstract: Methods and apparatus for streaming or playing back stereoscopic content are described. Camera dependent correction information is communicated to a playback device and applied in the playback device to compensate for distortions introduced by the lenses of individual cameras. By performing lens dependent distortion compensation in the playback device, edges which might be lost if correction were performed prior to encoding are preserved. Distortion correction information may be in the form of UV map correction information. The correction information may indicate changes to be made to information in a UV map, e.g., at rendering time, to compensate for distortions specific to an individual camera. Different sets of correction information may be communicated and used for different cameras of a stereoscopic pair which provide images that are rendered using the same UV map. The communicated correction information is sometimes called a correction mesh since it is used to correct mesh related information.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: August 27, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss, Hector M Medina
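    A minimal Python sketch of applying camera-dependent correction information at render time, assuming it arrives as per-vertex UV offsets (a "correction mesh") that are added to a shared UV map before texture lookup; the values and array shapes are illustrative assumptions.

      import numpy as np

      def corrected_uvs(shared_uv_map, correction_mesh):
          # Apply the per-camera correction to the shared UV map just before texture
          # lookup, so lens-specific distortion is compensated in the playback device.
          return np.clip(shared_uv_map + correction_mesh, 0.0, 1.0)

      shared = np.array([[0.25, 0.50], [0.75, 0.50]])         # same map for both eyes
      left_cam_fix = np.array([[0.01, -0.02], [0.00, 0.01]])  # correction for one camera
      print(corrected_uvs(shared, left_cam_fix))
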
  • Patent number: 10362290
    Abstract: Methods and apparatus for collecting user feedback information from viewers of content are described. Feedback information is received from viewers of content. The feedback indicates, based on head tracking information in some embodiments, where users are looking in a simulated environment during different times of a content presentation, e.g., different frame times. The feedback information is used to prioritize different portions of an environment represented by the captured image content. Resolution allocation is performed based on the feedback information and the content is re-encoded based on the resolution allocation. The resolution allocation may, and normally does, change as the priority of different portions of the environment changes.
    Type: Grant
    Filed: August 17, 2016
    Date of Patent: July 23, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss, Hector M Medina
  • Patent number: 10279925
    Abstract: A head mounted virtual reality (VR) device including an inertial measurement unit (IMU) is located in a vehicle which may be, and sometimes is, moving. Detected motion attributable to vehicle motion is filtered out based on one or more or all of: vehicle type information, information derived from sensors located in the vehicle external to the head mounted VR device, and/or captured images including a reference point or reference object within the vehicle. An image portion of a simulated VR environment is selected and presented to the user of the head mounted VR device based on the filtered motion information. Thus, the image portion presented to the user of the head mounted VR device is substantially unaffected by vehicle motion and corresponds to user induced head motion.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: May 7, 2019
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
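    A minimal Python sketch of the motion filtering described above, for the case where a vehicle sensor reports the vehicle's own angular rate; combining vehicle-type information and in-cabin reference images is not shown. Units are degrees per second, and the names are hypothetical.

      def user_head_rate(headset_imu_rate, vehicle_rate):
          # The headset IMU senses head motion plus vehicle motion; subtracting the
          # vehicle's own rate leaves the user-induced head motion used to select the
          # displayed portion of the simulated environment.
          return headset_imu_rate - vehicle_rate

      # Vehicle turning at 5 deg/s while the user turns their head 12 deg/s to the right:
      print(user_head_rate(headset_imu_rate=17.0, vehicle_rate=5.0))   # 12.0
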
  • Publication number: 20190110041
    Abstract: Methods and apparatus for using selective resolution reduction on images to be transmitted and/or used by a playback device are described. Prior to transmission, one or more images of an environment are captured. Based on image content, motion detection and/or user input, a resolution reduction operation is selected and performed. The reduced resolution image is communicated to a playback device along with information indicating a UV map corresponding to the selected resolution allocation that should be used by the playback device for rendering the communicated image. By changing the resolution allocation used and which UV map is used by the playback device, different resolution allocations can be made with respect to different portions of the environment while allowing the number of pixels in transmitted images to remain constant. The playback device renders the individual images with the UV map corresponding to the resolution allocation used to generate the individual images.
    Type: Application
    Filed: July 13, 2018
    Publication date: April 11, 2019
    Inventors: David Cole, Alan McKay Moss, Hector M Medina