Patents by Inventor Jacqueline McFarland

Jacqueline McFarland has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9609307
    Abstract: Machine learning method that learns to convert 2D video to 3D video from a set of training examples. Uses machine learning to perform any or all of the 2D to 3D conversion steps of identifying and locating objects, masking objects, modeling object depth, generating stereoscopic image pairs, and filling gaps created by pixel displacement for depth effects. Training examples comprise inputs and outputs for the conversion steps. The machine learning system generates transformation functions that generate the outputs from the inputs; these functions may then be used on new 2D videos to automate or semi-automate the conversion process. Operator input may be used to augment the results of the machine learning system. Illustrative representations for conversion data in the training examples include object tags to identify objects and locate their features, Bézier curves to mask object regions, and point clouds or geometric shapes to model object depth. (The first sketch following this listing illustrates the stereo-pair generation and gap-fill step.)
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: March 28, 2017
    Assignee: LEGEND3D, INC.
    Inventors: Anthony Lopez, Jacqueline McFarland, Tony Baldridge
  • Publication number: 20170085863
    Abstract: Machine learning method that learns to convert 2D video to 3D video from a set of training examples. Uses machine learning to perform any or all of the 2D to 3D conversion steps of identifying and locating objects, masking objects, modeling object depth, generating stereoscopic image pairs, and filling gaps created by pixel displacement for depth effects. Training examples comprise inputs and outputs for the conversion steps. The machine learning system generates transformation functions that generate the outputs from the inputs; these functions may then be used on new 2D videos to automate or semi-automate the conversion process. Operator input may be used to augment the results of the machine learning system. Illustrative representations for conversion data in the training examples include object tags to identify objects and locate their features, Bézier curves to mask object regions, and point clouds or geometric shapes to model object depth.
    Type: Application
    Filed: December 14, 2015
    Publication date: March 23, 2017
    Applicant: LEGEND3D, INC.
    Inventors: Anthony Lopez, Jacqueline McFarland, Tony Baldridge
  • Patent number: 9407904
    Abstract: A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment. (The second sketch following this listing illustrates the sphere-to-eye projection.)
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: August 2, 2016
    Assignee: LEGEND3D, INC.
    Inventors: Jared Sandrew, Tony Baldridge, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Patent number: 9288476
    Abstract: Enables real-time depth modifications to stereo images of a 3D virtual reality environment to be made locally, without, for example, an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map, and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle. (The third sketch following this listing illustrates applying such a translation map.)
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: March 15, 2016
    Assignee: LEGEND3D, INC.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Patent number: 9282321
    Abstract: A system that coordinates multiple simultaneous reviews of a 3D model, potentially from different viewpoints. Embodiments support multiple reviewers using review stations that render images of the model based on the pose of the reviewer. For example, multiple reviewers may use virtual reality headsets to observe a 3D virtual environment from different orientations. A coordinator uses a coordinator station to observe the entire 3D model and the viewpoints of each of the reviewers in this 3D model. Embodiments may support reviewer designation of regions within the model that require review or modification; these designated regions are also displayed on the coordinator station. Embodiments may support real-time updates to the 3D model and propagation of updated images to the coordinator and to the multiple viewers. (The fourth sketch following this listing illustrates the coordinator's bookkeeping.)
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: March 8, 2016
    Assignee: LEGEND3D, INC.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Publication number: 20150358613
    Abstract: A system that coordinates multiple simultaneous reviews of a 3D model, potentially from different viewpoints. Embodiments support multiple reviewers using review stations that render images of the model based on the pose of the reviewer. For example, multiple reviewers may use virtual reality headsets to observe a 3D virtual environment from different orientations. A coordinator uses a coordinator station to observe the entire 3D model and the viewpoints of each of the reviewers in this 3D model. Embodiments may support reviewer designation of regions within the model that require review or modification; these designated regions are also displayed on the coordinator station. Embodiments may support real-time updates to the 3D model and propagation of updated images to the coordinator and to the multiple viewers.
    Type: Application
    Filed: August 17, 2015
    Publication date: December 10, 2015
    Applicant: LEGEND3D, INC.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Publication number: 20150358612
    Abstract: Enables real-time depth modifications to stereo images of a 3D virtual reality environment to be made locally, without, for example, an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map, and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
    Type: Application
    Filed: August 17, 2015
    Publication date: December 10, 2015
    Applicant: LEGEND3D, INC.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Publication number: 20150249815
    Abstract: A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment.
    Type: Application
    Filed: May 11, 2015
    Publication date: September 3, 2015
    Applicant: LEGEND3D, INC.
    Inventors: Jared Sandrew, Tony Baldridge, Jacqueline McFarland, Scott Jones, Thomas Schad
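
The conversion pipeline of patent 9609307 ends with generating stereoscopic image pairs by pixel displacement and filling the gaps the displacement creates. Below is a minimal Python sketch of that final step, assuming a single frame plus a per-pixel depth map; the function name, the `max_shift` parameter, and the nearest-pixel gap fill are illustrative assumptions, not the patented method.

```python
import numpy as np

def depth_to_stereo(frame: np.ndarray, depth: np.ndarray, max_shift: int = 8):
    """Shift pixels horizontally by a depth-derived disparity to form a
    left/right pair, then fill the gaps the displacement leaves behind.
    `max_shift` and the nearest-pixel fill are illustrative assumptions."""
    h, w = depth.shape
    disparity = (depth / depth.max() * max_shift).astype(int)
    views = []
    for sign in (+1, -1):                     # +1 -> left eye, -1 -> right eye
        view = np.zeros_like(frame)
        written = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                nx = x + sign * disparity[y, x]
                if 0 <= nx < w:
                    view[y, nx] = frame[y, x]
                    written[y, nx] = True
        # Crude gap fill: copy the nearest written pixel from the left.
        for y in range(h):
            for x in range(1, w):
                if not written[y, x]:
                    view[y, x] = view[y, x - 1]
        views.append(view)
    return views[0], views[1]                 # (left, right)
```

In the patent's pipeline, the depth map fed to a step like this would itself come from learned transformation functions (point clouds or geometric shapes fit to masked objects), with operator input correcting the result where needed.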
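Patent 9407904 projects the depth-annotated composite spherical image onto left- and right-eye image planes. The geometry of that projection is sketched below under two assumptions the abstract does not specify: an equirectangular sphere parameterization and a fixed interocular distance.

```python
import numpy as np

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit direction on the
    viewing sphere (longitude/latitude parameterization, an assumption)."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def stereo_view_rays(u, v, width, height, depth_m, eye_sep_m=0.065):
    """Place the sphere point at its assigned depth, then re-project it
    toward eyes offset +/- half the interocular distance along x.
    depth_m and eye_sep_m are illustrative values, not patent parameters."""
    point = equirect_to_direction(u, v, width, height) * depth_m
    rays = []
    for ex in (-eye_sep_m / 2.0, eye_sep_m / 2.0):
        ray = point - np.array([ex, 0.0, 0.0])
        rays.append(ray / np.linalg.norm(ray))
    return rays[0], rays[1]                   # (left-eye ray, right-eye ray)
```

Because the rays depend on where the point sits in 3D space, nearby regions diverge more between the two eyes than distant ones, which is what produces the stereoscopic depth effect the abstract describes.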
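Patent 9288476's spherical translation map records, for each point of the sphere, the pixel shift between the left and right stereo images, so depth edits to the map take effect without re-rendering the 3D model. A sketch of applying such a map, assuming an equirectangular layout with horizontal wrap (the function signature and names are assumptions):

```python
import numpy as np

def apply_translation_map(mono: np.ndarray, shift_map: np.ndarray):
    """Produce left/right equirectangular views by shifting each pixel
    horizontally by the translation map's value, wrapping around the
    sphere's seam. Signature and wrap behavior are assumptions."""
    h, w = shift_map.shape
    xs = np.arange(w)
    left = np.empty_like(mono)
    right = np.empty_like(mono)
    for y in range(h):
        s = shift_map[y].astype(int)
        left[y] = mono[y, (xs - s) % w]       # sample source pixel per column
        right[y] = mono[y, (xs + s) % w]
    return left, right

# A depth edit is then an in-place change to the map followed by a cheap
# re-application -- no re-render of the full 3D model:
#   shift_map[region_mask] += 2
#   left, right = apply_translation_map(mono, shift_map)
```

This is what makes the review cycle real-time in the abstract's sense: the expensive render from the 3D model happens once, and subsequent depth tweaks only touch the map.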
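Patents 9282321 and 20150358613 describe a coordinator station that aggregates each reviewer's pose and any regions flagged for review or modification, and propagates model updates to all stations. A minimal bookkeeping sketch follows; the class and field names are hypothetical, and a real system would add networking and per-pose rendering.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewerState:
    """Pose and flagged regions reported by one review station.
    Field names are hypothetical, not the patent's schema."""
    reviewer_id: str
    yaw: float = 0.0      # viewing orientation within the shared scene
    pitch: float = 0.0
    flagged_regions: list = field(default_factory=list)

class CoordinatorStation:
    """Tracks every reviewer's viewpoint and flagged regions so the
    coordinator can observe all simultaneous reviews of the 3D model."""

    def __init__(self):
        self.reviewers: dict[str, ReviewerState] = {}
        self.model_version = 0

    def report_pose(self, reviewer_id: str, yaw: float, pitch: float) -> None:
        state = self.reviewers.setdefault(reviewer_id, ReviewerState(reviewer_id))
        state.yaw, state.pitch = yaw, pitch

    def flag_region(self, reviewer_id: str, region_id: str) -> None:
        state = self.reviewers.setdefault(reviewer_id, ReviewerState(reviewer_id))
        state.flagged_regions.append(region_id)

    def push_model_update(self) -> dict:
        """Bump the model version; a real system would broadcast this so
        each station re-renders the updated model from its own pose."""
        self.model_version += 1
        return {"type": "model_update",
                "version": self.model_version,
                "recipients": list(self.reviewers)}
```

The coordinator's display would combine `reviewers`' poses and `flagged_regions` over the full model, matching the abstract's description of observing every viewpoint and designated region at once.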