Patents by Inventor Thomas Schad

Thomas Schad has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9438878
    Abstract: Method for converting 2D video to 3D video using 3D object models. Embodiments of the invention obtain a 3D object model for one or more objects in a 2D video scene, such as a character. Object models may for example be derived from 3D scanner data; planes, polygons, or surfaces may be fit to this data to generate a 3D model. In each frame in which a modeled object appears, the location and orientation of the 3D model may be determined in the frame, and a depth map for the object may be generated from the model. 3D video may be generated using the depth map. Embodiments may use feature tracking to automatically determine object location and orientation. Embodiments may use rigged 3D models with degrees of freedom to model objects with parts that move relative to one another.
    Type: Grant
    Filed: September 17, 2015
    Date of Patent: September 6, 2016
    Assignee: Legend3D, Inc.
    Inventors: Vicente Niebla, Jr., Tony Baldridge, Thomas Schad, Scott Jones
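    The depth-from-model step this abstract describes — fitting a plane to an object, posing it in camera space, and reading a per-pixel depth map from it to drive stereo generation — can be sketched as follows. This is an illustrative sketch under a simple pinhole-camera assumption, not the patented implementation; all function names and parameters are hypothetical:

    ```python
    import math

    def plane_depth(px, py, f, cx, cy, normal, offset):
        """Depth (z) at pixel (px, py) where the camera ray meets the
        plane n . p = offset.  Camera at the origin, looking down +z;
        f is focal length in pixels, (cx, cy) the principal point."""
        # Ray direction through the pixel (unnormalized).
        dx, dy, dz = (px - cx) / f, (py - cy) / f, 1.0
        denom = normal[0] * dx + normal[1] * dy + normal[2] * dz
        if abs(denom) < 1e-9:
            return None          # ray parallel to the plane: no depth
        t = offset / denom       # ray parameter at the intersection
        return t * dz            # z component of the hit point = depth

    def parallax_shift(depth, baseline, f):
        """Horizontal pixel shift between left- and right-eye images
        for a point at the given depth (standard disparity relation)."""
        return baseline * f / depth
    ```

    For a fronto-parallel plane (normal `(0, 0, 1)`, offset 5), every pixel inside the object's mask gets depth 5; nearer objects then produce larger left/right shifts, which is what makes them pop forward in the generated 3D video.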
  • Patent number: 9407904
    Abstract: A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: August 2, 2016
    Assignee: Legend3D, Inc.
    Inventors: Jared Sandrew, Tony Baldridge, Jacqueline McFarland, Scott Jones, Thomas Schad
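    The sphere-mapping step above — projecting composite imagery onto a sphere around the viewer and back out to image planes — reduces to a pair of coordinate transforms. A minimal sketch assuming an equirectangular (longitude/latitude) spherical layout; the projection choice and names are assumptions, not taken from the patent:

    ```python
    import math

    def pixel_to_dir(u, v, width, height):
        """Equirectangular pixel -> unit direction on the viewing sphere."""
        lon = (u / width) * 2 * math.pi - math.pi       # -pi .. pi
        lat = math.pi / 2 - (v / height) * math.pi      # +pi/2 (top) .. -pi/2
        return (math.cos(lat) * math.sin(lon),
                math.sin(lat),
                math.cos(lat) * math.cos(lon))

    def dir_to_pixel(d, width, height):
        """Inverse mapping: unit direction -> equirectangular pixel."""
        lon = math.atan2(d[0], d[2])
        lat = math.asin(max(-1.0, min(1.0, d[1])))
        u = (lon + math.pi) / (2 * math.pi) * width
        v = (math.pi / 2 - lat) / math.pi * height
        return u, v
    ```

    With these two transforms, each 2D source image can be splatted onto the composite sphere, and the final left/right eye views can be re-sampled from it for whatever direction the viewer is facing.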
  • Patent number: 9288476
    Abstract: Enables real-time, local depth modifications to stereo images of a 3D virtual reality environment, without an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map, and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: March 15, 2016
    Assignee: Legend3D, Inc.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
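    The core idea here — storing per-point horizontal pixel shifts in a translation map and editing that map instead of re-rendering the 3D model — can be sketched on a single scanline. Illustrative only; the map representation and function names are assumptions:

    ```python
    def apply_translation_map(left_row, shifts):
        """Re-project one scanline of the left image into the right image
        using per-pixel horizontal shifts (one slice of the spherical
        translation map)."""
        right_row = [None] * len(left_row)
        for x, (pixel, shift) in enumerate(zip(left_row, shifts)):
            tx = x + shift
            if 0 <= tx < len(right_row):
                right_row[tx] = pixel
        return right_row

    def push_back(shifts, region, factor):
        """Depth edit: scale the shifts inside `region` (a start/stop
        pair).  Only the map changes -- no re-render of the 3D model."""
        new = list(shifts)
        for x in range(*region):
            new[x] = int(round(new[x] * factor))
        return new
    ```

    Pushing a region deeper just scales its shifts, and the stereo pair is regenerated from the map in one cheap pass — which is what makes the review-and-tweak loop real time.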
  • Patent number: 9282321
    Abstract: A system that coordinates multiple simultaneous reviews of a 3D model, potentially from different viewpoints. Embodiments support multiple reviewers using review stations that render images of the model based on the pose of the reviewer. For example, multiple reviewers may use virtual reality headsets to observe a 3D virtual environment from different orientations. A coordinator uses a coordinator station to observe the entire 3D model and the viewpoints of each of the reviewers in this 3D model. Embodiments may support reviewer designation of regions within the model that require review or modification; these designated regions are also displayed on the coordinator station. Embodiments may support real time updates to the 3D model and propagation of updated images to the coordinator and to the multiple viewers.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: March 8, 2016
    Assignee: Legend3D, Inc.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
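    The coordinator's bookkeeping — tracking every reviewer's viewpoint plus the regions they flag for rework — amounts to a small shared data structure. A hedged sketch; the record layout and all names are assumptions, not the patented system:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ReviewSession:
        """Coordinator-side record of one shared 3D-model review."""
        poses: dict = field(default_factory=dict)    # reviewer -> (yaw, pitch)
        flags: list = field(default_factory=list)    # (reviewer, region, note)

        def update_pose(self, reviewer, yaw, pitch):
            # Each headset streams its orientation to the coordinator.
            self.poses[reviewer] = (yaw, pitch)

        def flag_region(self, reviewer, region_id, note):
            # A reviewer marks part of the model as needing modification;
            # the coordinator station sees every flag and every viewpoint.
            self.flags.append((reviewer, region_id, note))

        def overview(self):
            return {"viewpoints": dict(self.poses), "flags": list(self.flags)}
    ```

    Model updates would then be pushed back out to every review station, closing the propagation loop the abstract describes.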
  • Patent number: 9241147
    Abstract: External depth map transformation method for converting two-dimensional images to stereoscopic images that provides increased artistic and technical flexibility and rapid conversion of movies for stereoscopic viewing. Embodiments of the invention convert the large set of highly granular depth information inherent in a depth map associated with a two-dimensional image to a smaller set of rotated planes associated with masked areas in the image. This enables the planes to be manipulated independently or as part of a group, and it avoids many problems associated with importing external depth maps, minimizing the errors that frequently exist in such maps.
    Type: Grant
    Filed: May 1, 2013
    Date of Patent: January 19, 2016
    Assignee: Legend3D, Inc.
    Inventors: Matthew DeJohn, Thomas Schad, Scott Jones
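    Collapsing a dense depth map region into a single rotated plane is essentially a least-squares plane fit. A self-contained sketch of that fit (illustrative; the abstract does not specify the patent's actual fitting procedure, and the helper names are hypothetical):

    ```python
    def fit_plane(points):
        """Least-squares plane z = a*x + b*y + c through (x, y, z) samples:
        collapses a masked depth-map region into one rotated plane."""
        # Accumulate the 3x3 normal equations.
        sxx = sxy = sx = syy = sy = n = 0.0
        sxz = syz = sz = 0.0
        for x, y, z in points:
            sxx += x * x; sxy += x * y; sx += x
            syy += y * y; sy += y; n += 1
            sxz += x * z; syz += y * z; sz += z
        A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
        b = [sxz, syz, sz]
        return _solve3(A, b)

    def _solve3(A, b):
        """Gaussian elimination with partial pivoting for a 3x3 system."""
        m = [row[:] + [bi] for row, bi in zip(A, b)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, 3):
                f = m[r][col] / m[col][col]
                for c in range(col, 4):
                    m[r][c] -= f * m[col][c]
        x = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):
            x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
        return x  # (a, b, c)
    ```

    Fitting `z = ax + by + c` over a region's samples replaces thousands of noisy per-pixel depths with three parameters, which an artist can then rotate or push as a unit — the manipulability the abstract claims.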
  • Publication number: 20160005228
    Abstract: Method for converting 2D video to 3D video using 3D object models. Embodiments of the invention obtain a 3D object model for one or more objects in a 2D video scene, such as a character. Object models may for example be derived from 3D scanner data; planes, polygons, or surfaces may be fit to this data to generate a 3D model. In each frame in which a modeled object appears, the location and orientation of the 3D model may be determined in the frame, and a depth map for the object may be generated from the model. 3D video may be generated using the depth map. Embodiments may use feature tracking to automatically determine object location and orientation. Embodiments may use rigged 3D models with degrees of freedom to model objects with parts that move relative to one another.
    Type: Application
    Filed: September 17, 2015
    Publication date: January 7, 2016
    Applicant: Legend3D, Inc.
    Inventors: Vicente Niebla, Jr., Tony Baldridge, Thomas Schad, Scott Jones
  • Publication number: 20150358613
    Abstract: A system that coordinates multiple simultaneous reviews of a 3D model, potentially from different viewpoints. Embodiments support multiple reviewers using review stations that render images of the model based on the pose of the reviewer. For example, multiple reviewers may use virtual reality headsets to observe a 3D virtual environment from different orientations. A coordinator uses a coordinator station to observe the entire 3D model and the viewpoints of each of the reviewers in this 3D model. Embodiments may support reviewer designation of regions within the model that require review or modification; these designated regions are also displayed on the coordinator station. Embodiments may support real time updates to the 3D model and propagation of updated images to the coordinator and to the multiple viewers.
    Type: Application
    Filed: August 17, 2015
    Publication date: December 10, 2015
    Applicant: Legend3D, Inc.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Publication number: 20150358612
    Abstract: Enables real-time, local depth modifications to stereo images of a 3D virtual reality environment, without an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map, and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
    Type: Application
    Filed: August 17, 2015
    Publication date: December 10, 2015
    Applicant: Legend3D, Inc.
    Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Publication number: 20150249815
    Abstract: A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment.
    Type: Application
    Filed: May 11, 2015
    Publication date: September 3, 2015
    Applicant: Legend3D, Inc.
    Inventors: Jared Sandrew, Tony Baldridge, Jacqueline McFarland, Scott Jones, Thomas Schad
  • Patent number: 9007404
    Abstract: A tilt-based look-around image enhancement method that enables two-dimensional images to be depth enhanced and displayed, for example from a different point of view, based on the tilt, orientation, and/or movement of the viewing device itself. Embodiments may further alter or otherwise utilize different parallax maps to apply depth to the image based on the display type, e.g., two-dimensional or stereoscopic, and may dynamically utilize both techniques on a display capable of both types of output. In addition, embodiments may display information foreign to the image when portions of the image are exposed during the look-around effect, including advertisements, game information, hyperlinks, or any other data not originally in the image.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 14, 2015
    Assignee: Legend3D, Inc.
    Inventors: Matt DeJohn, Jared Sandrew, Thomas Schad, Scott Jones
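    The look-around effect — near layers shifting more than far layers as the device tilts, exposing previously hidden pixels — can be sketched as a tiny back-to-front compositor over one scanline. A toy model; the layer format, gain parameter, and names are all assumptions:

    ```python
    def render_layers(layers, tilt_x, gain=1.0):
        """Composite (depth, {position: pixel}) layers back-to-front;
        each layer's parallax offset is inversely proportional to its
        depth, so near content slides further as the device tilts."""
        out = {}
        for depth, pixels in sorted(layers, reverse=True):   # far first
            dx = round(tilt_x * gain / depth)
            for x, pixel in pixels.items():
                out[x + dx] = pixel                          # near overwrites far
        return out
    ```

    With `layers = [(10, {0: 'bg', 1: 'bg'}), (2, {1: 'fg'})]`, the foreground covers position 1 at rest; tilting slides the foreground aside and reveals the background there — exactly the kind of newly exposed region where the abstract's "foreign" content (advertisements, hyperlinks) could be drawn instead.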
  • Publication number: 20140327736
    Abstract: External depth map transformation method for converting two-dimensional images to stereoscopic images that provides increased artistic and technical flexibility and rapid conversion of movies for stereoscopic viewing. Embodiments of the invention convert the large set of highly granular depth information inherent in a depth map associated with a two-dimensional image to a smaller set of rotated planes associated with masked areas in the image. This enables the planes to be manipulated independently or as part of a group, and it avoids many problems associated with importing external depth maps, minimizing the errors that frequently exist in such maps.
    Type: Application
    Filed: May 1, 2013
    Publication date: November 6, 2014
    Applicant: Legend3D, Inc.
    Inventors: Matthew DeJohn, Thomas Schad, Scott Jones
  • Publication number: 20140267235
    Abstract: A tilt-based look-around image enhancement method that enables two-dimensional images to be depth enhanced and displayed, for example from a different point of view, based on the tilt, orientation, and/or movement of the viewing device itself. Embodiments may further alter or otherwise utilize different parallax maps to apply depth to the image based on the display type, e.g., two-dimensional or stereoscopic, and may dynamically utilize both techniques on a display capable of both types of output. In addition, embodiments may display information foreign to the image when portions of the image are exposed during the look-around effect, including advertisements, game information, hyperlinks, or any other data not originally in the image.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: Legend3D, Inc.
    Inventors: Matt DeJohn, Jared Sandrew, Thomas Schad, Scott Jones