Patents by Inventor Tony Baldridge
Tony Baldridge has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9615082
Abstract: The system enables conversion of black and white images to color images and/or two-dimensional images into three-dimensional images by adding color and/or depth to images using masks for regions in the images, as well as reshaping of masks to cover objects that have moved and changed shape as the objects move through a sequence of images. Also includes a motion picture project management system for reviewers, coordinators, and artists. Artists utilize image analysis, image enhancement, and computer graphics processing, for example to convert two-dimensional images into three-dimensional images or otherwise create or alter motion pictures. Enables efficient management of motion picture projects so that enterprises can manage assets, control costs, predict budgets and profit margins, reduce archival storage, and provide displays tailored to specific roles to increase worker efficiency.
Type: Grant
Filed: March 14, 2016
Date of Patent: April 4, 2017
Assignee: LEGEND3D, Inc.
Inventors: Barry Sandrew, Anthony Lopez, Jared Sandrew, Tony Baldridge
-
Patent number: 9609307
Abstract: A machine learning method that learns to convert 2D video to 3D video from a set of training examples. Uses machine learning to perform any or all of the 2D-to-3D conversion steps of identifying and locating objects, masking objects, modeling object depth, generating stereoscopic image pairs, and filling gaps created by pixel displacement for depth effects. Training examples comprise inputs and outputs for the conversion steps. The machine learning system generates transformation functions that produce the outputs from the inputs; these functions may then be used on new 2D videos to automate or semi-automate the conversion process. Operator input may be used to augment the results of the machine learning system. Illustrative representations for conversion data in the training examples include object tags to identify objects and locate their features, Bézier curves to mask object regions, and point clouds or geometric shapes to model object depth.
Type: Grant
Filed: December 14, 2015
Date of Patent: March 28, 2017
Assignee: LEGEND3D, INC.
Inventors: Anthony Lopez, Jacqueline McFarland, Tony Baldridge
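As a rough illustration of the "transformation function learned from training pairs" idea described in the abstract, the toy sketch below (not the patented method; all data is fabricated) fits a linear function that predicts a per-pixel depth value from pixel intensity, using input frames paired with artist-produced depth maps, and then applies the learned function to a new frame:

```python
import numpy as np

# Toy illustration (not the patented method): learn a linear
# "transformation function" depth = a*intensity + b from
# (input, output) training pairs, then apply it to a new 2D frame.

# Hypothetical training data: grayscale frames plus depth maps that,
# for this sketch, we pretend artists produced for those frames.
rng = np.random.default_rng(0)
train_frames = rng.random((4, 8, 8))        # four 8x8 training frames
train_depths = 0.7 * train_frames + 0.1     # stand-in "artist" depth maps

# Fit the linear function by least squares over all training pixels.
a, b = np.polyfit(train_frames.ravel(), train_depths.ravel(), 1)

# The learned function can now semi-automate depth for a new frame.
new_frame = rng.random((8, 8))
predicted_depth = a * new_frame + b
```

A real system along these lines would learn far richer functions (the abstract mentions object tags, Bézier masks, and point clouds as representations), but the train-then-apply flow is the same.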
-
Publication number: 20170085863
Abstract: A machine learning method that learns to convert 2D video to 3D video from a set of training examples. Uses machine learning to perform any or all of the 2D-to-3D conversion steps of identifying and locating objects, masking objects, modeling object depth, generating stereoscopic image pairs, and filling gaps created by pixel displacement for depth effects. Training examples comprise inputs and outputs for the conversion steps. The machine learning system generates transformation functions that produce the outputs from the inputs; these functions may then be used on new 2D videos to automate or semi-automate the conversion process. Operator input may be used to augment the results of the machine learning system. Illustrative representations for conversion data in the training examples include object tags to identify objects and locate their features, Bézier curves to mask object regions, and point clouds or geometric shapes to model object depth.
Type: Application
Filed: December 14, 2015
Publication date: March 23, 2017
Applicant: LEGEND3D, INC.
Inventors: Anthony Lopez, Jacqueline McFarland, Tony Baldridge
-
Patent number: 9438878
Abstract: Method for converting 2D video to 3D video using 3D object models. Embodiments of the invention obtain a 3D object model for one or more objects in a 2D video scene, such as a character. Object models may for example be derived from 3D scanner data; planes, polygons, or surfaces may be fit to this data to generate a 3D model. In each frame in which a modeled object appears, the location and orientation of the 3D model may be determined in the frame, and a depth map for the object may be generated from the model. 3D video may be generated using the depth map. Embodiments may use feature tracking to automatically determine object location and orientation. Embodiments may use rigged 3D models with degrees of freedom to model objects with parts that move relative to one another.
Type: Grant
Filed: September 17, 2015
Date of Patent: September 6, 2016
Assignee: LEGEND3D, Inc.
Inventors: Vicente Niebla, Jr., Tony Baldridge, Thomas Schad, Scott Jones
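The plane-fitting step the abstract describes can be sketched in a few lines. This is a simplified illustration under invented data, not the patent's implementation: fit a plane to hypothetical 3D scanner points for an object, rasterize a per-pixel depth map from the fitted plane over the frame region the object occupies, and derive a disparity map for stereo generation:

```python
import numpy as np

# Sketch under simplifying assumptions (not the patent's implementation).
h, w = 6, 8
xs, ys = np.meshgrid(np.arange(w), np.arange(h))

# Hypothetical scanner points lying on the plane z = 0.01*x + 0.02*y + 5.
zs = 0.01 * xs + 0.02 * ys + 5.0
pts = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)

# Fit z = a*x + b*y + c by least squares (the "plane fit to scanner data").
A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
(a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)

# Depth map rasterized from the model; nearer pixels get larger disparity,
# which drives the horizontal pixel displacement for the stereo pair.
depth = a * xs + b * ys + c
disparity = 10.0 / depth
```

The scale factor 10.0 is an arbitrary placeholder for the camera/viewer geometry that a real pipeline would supply.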
-
Patent number: 9407904
Abstract: A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment.
Type: Grant
Filed: May 11, 2015
Date of Patent: August 2, 2016
Assignee: LEGEND3D, INC.
Inventors: Jared Sandrew, Tony Baldridge, Jacqueline McFarland, Scott Jones, Thomas Schad
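The core geometric fact behind projecting depth-tagged regions to left and right eye image planes is that a region positioned nearer to the viewer produces a larger angular disparity between the eyes. A minimal sketch of that relationship, using an assumed interocular distance (the function name and parameters are illustrative, not from the patent):

```python
import math

# Toy sketch: angular disparity between the two eyes for a scene region
# positioned at a given distance, given the interocular separation.
def disparity_radians(depth_m, eye_separation_m=0.064):
    # Each eye sits half the separation off-axis; the angle subtended
    # between the two sight lines to the point is twice the half-angle.
    return 2.0 * math.atan((eye_separation_m / 2.0) / depth_m)

# A nearby region shows far more disparity than a distant one, which is
# why positioning region surfaces in 3D space creates the stereo effect.
near = disparity_radians(1.0)    # region placed 1 m from the viewer
far = disparity_radians(100.0)   # region placed 100 m from the viewer
```

Regenerating the left/right projections as the viewer moves, using this kind of per-region geometry, is what makes the environment respond dynamically to position and orientation changes.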
-
Publication number: 20160198142
Abstract: The system enables conversion of black and white images to color images and/or two-dimensional images into three-dimensional images by adding color and/or depth to images using masks for regions in the images, as well as reshaping of masks to cover objects that have moved and changed shape as the objects move through a sequence of images. Also includes a motion picture project management system for reviewers, coordinators, and artists. Artists utilize image analysis, image enhancement, and computer graphics processing, for example to convert two-dimensional images into three-dimensional images or otherwise create or alter motion pictures. Enables efficient management of motion picture projects so that enterprises can manage assets, control costs, predict budgets and profit margins, reduce archival storage, and provide displays tailored to specific roles to increase worker efficiency.
Type: Application
Filed: March 14, 2016
Publication date: July 7, 2016
Applicant: LEGEND3D, Inc.
Inventors: Barry Sandrew, Anthony Lopez, Jared Sandrew, Tony Baldridge
-
Patent number: 9286941
Abstract: The system enables conversion of black and white images to color images and/or two-dimensional images into three-dimensional images by adding color and/or depth to images using masks for regions in the images, as well as reshaping of masks to cover objects that have moved and changed shape as the objects move through a sequence of images. Also includes a motion picture project management system for reviewers, coordinators, and artists. Artists utilize image analysis, image enhancement, and computer graphics processing, for example to convert two-dimensional images into three-dimensional images or otherwise create or alter motion pictures. Enables efficient management of motion picture projects so that enterprises can manage assets, control costs, predict budgets and profit margins, reduce archival storage, and provide displays tailored to specific roles to increase worker efficiency.
Type: Grant
Filed: May 11, 2015
Date of Patent: March 15, 2016
Assignee: LEGEND3D, Inc.
Inventors: Barry Sandrew, Tony Baldridge, Jared Sandrew, Anthony Lopez
-
System and method for real-time depth modification of stereo images of a virtual reality environment
Patent number: 9288476
Abstract: Enables real-time depth modifications to stereo images of a 3D virtual reality environment locally, without an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map, and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
Type: Grant
Filed: August 17, 2015
Date of Patent: March 15, 2016
Assignee: LEGEND3D, INC.
Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
-
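A minimal sketch of the translation-map idea, under heavy simplifications (a flat 2D map rather than a spherical one, and invented data; not the patented implementation): the map stores a per-pixel horizontal shift between left and right images, so a depth edit is just an edit of the map followed by re-applying it, with no re-render of the 3D model:

```python
import numpy as np

# Simplified stand-in for a spherical translation map: a per-pixel
# horizontal shift between the left-eye and right-eye images.
def apply_translation_map(left, shift_map):
    """Build the right-eye image by shifting each left-eye pixel."""
    h, w = left.shape
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            nx = x + shift_map[y, x]
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
    return right

left = np.arange(12.0).reshape(3, 4)
shift = np.ones((3, 4), dtype=int)       # uniform 1-pixel shift
right = apply_translation_map(left, shift)

# A depth modification is a direct edit of the map, re-applied instantly.
shift[1, :] = 2                          # change the depth of row 1 only
right2 = apply_translation_map(left, shift)
```

Because only the cheap shift step re-runs, edits can be previewed in real time; the expensive model re-render is avoided entirely.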
Patent number: 9282321
Abstract: A system that coordinates multiple simultaneous reviews of a 3D model, potentially from different viewpoints. Embodiments support multiple reviewers using review stations that render images of the model based on the pose of the reviewer. For example, multiple reviewers may use virtual reality headsets to observe a 3D virtual environment from different orientations. A coordinator uses a coordinator station to observe the entire 3D model and the viewpoints of each of the reviewers in this 3D model. Embodiments may support reviewer designation of regions within the model that require review or modification; these designated regions are also displayed on the coordinator station. Embodiments may support real-time updates to the 3D model and propagation of updated images to the coordinator and to the multiple viewers.
Type: Grant
Filed: August 17, 2015
Date of Patent: March 8, 2016
Assignee: LEGEND3D, INC.
Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
-
Publication number: 20160005228
Abstract: Method for converting 2D video to 3D video using 3D object models. Embodiments of the invention obtain a 3D object model for one or more objects in a 2D video scene, such as a character. Object models may for example be derived from 3D scanner data; planes, polygons, or surfaces may be fit to this data to generate a 3D model. In each frame in which a modeled object appears, the location and orientation of the 3D model may be determined in the frame, and a depth map for the object may be generated from the model. 3D video may be generated using the depth map. Embodiments may use feature tracking to automatically determine object location and orientation. Embodiments may use rigged 3D models with degrees of freedom to model objects with parts that move relative to one another.
Type: Application
Filed: September 17, 2015
Publication date: January 7, 2016
Applicant: LEGEND3D, Inc.
Inventors: Vicente Niebla, Jr., Tony Baldridge, Thomas Schad, Scott Jones
-
System and method for real-time depth modification of stereo images of a virtual reality environment
Publication number: 20150358612
Abstract: Enables real-time depth modifications to stereo images of a 3D virtual reality environment locally, without an iterative workflow in which region designers or depth artists re-render these images from the original 3D model. Embodiments generate a spherical translation map from the 3D model of the virtual environment; this spherical translation map is a function of the pixel shifts between left and right stereo images for each point of the sphere surrounding the viewer of the virtual environment. Modifications may be made directly to the spherical translation map, and applied directly to the stereo images, without requiring re-rendering of the scene from the complete 3D model. This process enables depth modifications to be viewed in real time, greatly improving the efficiency of the 3D model creation, review, and update cycle.
Type: Application
Filed: August 17, 2015
Publication date: December 10, 2015
Applicant: LEGEND3D, INC.
Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
-
Publication number: 20150358613
Abstract: A system that coordinates multiple simultaneous reviews of a 3D model, potentially from different viewpoints. Embodiments support multiple reviewers using review stations that render images of the model based on the pose of the reviewer. For example, multiple reviewers may use virtual reality headsets to observe a 3D virtual environment from different orientations. A coordinator uses a coordinator station to observe the entire 3D model and the viewpoints of each of the reviewers in this 3D model. Embodiments may support reviewer designation of regions within the model that require review or modification; these designated regions are also displayed on the coordinator station. Embodiments may support real-time updates to the 3D model and propagation of updated images to the coordinator and to the multiple viewers.
Type: Application
Filed: August 17, 2015
Publication date: December 10, 2015
Applicant: LEGEND3D, INC.
Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Jacqueline McFarland, Scott Jones, Thomas Schad
-
Publication number: 20150249815
Abstract: A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Depth information may be generated by mapping regions onto flat or curved surfaces, and positioning these surfaces in 3D space. Some embodiments enable inserting, removing, or extending objects in the scene, adding or modifying depth information as needed. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Embodiments enable 3D images to be generated dynamically for the viewer in response to changes in the viewer's position and orientation in the virtual reality environment.
Type: Application
Filed: May 11, 2015
Publication date: September 3, 2015
Applicant: LEGEND3D, INC.
Inventors: Jared Sandrew, Tony Baldridge, Jacqueline McFarland, Scott Jones, Thomas Schad
-
Publication number: 20150243324
Abstract: The system enables conversion of black and white images to color images and/or two-dimensional images into three-dimensional images by adding color and/or depth to images using masks for regions in the images, as well as reshaping of masks to cover objects that have moved and changed shape as the objects move through a sequence of images. Also includes a motion picture project management system for reviewers, coordinators, and artists. Artists utilize image analysis, image enhancement, and computer graphics processing, for example to convert two-dimensional images into three-dimensional images or otherwise create or alter motion pictures. Enables efficient management of motion picture projects so that enterprises can manage assets, control costs, predict budgets and profit margins, reduce archival storage, and provide displays tailored to specific roles to increase worker efficiency.
Type: Application
Filed: May 11, 2015
Publication date: August 27, 2015
Applicant: LEGEND3D, Inc.
Inventors: Barry Sandrew, Tony Baldridge, Jared Sandrew, Anthony Lopez
-
Patent number: 9007365
Abstract: A line depth augmentation system and method for conversion of 2D images to 3D images. Enables adding depth to regions by altering the depth of lines in the regions, for example in cel animation images or regions of limited color range. Eliminates creation of wireframe or other depth models and complex modeling of regions to match the depth of lines therein. Enables rapid conversion of two-dimensional images to three-dimensional images by enabling stereographers to quickly add or alter line depth without artifacts in images, for example lines in monochrome regions. Embodiments may output a stereoscopic image pair with lines having the desired depth, or any other three-dimensional-viewing-enabled image, such as an anaglyph image. Although the lines may be of a different depth than the region they appear in, the human mind interprets the monochromatic region as having the depth associated with the line.
Type: Grant
Filed: November 27, 2012
Date of Patent: April 14, 2015
Assignee: Legend3D, Inc.
Inventors: Barry Sandrew, Jared Sandrew, Jill Hunt, Tony Baldridge, James Prola
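A hedged sketch of the line-depth idea (an illustration of the concept, not the patented algorithm): in a flat monochrome region, displace only the dark "line" pixels horizontally to give the lines stereo disparity; the surrounding region is left untouched, yet the viewer perceives it at the line's depth:

```python
import numpy as np

def shift_lines(image, line_mask, disparity):
    """Return a right-eye view with only the line pixels displaced."""
    right = image.copy()
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            if line_mask[y, x]:
                right[y, x] = 1.0          # fill line's old position with region color
                nx = x + disparity
                if 0 <= nx < w:
                    right[y, nx] = image[y, x]
    return right

img = np.ones((4, 6))          # flat white cel region
img[:, 2] = 0.0                # a vertical black line
mask = img == 0.0              # mask selecting only the line pixels
right_eye = shift_lines(img, mask, 1)   # line moves one pixel right
```

Note that no depth model of the region is built at all, which is the labor saving the abstract claims: only the line pixels carry explicit disparity.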
-
Patent number: 8953905
Abstract: Movies to be colorized/depth enhanced (2D->3D) are broken into backgrounds/sets or motion/onscreen-action. Background and motion elements are combined into a composite frame, which becomes a visual reference database that includes data for all frame offsets used later for the computer-controlled application of masks within a sequence of frames. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods, including automated mask fitting/reshaping. Colors/depths are automatically applied with masks throughout a scene from the composite background and to motion objects. Areas never exposed by motion or foreground objects may be partially or fully realistically drawn/rendered and applied to the occluded areas throughout the images to generate artifact-free secondary viewpoints during 2D->3D conversion.
Type: Grant
Filed: June 7, 2012
Date of Patent: February 10, 2015
Assignee: Legend3D, Inc.
Inventors: Jared Sandrew, Tony Baldridge, Anthony Lopez, Timothy Tranquill, Craig Cesareo
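The automated mask fitting/reshaping step mentioned in this family of abstracts can be illustrated with a toy sketch (all names and data are invented; this is not the patented algorithm): given a mask drawn on one frame, search for the horizontal offset that best re-aligns the mask with the object in the next frame, so colors/depths assigned via the mask follow the object's motion:

```python
import numpy as np

def refit_mask(prev_frame, next_frame, mask, max_shift=3):
    """Return (shifted_mask, offset) minimizing pixel mismatch under the mask."""
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(mask, s, axis=1)
        # Compare pixel values under the candidate mask against the
        # values under the original mask in the previous frame.
        err = np.sum((next_frame[shifted] - prev_frame[mask]) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return np.roll(mask, best_s, axis=1), best_s

# Object occupies columns 2-3, then moves two pixels right in the next frame.
prev = np.zeros((4, 8)); prev[:, 2:4] = 1.0
nxt = np.zeros((4, 8));  nxt[:, 4:6] = 1.0
mask = prev == 1.0
new_mask, offset = refit_mask(prev, nxt, mask)
```

A production system would also reshape masks for non-rigid motion; this sketch only handles pure translation, which is enough to show why the composite-frame reference data makes per-frame mask application automatable.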
-
Patent number: 8897596
Abstract: Motion picture scenes to be colorized/depth enhanced (2D->3D) are broken into separate elements: backgrounds/sets or motion/onscreen-action. Background and motion elements are combined into a composite frame, which becomes a visual reference database that includes data for all frame offsets used later for the computer-controlled application of masks within a sequence of frames. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods, including automated mask fitting/reshaping. Colors and/or depths are automatically applied to masks throughout a scene from the composite background and to translucent and motion objects.
Type: Grant
Filed: February 6, 2012
Date of Patent: November 25, 2014
Assignee: Legend3D, Inc.
Inventors: Charles Passmore, Tony Baldridge, Barry Sandrew
-
Publication number: 20140146037
Abstract: A line depth augmentation system and method for conversion of 2D images to 3D images. Enables adding depth to regions by altering the depth of lines in the regions, for example in cel animation images or regions of limited color range. Eliminates creation of wireframe or other depth models and complex modeling of regions to match the depth of lines therein. Enables rapid conversion of two-dimensional images to three-dimensional images by enabling stereographers to quickly add or alter line depth without artifacts in images, for example lines in monochrome regions. Embodiments may output a stereoscopic image pair with lines having the desired depth, or any other three-dimensional-viewing-enabled image, such as an anaglyph image. Although the lines may be of a different depth than the region they appear in, the human mind interprets the monochromatic region as having the depth associated with the line.
Type: Application
Filed: November 27, 2012
Publication date: May 29, 2014
Applicant: LEGEND 3D, INC.
Inventors: Barry Sandrew, Jared Sandrew, Jill Hunt, Tony Baldridge, James Prola
-
Patent number: 8401336
Abstract: Motion picture scenes to be colorized/depth enhanced (2D->3D) are broken into separate elements: backgrounds/sets or motion/onscreen-action. Background and motion elements are combined into a composite frame, which becomes a visual reference database that includes data for all frame offsets used later for the computer-controlled application of masks within a sequence of frames. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods, including automated mask fitting/reshaping. Colors and/or depths are automatically applied to masks throughout a scene from the composite background and to motion objects.
Type: Grant
Filed: December 22, 2010
Date of Patent: March 19, 2013
Assignee: Legend3D, Inc.
Inventors: Tony Baldridge, Barry Sandrew
-
Patent number: 8396328
Abstract: Motion picture scenes to be colorized/depth enhanced (2D->3D) are broken into separate elements: backgrounds/sets or motion/onscreen-action. Background and motion elements are combined into a composite frame, which becomes a visual reference database that includes data for all frame offsets used later for the computer-controlled application of masks within a sequence of frames. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods, including automated mask fitting/reshaping. Colors and/or depths are automatically applied to masks throughout a scene from the composite background and to motion objects.
Type: Grant
Filed: October 27, 2010
Date of Patent: March 12, 2013
Assignee: Legend3D, Inc.
Inventors: Barry Sandrew, Tony Baldridge