Patents by Inventor Tobias Anderberg
Tobias Anderberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250069346
Abstract: Tracking a control object on a stage for a virtual production, including: tracking a control object on the stage with a system that tracks at least one camera used in the virtual production; identifying a location in a virtual environment using tracking information of the control object; and placing virtual assets at the identified location in the virtual environment.
Type: Application
Filed: August 24, 2023
Publication date: February 27, 2025
Inventors: Tobias Anderberg, Joseph Wise
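A minimal sketch of the workflow this abstract describes: take the pose reported by the stage tracking system for the control object, map it into the virtual environment's coordinate space, and place an asset there. The class names, the scale/offset transform, and the placement API are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    x: float
    y: float
    z: float

@dataclass
class VirtualEnvironment:
    placed_assets: dict = field(default_factory=dict)  # asset name -> location

    def place(self, asset_name: str, location: Pose) -> None:
        self.placed_assets[asset_name] = location

def stage_to_virtual(stage_pose: Pose, scale: float, offset: Pose) -> Pose:
    """Map a tracked stage-space pose into virtual-environment coordinates."""
    return Pose(stage_pose.x * scale + offset.x,
                stage_pose.y * scale + offset.y,
                stage_pose.z * scale + offset.z)

# Example: the camera-tracking system reports the control object at (1.2, 0.0, 3.4).
env = VirtualEnvironment()
tracked = Pose(1.2, 0.0, 3.4)
location = stage_to_virtual(tracked, scale=1.0, offset=Pose(0.0, 0.0, 0.0))
env.place("set_dressing_tree", location)
print(env.placed_assets)
```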
-
Patent number: 12206838
Abstract: Reducing data used during capture in a physical capture volume by selectively activating image capture devices from a virtual view, including: setting up a virtual camera to receive information about and visualize the physical capture volume and a plurality of image capture devices in the virtual view; providing, to the virtual camera, the virtual view of the physical capture volume with a capability to move around the physical capture volume and activate or deactivate each of the plurality of image capture devices; calculating a view frustum, wherein the view frustum is a region of 3-D space within the physical capture volume that would appear on a view screen of the virtual camera; and defining the view frustum of the virtual camera which intersects with the plurality of image capture devices defined in the virtual view.
Type: Grant
Filed: September 24, 2021
Date of Patent: January 21, 2025
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventors: Tobias Anderberg, David Bailey
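An illustrative sketch (not the patented method) of the frustum-based selection idea: keep active only the capture devices whose positions fall inside a simplified view frustum of the virtual camera, modeled here as a field-of-view cone with near and far limits.

```python
import math

def inside_frustum(cam_pos, cam_dir, device_pos, fov_deg=60.0, near=0.1, far=20.0):
    """Return True if device_pos lies within the camera's cone-shaped frustum."""
    to_dev = [d - c for d, c in zip(device_pos, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_dev))
    if dist < near or dist > far or dist == 0.0:
        return False
    # Angle between the viewing direction and the vector to the device.
    dot = sum(a * b for a, b in zip(cam_dir, to_dev)) / dist
    return math.degrees(math.acos(max(-1.0, min(1.0, dot)))) <= fov_deg / 2.0

def select_active_devices(cam_pos, cam_dir, devices):
    """devices: device id -> position; returns the ids to keep active."""
    return {dev_id for dev_id, pos in devices.items()
            if inside_frustum(cam_pos, cam_dir, pos)}

devices = {"cam01": (2.0, 0.0, 5.0), "cam02": (0.0, 0.0, -8.0)}
active = select_active_devices((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), devices)
print(active)  # only the device in front of the virtual camera stays active
```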
-
Patent number: 12094309
Abstract: Managing cameras and controlling video display, including: identifying a first group of cameras, each camera in the first group having a respective image stream; identifying a second group of cameras, each camera in the second group having a respective image stream, and each camera in the second group is not in the first group; assigning a name to each group; assigning a name to each camera; displaying the name of each group in a user interface on a computer system; displaying the name of each camera in the user interface; receiving a selection of a group through the user interface; displaying the image stream for each camera in the selected group simultaneously; receiving a selection of one camera through the user interface; and displaying the image stream for the selected camera.
Type: Grant
Filed: December 10, 2020
Date of Patent: September 17, 2024
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventors: Tobias Anderberg, Scott Metzger
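A minimal data-model sketch of the grouping and selection behavior described above; the group names, camera names, and stream URL placeholders are assumptions for illustration only, not the claimed user interface.

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    name: str
    stream_url: str  # stands in for the camera's image stream

@dataclass
class CameraManager:
    groups: dict = field(default_factory=dict)  # group name -> list of Camera

    def add_group(self, name: str, cameras: list) -> None:
        self.groups[name] = cameras

    def streams_for_group(self, group_name: str) -> list:
        """Streams to display simultaneously when a group is selected."""
        return [cam.stream_url for cam in self.groups[group_name]]

    def stream_for_camera(self, camera_name: str) -> str:
        """Stream to display when a single camera is selected."""
        for cameras in self.groups.values():
            for cam in cameras:
                if cam.name == camera_name:
                    return cam.stream_url
        raise KeyError(camera_name)

mgr = CameraManager()
mgr.add_group("stage_left", [Camera("A1", "rtsp://a1"), Camera("A2", "rtsp://a2")])
mgr.add_group("stage_right", [Camera("B1", "rtsp://b1")])
print(mgr.streams_for_group("stage_left"))
print(mgr.stream_for_camera("B1"))
```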
-
Patent number: 11734873
Abstract: Capturing and visualizing video, including: capturing video data using a plurality of cameras; sending the captured video data to a first shader; calculating depth information at the first shader using the captured video data; generating a three-dimensional (3-D) point cloud using the depth information; and rendering a visualization image using the 3-D point cloud.
Type: Grant
Filed: August 22, 2023
Date of Patent: August 22, 2023
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventor: Tobias Anderberg
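A CPU stand-in for one stage of the pipeline in this abstract: given a per-pixel depth map (here filled with random values; in the claimed method it is computed by the first shader from the captured video), back-project it into a 3-D point cloud ready for rendering. The pinhole intrinsics are illustrative assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.random.uniform(1.0, 5.0, size=(4, 4))   # placeholder depth values
cloud = depth_to_point_cloud(depth, fx=800.0, fy=800.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3) points ready for visualization
```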
-
Patent number: 11729507
Abstract: Automatically shifting between virtual lens parameters and optical lens parameters for an optical camera positioned within a physical scene, including: providing a spatial awareness of the physical scene including position and orientation of a display screen to the optical camera; creating a feedback loop between the optical camera and lens to enable the lens to communicate lens settings to a scene registration; determining when focus of the lens is beyond an optical focal limit from the lens settings; and enabling the lens, by the scene registration, to automatically shift from using the optical lens parameters to using the virtual lens parameters when the focus of the lens moves beyond the optical focal limit.
Type: Grant
Filed: August 11, 2021
Date of Patent: August 15, 2023
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventor: Tobias Anderberg
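A hedged sketch of the switching logic described above: a scene-registration step receives lens settings over the feedback loop and selects virtual lens parameters once the reported focus distance passes the optical focal limit. The field names and limit value are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LensSettings:
    focus_distance_m: float
    focal_length_mm: float

def select_lens_parameters(settings: LensSettings, optical_focal_limit_m: float) -> str:
    """Return which parameter set the renderer should use for this frame."""
    if settings.focus_distance_m > optical_focal_limit_m:
        return "virtual"   # focus has moved past the optical focal limit
    return "optical"

print(select_lens_parameters(LensSettings(12.0, 35.0), optical_focal_limit_m=8.0))
print(select_lens_parameters(LensSettings(3.0, 35.0), optical_focal_limit_m=8.0))
```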
-
Patent number: 11625861
Abstract: Enabling colorization and color adjustments on 3D point clouds, which are projected onto a 2D view with an equirectangular projection. A user may color regions on the 2D view and preview the changes immediately in a 3D view of the point cloud. Embodiments render the color of each point in the point cloud by testing whether the 2D projection of the point is inside the colored region. Applications may include generation of a color 3D virtual reality environment using point clouds and color-adjusted imagery.
Type: Grant
Filed: November 23, 2020
Date of Patent: April 11, 2023
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventor: Tobias Anderberg
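A rough sketch of the point test described in this abstract: project a 3-D point to equirectangular (longitude/latitude) coordinates and check whether the projection falls inside a user-colored region. The rectangular region shape and coordinate conventions are simplifying assumptions.

```python
import math

def equirectangular_project(x, y, z):
    """Map a 3-D point to normalized (u, v) in [0, 1] x [0, 1]."""
    lon = math.atan2(y, x)                                # [-pi, pi]
    lat = math.asin(z / math.sqrt(x*x + y*y + z*z))       # [-pi/2, pi/2]
    return (lon + math.pi) / (2 * math.pi), (lat + math.pi / 2) / math.pi

def point_color(point, region, color, default=(255, 255, 255)):
    """region: (u_min, v_min, u_max, v_max) painted by the user in the 2D view."""
    u, v = equirectangular_project(*point)
    u_min, v_min, u_max, v_max = region
    return color if (u_min <= u <= u_max and v_min <= v <= v_max) else default

# This point projects to (0.5, 0.5), which lies inside the painted region.
print(point_color((1.0, 0.0, 0.0), region=(0.4, 0.4, 0.6, 0.6), color=(200, 30, 30)))
```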
-
Publication number: 20230101991
Abstract: Reducing data used during capture in a physical capture volume by selectively activating image capture devices from a virtual view, including: setting up a virtual camera to receive information about and visualize the physical capture volume and a plurality of image capture devices in the virtual view; providing, to the virtual camera, the virtual view of the physical capture volume with a capability to move around the physical capture volume and activate or deactivate each of the plurality of image capture devices; calculating a view frustum, wherein the view frustum is a region of 3-D space within the physical capture volume that would appear on a view screen of the virtual camera; and defining the view frustum of the virtual camera which intersects with the plurality of image capture devices defined in the virtual view.
Type: Application
Filed: September 24, 2021
Publication date: March 30, 2023
Inventors: Tobias Anderberg, David Bailey
-
Patent number: 11544903
Abstract: Managing volumetric data, including: defining a view volume in a volume of space, wherein the volumetric data has multiple points in the volume of space and at least one point is in the view volume and at least one point is not in the view volume; defining a grid in the volume of space, the grid having multiple cells and dividing the volume of space into respective cells, wherein each point has a corresponding cell in the grid, and each cell in the grid has zero or more corresponding points; and reducing the number of points for a cell in the grid where that cell is outside the view volume.
Type: Grant
Filed: May 15, 2020
Date of Patent: January 3, 2023
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventors: Brad Hunt, Tobias Anderberg
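A simplified sketch of the reduction step described above: bucket points into grid cells and keep only a fraction of the points whose cell lies outside the view volume. The cell size, axis-aligned view volume, and keep ratio are illustrative assumptions, not the patented scheme.

```python
from collections import defaultdict

def reduce_points(points, cell_size, view_min, view_max, keep_every=10):
    cells = defaultdict(list)
    for p in points:
        cell = tuple(int(c // cell_size) for c in p)
        cells[cell].append(p)

    def cell_in_view(cell):
        center = [(c + 0.5) * cell_size for c in cell]
        return all(lo <= v <= hi for v, lo, hi in zip(center, view_min, view_max))

    kept = []
    for cell, pts in cells.items():
        if cell_in_view(cell):
            kept.extend(pts)                # keep full density inside the view volume
        else:
            kept.extend(pts[::keep_every])  # thin out cells outside the view volume
    return kept

points = [(x * 0.1, 0.0, 0.0) for x in range(200)]
print(len(reduce_points(points, cell_size=1.0, view_min=(0, -1, -1), view_max=(5, 1, 1))))
```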
-
Patent number: 11438497
Abstract: Managing device settings on multiple devices, including: creating a first group of devices including two or more devices, wherein each device in the first group has at least one first device setting; creating a second group of devices including two or more devices, wherein each device in the second group has at least one second device setting and each device in the second group is not in the first group; sending the at least one first device setting to each device in the first group of devices in parallel, so that each device in the first group changes at least one device setting according to the at least one received first device setting; and sending the at least one second device setting to each device in the second group of devices in parallel, so that each device in the second group changes at least one device setting according to the at least one received second device setting.
Type: Grant
Filed: May 14, 2020
Date of Patent: September 6, 2022
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventors: Tobias Anderberg, Scott Metzger
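An illustrative sketch only: push one settings payload to every device in a group concurrently, using a thread pool as the "in parallel" mechanism. The Device class and apply_settings call are assumptions, not a real device API.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    settings: dict = field(default_factory=dict)

    def apply_settings(self, new_settings: dict) -> None:
        # In practice this would be a network call to the physical device.
        self.settings.update(new_settings)

def send_settings_to_group(group, settings):
    """Send the same settings to every device in the group in parallel."""
    with ThreadPoolExecutor(max_workers=len(group)) as pool:
        list(pool.map(lambda dev: dev.apply_settings(settings), group))

group_a = [Device("led_panel_1"), Device("led_panel_2")]
group_b = [Device("tracker_1")]
send_settings_to_group(group_a, {"brightness": 80})
send_settings_to_group(group_b, {"sample_rate": 240})
print(group_a[0].settings, group_b[0].settings)
```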
-
Publication number: 20220264072
Abstract: Machine vision and control, including: instructing a plurality of imaging devices to capture or detect images to calibrate the plurality of imaging devices; recognizing the captured or detected images using machine vision or image recognition; determining a configuration of the plurality of imaging devices using the recognized images; and adjusting, positioning, aligning, and calibrating the plurality of imaging devices using the determined configuration.
Type: Application
Filed: August 18, 2021
Publication date: August 18, 2022
Inventors: Tobias Anderberg, David Bailey
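A structural sketch of the calibration loop described above, with the machine-vision step stubbed out: each imaging device reports where it sees a shared calibration target, and the difference from the expected location becomes that device's alignment correction. All names and the 2-D target model are assumptions for illustration.

```python
def detect_target(image):
    """Stand-in for machine vision / image recognition on a captured image."""
    return image["target_xy"]          # pretend the detector located the target

def calibrate(devices, expected_xy=(0.5, 0.5)):
    corrections = {}
    for name, image in devices.items():
        seen = detect_target(image)
        corrections[name] = (expected_xy[0] - seen[0], expected_xy[1] - seen[1])
    return corrections                  # per-device adjustment to apply

captured = {"imager_1": {"target_xy": (0.52, 0.47)},
            "imager_2": {"target_xy": (0.40, 0.55)}}
print(calibrate(captured))
```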
-
Publication number: 20220264016
Abstract: Automatically shifting between virtual lens parameters and optical lens parameters for an optical camera positioned within a physical scene, including: providing a spatial awareness of the physical scene including position and orientation of a display screen to the optical camera; creating a feedback loop between the optical camera and lens to enable the lens to communicate lens settings to a scene registration; determining when focus of the lens is beyond an optical focal limit from the lens settings; and enabling the lens, by the scene registration, to automatically shift from using the optical lens parameters to using the virtual lens parameters when the focus of the lens moves beyond the optical focal limit.
Type: Application
Filed: August 11, 2021
Publication date: August 18, 2022
Inventor: Tobias Anderberg
-
Publication number: 20210185218
Abstract: Managing device settings on multiple devices, including: creating a first group of devices including two or more devices, wherein each device in the first group has at least one first device setting; creating a second group of devices including two or more devices, wherein each device in the second group has at least one second device setting and each device in the second group is not in the first group; sending the at least one first device setting to each device in the first group of devices in parallel, so that each device in the first group changes at least one device setting according to the at least one received first device setting; and sending the at least one second device setting to each device in the second group of devices in parallel, so that each device in the second group changes at least one device setting according to the at least one received second device setting.
Type: Application
Filed: May 14, 2020
Publication date: June 17, 2021
Inventors: Tobias Anderberg, Scott Metzger
-
Publication number: 20210183222
Abstract: Managing cameras and controlling video display, including: identifying a first group of cameras, each camera in the first group having a respective image stream; identifying a second group of cameras, each camera in the second group having a respective image stream, and each camera in the second group is not in the first group; assigning a name to each group; assigning a name to each camera; displaying the name of each group in a user interface on a computer system; displaying the name of each camera in the user interface; receiving a selection of a group through the user interface; displaying the image stream for each camera in the selected group simultaneously; receiving a selection of one camera through the user interface; and displaying the image stream for the selected camera.
Type: Application
Filed: December 10, 2020
Publication date: June 17, 2021
Inventors: Tobias Anderberg, Scott Metzger
-
Publication number: 20210183144
Abstract: Managing volumetric data, including: defining a view volume in a volume of space, wherein the volumetric data has multiple points in the volume of space and at least one point is in the view volume and at least one point is not in the view volume; defining a grid in the volume of space, the grid having multiple cells and dividing the volume of space into respective cells, wherein each point has a corresponding cell in the grid, and each cell in the grid has zero or more corresponding points; and reducing the number of points for a cell in the grid where that cell is outside the view volume.
Type: Application
Filed: May 15, 2020
Publication date: June 17, 2021
Inventors: Brad Hunt, Tobias Anderberg
-
Publication number: 20210183134
Abstract: Capturing and visualizing video, including: capturing video data using a plurality of cameras; sending the captured video data to a first shader; calculating depth information at the first shader using the captured video data; generating a three-dimensional (3-D) point cloud using the depth information; and rendering a visualization image using the 3-D point cloud.
Type: Application
Filed: May 26, 2020
Publication date: June 17, 2021
Inventor: Tobias Anderberg
-
Publication number: 20210185214
Abstract: Aligning and coloring a volumetric image, including: capturing intensity data using at least one scanner; generating an intensity image using the intensity data, wherein the intensity image includes at least one feature in a scene, the at least one feature including a sample feature; capturing image data using at least one camera, wherein the image data includes color information; generating a camera image using the image data, wherein the camera image includes the sample feature; matching the sample feature in the intensity image with the sample feature in the camera image to align the intensity image and the camera image; and generating a color image by applying the color information to the aligned intensity image.
Type: Application
Filed: May 26, 2020
Publication date: June 17, 2021
Inventors: Tobias Anderberg, Scott Metzger
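A simplified sketch of the alignment idea described above: a feature visible in both the scanner's intensity image and the camera image yields a 2-D offset between the two, and the camera's color is then sampled at the offset-corrected location for each intensity pixel. Pure-translation alignment and the tiny example images are simplifying assumptions.

```python
def align_offset(feature_in_intensity, feature_in_camera):
    """Translation that maps intensity-image coordinates to camera coordinates."""
    return (feature_in_camera[0] - feature_in_intensity[0],
            feature_in_camera[1] - feature_in_intensity[1])

def colorize(intensity_img, camera_img, offset):
    h, w = len(intensity_img), len(intensity_img[0])
    out = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            cr, cc = r + offset[0], c + offset[1]
            if 0 <= cr < len(camera_img) and 0 <= cc < len(camera_img[0]):
                out[r][c] = camera_img[cr][cc]   # apply color to the aligned pixel
    return out

intensity = [[10, 20], [30, 40]]                 # scanner intensity values
camera = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 0)]]
offset = align_offset(feature_in_intensity=(0, 0), feature_in_camera=(0, 1))
print(colorize(intensity, camera, offset))
```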
-
Patent number: 11010912
Abstract: Method that merges two or more point clouds captured from a scene, eliminates redundant points, and retains points that best represent the scene. The method may generally include a detection step, which locates points from different clouds that are close and thus potentially redundant, followed by a selection step that identifies preferred points. Clouds may be represented as range images, which may simplify both steps. Closeness testing may be optimized by dividing range images into tiles and testing tile bounding volumes for intersections between clouds. Selection of preferred points may incorporate user input, or it may be fully or partially automated. User selection may be performed using 2D drawing tools on range images to identify images with preferred views of a scene. Automated selection may assign a quality measure to points based, for example, on the surface resolution of each point cloud scan at overlapping points.
Type: Grant
Filed: July 3, 2019
Date of Patent: May 18, 2021
Assignee: Nurulize, Inc.
Inventors: Tobias Anderberg, Malik Williams
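A toy sketch of the redundancy-elimination idea in this abstract: points from two clouds that fall within a distance threshold are treated as redundant, and the point from the scan with the finer surface resolution (used here as the quality measure) is kept. The brute-force closeness test, threshold, and resolution values are illustrative assumptions; the patent describes range images and tile bounding volumes to accelerate this step.

```python
import math

def merge_clouds(cloud_a, cloud_b, res_a, res_b, threshold=0.05):
    """Each cloud is a list of (x, y, z); a lower resolution value means a finer scan."""
    def close(p, q):
        return math.dist(p, q) < threshold

    merged = list(cloud_a)
    for q in cloud_b:
        redundant = any(close(p, q) for p in cloud_a)
        if not redundant:
            merged.append(q)                 # unique point: always keep
        elif res_b < res_a:
            # Cloud B sees this area at higher resolution; prefer its point.
            merged = [p for p in merged if not close(p, q)] + [q]
    return merged

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.01, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(merge_clouds(a, b, res_a=0.02, res_b=0.01))
```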
-
Publication number: 20210072394
Abstract: Enabling colorization and color adjustments on 3D point clouds, which are projected onto a 2D view with an equirectangular projection. A user may color regions on the 2D view and preview the changes immediately in a 3D view of the point cloud. Embodiments render the color of each point in the point cloud by testing whether the 2D projection of the point is inside the colored region. Applications may include generation of a color 3D virtual reality environment using point clouds and color-adjusted imagery.
Type: Application
Filed: November 23, 2020
Publication date: March 11, 2021
Inventor: Tobias Anderberg
-
Patent number: 10845484
Abstract: Enabling colorization and color adjustments on 3D point clouds, which are projected onto a 2D view with an equirectangular projection. A user may color regions on the 2D view and preview the changes immediately in a 3D view of the point cloud. Embodiments render the color of each point in the point cloud by testing whether the 2D projection of the point is inside the colored region. Applications may include generation of a color 3D virtual reality environment using point clouds and color-adjusted imagery.
Type: Grant
Filed: July 3, 2019
Date of Patent: November 24, 2020
Assignee: Nurulize, Inc.
Inventor: Tobias Anderberg
-
Publication number: 20200273193
Abstract: Method that merges two or more point clouds captured from a scene, eliminates redundant points, and retains points that best represent the scene. The method may generally include a detection step, which locates points from different clouds that are close and thus potentially redundant, followed by a selection step that identifies preferred points. Clouds may be represented as range images, which may simplify both steps. Closeness testing may be optimized by dividing range images into tiles and testing tile bounding volumes for intersections between clouds. Selection of preferred points may incorporate user input, or it may be fully or partially automated. User selection may be performed using 2D drawing tools on range images to identify images with preferred views of a scene. Automated selection may assign a quality measure to points based, for example, on the surface resolution of each point cloud scan at overlapping points.
Type: Application
Filed: July 3, 2019
Publication date: August 27, 2020
Inventors: Tobias Anderberg, Malik Williams