Patents by Inventor Kenneth J. Mitchell

Kenneth J. Mitchell has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches for several of the listed techniques appear after the listing.

  • Patent number: 11961186
    Abstract: Visually seamless grafting of volumetric data. In some implementations, a method includes obtaining volumetric data that represents a first volume including one or more three-dimensional objects. Planar slices of the first volume are determined and for each planar slice, a result region and an outer region are determined, the outer region located between the result region and an edge of the planar slice. A target region is determined within the result region and adjacent to an edge of the result region. The result region is modified by updating source voxels in the target region based on corresponding continuity voxels in the outer region, and the updating is weighted based on a distance of each source voxel from an associated edge of the result region. The modified result regions are grafted to a second volume at the edge of the result regions to provide a grafted volume.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: April 16, 2024
    Assignee: Roblox Corporation
    Inventor: Kenneth J. Mitchell
  • Publication number: 20240005605
    Abstract: Visually seamless grafting of volumetric data. In some implementations, a method includes obtaining volumetric data that represents a first volume including one or more three-dimensional objects. Planar slices of the first volume are determined and for each planar slice, a result region and an outer region are determined, the outer region located between the result region and an edge of the planar slice. A target region is determined within the result region and adjacent to an edge of the result region. The result region is modified by updating source voxels in the target region based on corresponding continuity voxels in the outer region, and the updating is weighted based on a distance of each source voxel from an associated edge of the result region. The modified result regions are grafted to a second volume at the edge of the result regions to provide a grafted volume.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Roblox Corporation
    Inventor: Kenneth J. Mitchell
  • Patent number: 11595630
    Abstract: Techniques to facilitate compression of depth data and real-time reconstruction of high-quality light fields. A parameter space of values for a line, pairs of endpoints on different sides of the line, and a palette index for each pixel of a pixel tile of a depth image is sampled. Values for the line, the pairs of endpoints, and the palette index that minimize an error are determined and stored.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: February 28, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Malgorzata E. Kosek, David A. Sinclair
  • Patent number: 11475647
    Abstract: Systems and methods are presented for immersive and simultaneous animation in a mixed reality environment. Techniques disclosed represent a physical object, present at a scene, in a 3D space of a virtual environment associated with the scene. A virtual element is posed relative to the representation of the physical object in the virtual environment. The virtual element is displayed to users from a perspective of each user in the virtual environment. Responsive to an interaction of one user with the virtual element, an edit command is generated and the pose of the virtual element is adjusted in the virtual environment according to the edit command. The display of the virtual element to the users is then updated according to the adjusted pose. When simultaneous and conflicting edit commands are generated by collaborating users, policies to reconcile the conflicting edit commands are disclosed.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: October 18, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Corey D. Drake, Kenneth J. Mitchell, Rachel E. Rodgers, Joseph G. Hager, IV, Kyna P. McIntosh, Ye Pan
  • Publication number: 20220215634
    Abstract: Systems and methods are presented for immersive and simultaneous animation in a mixed reality environment. Techniques disclosed represent a physical object, present at a scene, in a 3D space of a virtual environment associated with the scene. A virtual element is posed relative to the representation of the physical object in the virtual environment. The virtual element is displayed to users from a perspective of each user in the virtual environment. Responsive to an interaction of one user with the virtual element, an edit command is generated and the pose of the virtual element is adjusted in the virtual environment according to the edit command. The display of the virtual element to the users is then updated according to the adjusted pose. When simultaneous and conflicting edit commands are generated by collaborating users, policies to reconcile the conflicting edit commands are disclosed.
    Type: Application
    Filed: April 28, 2021
    Publication date: July 7, 2022
    Inventors: Corey D. Drake, Kenneth J. Mitchell, Rachel E. Rodgers, Joseph G. Hager, IV, Kyna P. McIntosh, Ye Pan
  • Patent number: 11288859
    Abstract: Embodiments provide techniques for rendering augmented reality effects on an image of a user's face in real time. The method generally includes receiving an image of a face of a user. A global facial depth map and a luminance map are generated based on the captured image. The captured image is segmented into a plurality of segments. For each segment in the plurality of segments, a displacement energy of the respective segment is minimized using a least-squares minimization of a linear system for the respective segment. The displacement energy is generally defined by a relationship between a detailed depth map, the global facial depth map and the luminance map. The detailed depth map is generated based on the minimized displacement energy for each segment in the plurality of segments. One or more visual effects are rendered over the captured image using the generated detailed depth map.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Llogari Casas Cambra, Yue Li
  • Publication number: 20220014725
    Abstract: Techniques to facilitate compression of depth data and real-time reconstruction of high-quality light fields. A parameter space of values for a line, pairs of endpoints on different sides of the line, and a palette index for each pixel of a pixel tile of a depth image is sampled. Values for the line, the pairs of endpoints, and the palette index that minimize an error are determined and stored.
    Type: Application
    Filed: September 23, 2021
    Publication date: January 13, 2022
    Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Malgorzata E. Kosek, David A. Sinclair
  • Publication number: 20210375029
    Abstract: Embodiments provide techniques for rendering augmented reality effects on an image of a user's face in real time. The method generally includes receiving an image of a face of a user. A global facial depth map and a luminance map are generated based on the captured image. The captured image is segmented into a plurality of segments. For each segment in the plurality of segments, a displacement energy of the respective segment is minimized using a least-squares minimization of a linear system for the respective segment. The displacement energy is generally defined by a relationship between a detailed depth map, the global facial depth map and the luminance map. The detailed depth map is generated based on the minimized displacement energy for each segment in the plurality of segments. One or more visual effects are rendered over the captured image using the generated detailed depth map.
    Type: Application
    Filed: June 1, 2020
    Publication date: December 2, 2021
    Inventors: Kenneth J. Mitchell, Llogari Casas Cambra, Yue Li
  • Patent number: 11153550
    Abstract: Systems, methods, and articles of manufacture are disclosed that enable the compression of depth data and real-time reconstruction of high-quality light fields. In one aspect, spatial compression and decompression of depth images are divided into the following stages: generating a quadtree data structure for each depth image captured by a light field probe and its associated difference mask, with each node of the quadtree approximating a corresponding portion of the depth image data using an approximating function; generating, from the quadtree for each depth image, a runtime packed form that is more lightweight and has a desired maximum error; assembling multiple such runtime packed forms into per-probe stream(s); and decoding the assembled per-probe stream(s) at runtime. Further, a block compression format is disclosed for approximating depth data by augmenting the 3Dc+ (BC4) block compression format with a line and two pairs of endpoints.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: October 19, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Malgorzata E. Kosek, David A. Sinclair
  • Patent number: 11087529
    Abstract: Embodiments provide for the rendering of illumination effects on real-world objects in augmented reality systems. An example method generally includes overlaying a shader on the augmented reality display. The shader generally corresponds to a three-dimensional geometry of an environment in which the augmented reality display is operating, and the shader generally comprises a plurality of vertices forming a plurality of polygons. A computer-generated lighting source is introduced into the augmented reality display. One or more polygons of the shader are illuminated based on the computer-generated lighting source, thereby illuminating one or more real-world objects in the environment with direct lighting from the computer-generated lighting source and reflected and refracted lighting from surfaces in the environment.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: August 10, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Jason A. Yeung, Kenneth J. Mitchell, Timothy M. Panec, Elliott H. Baumbach, Corey D. Drake
  • Patent number: 11024098
    Abstract: Systems and methods are presented for immersive and simultaneous animation in a mixed reality environment. Techniques disclosed represent a physical object, present at a scene, in a 3D space of a virtual environment associated with the scene. A virtual element is posed relative to the representation of the physical object in the virtual environment. The virtual element is displayed to users from a perspective of each user in the virtual environment. Responsive to an interaction of one user with the virtual element, an edit command is generated and the pose of the virtual element is adjusted in the virtual environment according to the edit command. The display of the virtual element to the users is then updated according to the adjusted pose. When simultaneous and conflicting edit commands are generated by collaborating users, policies to reconcile the conflicting edit commands are disclosed.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: June 1, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Corey D. Drake, Kenneth J. Mitchell, Rachel E. Rodgers, Joseph G. Hager, IV, Kyna P. McIntosh, Ye Pan
  • Publication number: 20210110001
    Abstract: Techniques for animatronic design are provided. A plurality of simulated meshes is generated using a physics simulation model, where the plurality of simulated meshes corresponds to a plurality of actuator configurations for an animatronic mechanical design. A machine learning model is trained based on the plurality of simulated meshes and the plurality of actuator configurations. A plurality of predicted meshes is generated for the animatronic mechanical design, using the machine learning model, based on a second plurality of actuator configurations. Virtual animation of the animatronic mechanical design is facilitated based on the plurality of predicted meshes.
    Type: Application
    Filed: October 15, 2019
    Publication date: April 15, 2021
    Inventors: Kenneth J. Mitchell, Matthew W. McCrory, Jeremy Oliveira Stolarz, Joel D. Castellon, Moritz N. Bächer, Alfredo M. Ayala, Jr.
  • Publication number: 20210097757
    Abstract: Embodiments provide for the rendering of illumination effects on real-world objects in augmented reality systems. An example method generally includes overlaying a shader on the augmented reality display. The shader generally corresponds to a three-dimensional geometry of an environment in which the augmented reality display is operating, and the shader generally comprises a plurality of vertices forming a plurality of polygons. A computer-generated lighting source is introduced into the augmented reality display. One or more polygons of the shader are illuminated based on the computer-generated lighting source, thereby illuminating one or more real-world objects in the environment with direct lighting from the computer-generated lighting source and reflected and refracted lighting from surfaces in the environment.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Jason A. Yeung, Kenneth J. Mitchell, Timothy M. Panec, Elliott H. Baumbach, Corey D. Drake
  • Patent number: 10937220
    Abstract: Embodiments provide for animation streaming for media interaction by receiving, at a generator, inputs from a target device presenting a virtual environment; updating, based on the inputs, a model of the virtual environment; determining network conditions between the generator and target device; generating a packet that includes a forecasted animation set for a virtual object in the updated model that comprises rig updates for the virtual object for at least two different states, where the number of states included in the packet is based on the network conditions; and streaming the packet to the target device, where the target device: receives a second input to interact with the virtual environment that changes the virtual environment to a given state; selects and applies a rig update associated with the given state to a local model of the virtual object; and outputs the updated local model on the target device.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: March 2, 2021
    Assignee: Disney Enterprises, Inc.
    Inventor: Kenneth J. Mitchell
  • Patent number: 10902343
    Abstract: Training data from multiple types of sensors, captured in previous capture sessions, can be fused within a physics-based tracking framework to train motion priors using different deep learning techniques, such as convolutional neural networks (CNNs) and Recurrent Temporal Restricted Boltzmann Machines (RTRBMs). In embodiments employing one or more CNNs, two streams of filters can be used: one stream of filters learns temporal information and the other stream learns spatial information. In embodiments employing one or more RTRBMs, all visible nodes of the RTRBMs can be clamped with values obtained from the training data or data synthesized from the training data. In cases where sensor data is unavailable, the input nodes may be unclamped and the one or more RTRBMs can generate the missing sensor data.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: January 26, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Sheldon Andrews, Ivan Huerta Casado, Kenneth J. Mitchell, Leonid Sigal
  • Publication number: 20200334886
    Abstract: Embodiments provide for animation streaming for media interaction by receiving, at a generator, inputs from a target device presenting a virtual environment; updating, based on the inputs, a model of the virtual environment; determining network conditions between the generator and target device; generating a packet that includes a forecasted animation set for a virtual object in the updated model that comprises rig updates for the virtual object for at least two different states, where the number of states included in the packet is based on the network conditions; and streaming the packet to the target device, where the target device: receives a second input to interact with the virtual environment that changes the virtual environment to a given state; selects and applies a rig update associated with the given state to a local model of the virtual object; and outputs the updated local model on the target device.
    Type: Application
    Filed: April 22, 2019
    Publication date: October 22, 2020
    Inventor: Kenneth J. Mitchell
  • Patent number: 10783704
    Abstract: Techniques for constructing a three-dimensional model of facial geometry are disclosed. A first three-dimensional model of an object is generated, based on a plurality of captured images of the object. A projected three-dimensional model of the object is determined, based on a plurality of identified blendshapes relating to the object. A second three-dimensional model of the object is generated, based on the first three-dimensional model of the object and the projected three-dimensional model of the object.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: September 22, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Frederike Dümbgen, Shuang Liu
  • Patent number: 10636201
    Abstract: Systems, methods, and articles of manufacture for real-time rendering using compressed animated light fields are disclosed. One embodiment provides a pipeline, from offline rendering of an animated scene from sparse optimized viewpoints to real-time rendering of the scene with freedom of movement, that includes three stages: offline preparation and rendering, stream compression, and real-time decompression and reconstruction. During offline rendering, optimal placements for cameras in the scene are determined, and color and depth images are rendered using such cameras. Color and depth data is then compressed using an integrated spatial and temporal scheme permitting high performance on graphics processing units for virtual reality applications.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: April 28, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Malgorzata E. Kosek, David A. Sinclair
  • Publication number: 20200105056
    Abstract: Techniques for constructing a three-dimensional model of facial geometry are disclosed. A first three-dimensional model of an object is generated, based on a plurality of captured images of the object. A projected three-dimensional model of the object is determined, based on a plurality of identified blendshapes relating to the object. A second three-dimensional model of the object is generated, based on the first three-dimensional model of the object and the projected three-dimensional model of the object.
    Type: Application
    Filed: September 27, 2018
    Publication date: April 2, 2020
    Inventors: Kenneth J. Mitchell, Frederike Dümbgen, Shuang Liu
  • Patent number: 10482570
    Abstract: A system for performing memory allocation for seamless media content presentation includes a computing platform having a CPU, a GPU having a GPU memory, and a main memory storing a memory allocation software code. The CPU executes the memory allocation software code to transfer a first dataset of media content to the GPU memory, seamlessly present the media content to a system user, register a location of the system user during the seamless presentation of the media content, and register a timecode status of the media content at the location. The CPU further executes the memory allocation software code to identify a second dataset of the media content based on the location and the timecode status, transfer a first differential dataset to the GPU memory, continue to seamlessly present the media content to the system user, and transfer a second differential dataset out of the GPU memory.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: November 19, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Charalampos Koniaris, Floyd M. Chitalu
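
Illustrative code sketches

The sketches below illustrate, in simplified Python, the core techniques described in several of the abstracts above. They are editorial approximations: function names, parameters, and energy or weighting choices that are not stated in the abstracts are assumptions, not the patented implementations.

For patent 11961186 (visually seamless grafting of volumetric data), the distance-weighted update of source voxels in the target region can be shown on a single planar slice. A minimal numpy/scipy sketch, assuming the regions are given as boolean masks, that the outer region's continuity values have been resampled onto the slice grid, and that the weighting falls off linearly with distance from the result-region edge (the abstract does not specify the falloff):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_target_region(slice_vox, result_mask, target_mask, outer_vals, falloff=4.0):
    """Update source voxels in the target region of one planar slice.

    slice_vox   : 2D array of voxel values for this slice (the result region's data).
    result_mask : bool mask of the result region within the slice.
    target_mask : bool mask of the target region (inside the result region, near its edge).
    outer_vals  : 2D array of continuity values taken from the outer region,
                  resampled onto the same grid (an assumption of this sketch).
    falloff     : distance in voxels over which the continuity values stop
                  influencing the result; the exact weighting is an assumption.
    """
    # Distance of every result voxel from the edge of the result region.
    dist_to_edge = distance_transform_edt(result_mask)

    # Linear weight: 1.0 at the edge, 0.0 once we are `falloff` voxels inside.
    w = np.clip(1.0 - dist_to_edge / falloff, 0.0, 1.0)

    out = slice_vox.copy()
    t = target_mask & result_mask
    # Near the edge, follow the continuity voxels from the outer region;
    # deeper inside, keep the original source voxels.
    out[t] = w[t] * outer_vals[t] + (1.0 - w[t]) * slice_vox[t]
    return out
```

Applying this per slice and then attaching the modified result regions to the second volume along their edges yields the grafted volume described in the abstract.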
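
For patent 11595630 (and the matching publication 20220014725), the encoder samples a parameter space consisting of a line, endpoint pairs on each side of the line, and per-pixel palette indices for a depth tile. A brute-force sketch over that space, assuming a small floating-point tile (e.g. 4x4), a line parameterized by angle and offset, and a palette of values interpolated between each side's endpoints:

```python
import numpy as np

def encode_depth_tile(tile, n_angles=8, n_offsets=8, n_levels=4):
    """Search a (line, endpoint-pair, palette-index) parameter space for one
    depth tile and keep the combination with minimum squared error.
    A simplified stand-in for the encoder described in the abstract."""
    h, w = tile.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best = None
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        nx, ny = np.cos(theta), np.sin(theta)
        proj = nx * xs + ny * ys
        for offset in np.linspace(proj.min(), proj.max(), n_offsets):
            side = proj >= offset                     # which side of the line
            err = 0.0
            recon = np.empty_like(tile)
            endpoints = []
            idx = np.zeros((h, w), dtype=int)
            for s in (False, True):
                m = side == s
                if not m.any():
                    endpoints.append((0.0, 0.0))
                    continue
                lo, hi = tile[m].min(), tile[m].max()   # endpoint pair for this side
                levels = np.linspace(lo, hi, n_levels)  # palette entries
                k = np.abs(tile[m][:, None] - levels[None, :]).argmin(axis=1)
                idx[m] = k
                recon[m] = levels[k]
                endpoints.append((float(lo), float(hi)))
                err += float(((recon[m] - tile[m]) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, (theta, offset), endpoints, idx.copy())
    return best  # (error, line parameters, endpoint pairs, per-pixel palette indices)
```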
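
For patents 11475647 and 11024098 (immersive and simultaneous animation in a mixed reality environment), the abstracts mention policies that reconcile simultaneous, conflicting edit commands without specifying them. A sketch of one plausible policy (highest priority wins, ties broken by earliest timestamp), with a hypothetical EditCommand structure:

```python
from dataclasses import dataclass

@dataclass
class EditCommand:
    user_id: str
    element_id: str
    new_pose: tuple          # e.g. (position, rotation); representation is an assumption
    timestamp: float
    priority: int = 0        # e.g. a supervising user could carry a higher priority

def reconcile(commands):
    """Reduce simultaneous edit commands to one winner per virtual element.
    Policy here: highest priority wins, ties broken by earliest timestamp."""
    winners = {}
    for cmd in commands:
        cur = winners.get(cmd.element_id)
        if cur is None or (cmd.priority, -cmd.timestamp) > (cur.priority, -cur.timestamp):
            winners[cmd.element_id] = cmd
    return winners

def apply_edits(scene_poses, commands):
    """Adjust element poses in the shared virtual environment; the updated poses
    would then be re-displayed from each collaborating user's perspective."""
    for element_id, cmd in reconcile(commands).items():
        scene_poses[element_id] = cmd.new_pose
    return scene_poses
```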
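
For patent 11288859 (augmented reality effects on a face image), the displacement energy relating the detailed depth map, the global facial depth map, and the luminance map is minimized per segment with a least-squares solve. The exact energy is not given in the abstract; the sketch below assumes a common form in which the detailed depth stays close to the global depth while its gradients follow scaled luminance gradients, and is intended for small segment patches:

```python
import numpy as np

def detail_depth_for_segment(global_depth, luminance, w_data=1.0, w_grad=0.5, alpha=0.1):
    """Solve one segment's detailed depth map as a linear least-squares problem
    (an assumed, simplified form of the displacement energy in the abstract)."""
    h, w = global_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, rhs = [], []

    # Data term: detailed depth ~= global facial depth at every pixel.
    rows.append(np.eye(n) * w_data)
    rhs.append(w_data * global_depth.ravel())

    # Horizontal gradient term: D[y, x+1] - D[y, x] ~= alpha * (L[y, x+1] - L[y, x]).
    gx = np.zeros(((w - 1) * h, n))
    r = 0
    for y in range(h):
        for x in range(w - 1):
            gx[r, idx[y, x + 1]] = w_grad
            gx[r, idx[y, x]] = -w_grad
            r += 1
    rows.append(gx)
    rhs.append(w_grad * alpha * np.diff(luminance, axis=1).ravel())

    # Vertical gradient term, built the same way.
    gy = np.zeros(((h - 1) * w, n))
    r = 0
    for y in range(h - 1):
        for x in range(w):
            gy[r, idx[y + 1, x]] = w_grad
            gy[r, idx[y, x]] = -w_grad
            r += 1
    rows.append(gy)
    rhs.append(w_grad * alpha * np.diff(luminance, axis=0).ravel())

    A = np.vstack(rows)
    b = np.concatenate(rhs)
    detail, *_ = np.linalg.lstsq(A, b, rcond=None)
    return detail.reshape(h, w)
```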
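
For patent 11153550 (depth compression for light fields), the first stage builds a quadtree per depth image in which each node approximates its region up to a maximum error. A sketch using a constant-per-node approximation on a power-of-two square depth image; the packing into per-probe streams is omitted, and the per-node approximating function is an assumption:

```python
import numpy as np

def build_quadtree(depth, x, y, size, max_error):
    """Recursively approximate a square region of a depth image. Each node stores
    a constant approximation of its region and subdivides until the error is
    within `max_error` or the region is a single texel. The returned nested dict
    is what a packing step would serialize into a lightweight runtime form."""
    region = depth[y:y + size, x:x + size]
    value = float(region.mean())
    error = float(np.abs(region - value).max())
    if error <= max_error or size == 1:
        return {"x": x, "y": y, "size": size, "value": value, "children": None}
    half = size // 2
    children = [
        build_quadtree(depth, x + dx, y + dy, half, max_error)
        for dy in (0, half) for dx in (0, half)
    ]
    return {"x": x, "y": y, "size": size, "value": value, "children": children}

def decode_quadtree(node, out):
    """Decode a quadtree back into a dense depth image in place."""
    if node["children"] is None:
        s = node["size"]
        out[node["y"]:node["y"] + s, node["x"]:node["x"] + s] = node["value"]
    else:
        for child in node["children"]:
            decode_quadtree(child, out)
```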
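
For patent 11087529 (illumination effects on real-world objects in augmented reality), the overlaid shader lights polygons of the environment's proxy geometry from a computer-generated source. A sketch of the direct Lambertian term per triangle, assuming numpy arrays for the proxy mesh; reflected and refracted contributions would require additional bounce passes not shown here:

```python
import numpy as np

def polygon_lighting(vertices, triangles, light_pos, light_color, intensity=1.0):
    """Per-triangle direct lighting of the environment proxy geometry from a
    virtual point light. The result would be composited additively over the
    camera image so real surfaces appear lit by the virtual source."""
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    centers = (v0 + v1 + v2) / 3.0
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    to_light = light_pos - centers
    dist = np.linalg.norm(to_light, axis=1, keepdims=True)
    to_light /= dist

    # Lambertian term with inverse-square falloff.
    ndotl = np.clip((normals * to_light).sum(axis=1, keepdims=True), 0.0, None)
    return intensity * ndotl / (dist ** 2) * np.asarray(light_color)
```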
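
For publication 20210110001 (animatronic design), simulated meshes produced by a physics model for many actuator configurations are used to train a model that then predicts meshes for new configurations. The sketch below uses a linear ridge-regression surrogate purely as a stand-in for the unspecified machine learning model:

```python
import numpy as np

def train_mesh_surrogate(actuator_configs, simulated_meshes, ridge=1e-3):
    """Fit a surrogate mapping actuator configurations to mesh vertex positions.

    actuator_configs : (N, A) array, one actuator configuration per sample.
    simulated_meshes : (N, V, 3) array of meshes from the physics simulation.
    """
    n, v, _ = simulated_meshes.shape
    X = np.hstack([actuator_configs, np.ones((n, 1))])      # add bias column
    Y = simulated_meshes.reshape(n, v * 3)
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_meshes(W, actuator_configs):
    """Predict meshes for new actuator configurations, e.g. to drive a virtual
    animation preview of the animatronic mechanical design."""
    n = len(actuator_configs)
    X = np.hstack([actuator_configs, np.ones((n, 1))])
    return (X @ W).reshape(n, -1, 3)
```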
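
For patent 10937220 (animation streaming for media interaction), the generator packs a forecasted animation set whose number of states depends on measured network conditions, and the target device applies the rig update matching the state its interaction actually produced. A sketch with a hypothetical packet structure and an assumed sizing policy:

```python
from dataclasses import dataclass, field

@dataclass
class ForecastPacket:
    frame: int
    # Maps a forecast state id to the rig update applied if that state occurs.
    rig_updates: dict = field(default_factory=dict)

def states_to_forecast(rtt_ms, base=2, per_100ms=2, cap=16):
    """Pick how many forecast states to pack: worse network conditions mean the
    generator must cover more possible futures. The exact policy is an assumption."""
    return min(cap, base + per_100ms * int(rtt_ms // 100))

def build_packet(frame, candidate_states, rig_update_for, rtt_ms):
    """Generator side: pack rig updates for the most likely upcoming states."""
    n = states_to_forecast(rtt_ms)
    return ForecastPacket(frame, {s: rig_update_for(s) for s in candidate_states[:n]})

def apply_packet(local_rig, packet, observed_state):
    """Target-device side: select the rig update matching the state the user's
    interaction produced, apply it to the local model, and output it."""
    update = packet.rig_updates.get(observed_state)
    if update is not None:
        local_rig.update(update)        # assumes a dict-like local rig model
    return local_rig
```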
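
For patent 10902343 (training motion priors from fused sensor data), the CNN variant uses one stream of filters for temporal information and another for spatial information. A compact PyTorch sketch of such a two-stream network, with arbitrary illustrative layer sizes; the RTRBM variant is not shown:

```python
import torch
import torch.nn as nn

class TwoStreamMotionPrior(nn.Module):
    """One stream sees per-frame sensor features (spatial information), the other
    sees frame-to-frame differences (temporal information); both are fused into a
    pose prediction. Sizes and layer choices are illustrative assumptions."""

    def __init__(self, n_features, pose_dim):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.temporal = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, pose_dim)

    def forward(self, x):
        # x: (batch, n_features, n_frames) fused multi-sensor features.
        diffs = x[:, :, 1:] - x[:, :, :-1]          # temporal differences
        s = self.spatial(x).squeeze(-1)             # (batch, 32)
        t = self.temporal(diffs).squeeze(-1)        # (batch, 32)
        return self.head(torch.cat([s, t], dim=1))  # predicted pose / motion prior
```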
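
For patent 10783704 (constructing a three-dimensional model of facial geometry), a projected model is determined from identified blendshapes and combined with the capture-based model. A least-squares sketch, assuming the blendshapes are given as vertex deltas from a neutral mesh and that the two models are combined by a simple per-vertex blend:

```python
import numpy as np

def project_onto_blendshapes(captured_mesh, neutral, blendshapes):
    """Find blendshape weights whose combination best matches the captured mesh
    (least squares), giving the 'projected' three-dimensional model.

    captured_mesh, neutral : (V, 3) vertex arrays.
    blendshapes            : (K, V, 3) array of blendshape deltas from neutral.
    """
    k = blendshapes.shape[0]
    B = blendshapes.reshape(k, -1).T                 # (3V, K) basis matrix
    d = (captured_mesh - neutral).reshape(-1)        # (3V,) observed offset
    weights, *_ = np.linalg.lstsq(B, d, rcond=None)
    projected = neutral + (B @ weights).reshape(-1, 3)
    return weights, projected

def refined_model(captured_mesh, projected_mesh, blend=0.5):
    """Combine the capture-based model with the blendshape-projected model; a
    plain per-vertex blend stands in for whatever combination the patent uses."""
    return blend * projected_mesh + (1.0 - blend) * captured_mesh
```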
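
For patent 10482570 (memory allocation for seamless media content presentation), datasets are swapped in and out of GPU memory as differential datasets derived from the user's registered location and timecode status. A small sketch of that differential residency update, with the asset-selection logic and transfer callbacks left as assumptions:

```python
def update_gpu_residency(gpu_resident, required_ids, load_to_gpu, evict_from_gpu):
    """Keep GPU memory in sync with what the current location/timecode needs.

    gpu_resident : set of dataset chunk ids currently in GPU memory.
    required_ids : set of chunk ids needed for the user's registered location and
                   timecode status (how that set is derived is not shown here).
    load_to_gpu / evict_from_gpu : callbacks performing the actual transfers, so
    presentation can continue seamlessly while transfers stream in.
    """
    first_diff = required_ids - gpu_resident      # first differential dataset: load
    second_diff = gpu_resident - required_ids     # second differential dataset: evict
    for chunk in first_diff:
        load_to_gpu(chunk)
    for chunk in second_diff:
        evict_from_gpu(chunk)
    return (gpu_resident | first_diff) - second_diff
```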