Patents by Inventor Peter M. Hillman

Peter M. Hillman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11887274
    Abstract: An image dataset comprising pixel depth arrays might be processed by an interpolator, wherein interpolation is based on pixel samples. Both the input pixels to be interpolated from and the interpolated pixel might comprise deep pixels, each represented as a list of samples. Accumulation curves might be generated from each input pixel, weights applied, and the accumulation curves combined to form an interpolation accumulation curve. An interpolated deep pixel can be derived from the interpolation accumulation curve, taking into account zero-depth samples as needed. Samples might represent color values of pixels. (A brief illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: January 30, 2024
    Assignee: Unity Technologies SF
    Inventor: Peter M. Hillman
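A minimal sketch of the accumulation-curve idea above, written in Python. The sample layout (a list of (depth, value) pairs per deep pixel), the helper names, and the 50/50 weights are illustrative assumptions, not the patented implementation; zero-depth handling is omitted.

```python
def accumulation_curve(samples):
    """Return (depth, cumulative value) pairs for one deep pixel's samples."""
    total, curve = 0.0, []
    for depth, value in sorted(samples):
        total += value
        curve.append((depth, total))
    return curve

def curve_value_at(curve, depth):
    """Evaluate an accumulation curve (a step function) at a given depth."""
    total = 0.0
    for d, accumulated in curve:
        if d <= depth:
            total = accumulated
        else:
            break
    return total

def interpolate_deep_pixels(pixel_a, pixel_b, weight_a=0.5, weight_b=0.5):
    """Blend two deep pixels by combining their weighted accumulation curves."""
    curve_a = accumulation_curve(pixel_a)
    curve_b = accumulation_curve(pixel_b)
    depths = sorted({d for d, _ in curve_a} | {d for d, _ in curve_b})
    # Interpolation accumulation curve: weighted sum of the two step functions.
    combined = [(d, weight_a * curve_value_at(curve_a, d)
                    + weight_b * curve_value_at(curve_b, d)) for d in depths]
    # Convert the combined curve back into per-sample values for the result pixel.
    result, previous = [], 0.0
    for depth, accumulated in combined:
        result.append((depth, accumulated - previous))
        previous = accumulated
    return result

# Two input deep pixels, each a list of (depth, value) samples.
print(interpolate_deep_pixels([(1.0, 0.2), (3.0, 0.5)], [(2.0, 0.4)]))
```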
  • Patent number: 11810248
    Abstract: An image dataset is processed with a shadow map generated from the objects in a virtual scene that can cast shadows, while the scene is rendered independently of the shadows. The shadow map might be edited separately and then applied to a post-render image of the scene to form a shadowed image. Light factor values for pixels of the shadow map might be stored as summed-area table values. (A brief illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: November 7, 2023
    Assignee: Unity Technologies SF
    Inventor: Peter M. Hillman
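A minimal sketch of storing per-pixel light-factor values as a summed-area table, so the average shadowing over a region can be read with four lookups and applied to a post-render pixel. The grid layout and function names are illustrative assumptions, not the patented implementation.

```python
def summed_area_table(light_factors):
    """Build a summed-area table from a 2D list of light-factor values."""
    rows, cols = len(light_factors), len(light_factors[0])
    sat = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        row_sum = 0.0
        for x in range(cols):
            row_sum += light_factors[y][x]
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def region_average(sat, x0, y0, x1, y1):
    """Average light factor over the inclusive rectangle (x0, y0)-(x1, y1)."""
    def at(y, x):
        return sat[y][x] if x >= 0 and y >= 0 else 0.0
    total = at(y1, x1) - at(y0 - 1, x1) - at(y1, x0 - 1) + at(y0 - 1, x0 - 1)
    count = (x1 - x0 + 1) * (y1 - y0 + 1)
    return total / count

shadow_map = [[1.0, 1.0, 0.2], [1.0, 0.2, 0.2], [0.2, 0.2, 0.2]]
sat = summed_area_table(shadow_map)
# Darken a post-render pixel value by the average shadow over a 2x2 region.
shaded = 0.8 * region_average(sat, 1, 1, 2, 2)
print(round(shaded, 3))
```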
  • Publication number: 20230281909
    Abstract: In an image processing system, an image insertion is to be included onto, or relative to, first and second frames, each depicting images of a set of objects of a geometric model. A point association is determined for an object depicted in both the first frame and the second frame, representing reference coordinates in a virtual scene space of a first location on the depicted object, independent of at least one position change, and a mapping of a first image location in the first image to where the first location appears in the first image. A corresponding location in the second image is determined from where the first location on the depicted object appears according to the reference coordinates in the virtual scene space and a second image location on the second image where the first location appears in the second image. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: July 1, 2022
    Publication date: September 7, 2023
    Inventor: Peter M. Hillman
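A hypothetical sketch of the point-association idea: a reference coordinate in virtual scene space for a location on a depicted object is projected into two frames, giving the image location where the same object point appears in each. The pinhole projection and the camera parameters are illustrative assumptions, not the patented mapping.

```python
def project(point, camera_position, focal_length=1.0):
    """Project a 3D scene-space point into 2D image coordinates (simple pinhole)."""
    x, y, z = (p - c for p, c in zip(point, camera_position))
    return (focal_length * x / z, focal_length * y / z)

# Reference coordinate of a location on the depicted object (virtual scene space).
reference_point = (0.5, 0.2, 4.0)

# The object point stays fixed while the camera moves between the two frames.
first_image_location = project(reference_point, camera_position=(0.0, 0.0, 0.0))
second_image_location = project(reference_point, camera_position=(0.3, 0.0, 0.0))

print(first_image_location, second_image_location)
```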
  • Patent number: 11694313
    Abstract: An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and capture information related to pixel color values for multiple depths of the scene, which can be processed to provide reconstruction.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: July 4, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman
  • Patent number: 11689815
    Abstract: An imagery processing system determines alternative pixel color values for pixels of captured imagery, where the alternative pixel color values are obtained from alternative sources. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and that alternative imagery is processed to provide user-selectable alternatives for pixel ranges from the main imagery.
    Type: Grant
    Filed: December 9, 2021
    Date of Patent: June 27, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman
  • Publication number: 20230188701
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, stereoscopic image data of the live action scene having a live actor and the display wall displaying a rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Thereafter, background pixels for the precursor image on the display wall in the stereoscopic image data are moved to generate stereo-displaced pixels using the precursor metadata, the display wall metadata, and/or the image matte. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
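A minimal sketch, assuming a single image row, a binary matte row (1 = live actor, 0 = display wall), and a disparity derived from a stereo baseline, a focal length in pixels, and a virtual depth taken from the precursor metadata. The disparity formula and field names are illustrative assumptions, not the patented method.

```python
def stereo_displace(row, matte_row, disparity):
    """Shift matte-selected background (display-wall) pixels in one row by `disparity`."""
    width = len(row)
    displaced = list(row)
    for x in range(width):
        if matte_row[x] == 0:            # 0 = display wall, 1 = live actor
            src = x - disparity
            if 0 <= src < width:
                displaced[x] = row[src]
    return displaced

# Disparity for the wall region, e.g. baseline * focal / virtual_depth, in pixels.
baseline, focal_px, virtual_depth = 0.065, 1400.0, 30.0
disparity = round(baseline * focal_px / virtual_depth)

row = [10, 20, 30, 40, 50, 60]
matte = [1, 1, 0, 0, 0, 0]               # first two pixels belong to the actor
print(stereo_displace(row, matte, disparity))
```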
  • Publication number: 20230186550
    Abstract: Methods and systems are presented for generating a rendering of a virtual scene of a plurality of virtual scene elements. Rendering can take into account a camera position of a camera in a stage environment that is to be used to capture a captured scene, a display position of a virtual scene display in the stage environment, a set of depth slices, wherein a depth slice of the set of depth slices represents a subregion of the virtual scene space, and a blur factor for the depth slice based at least in part on the camera position, the display position, and a depth value or depth range for the subregion of the virtual scene space represented by the depth slice. Using depth slices can reduce computational efforts. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
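A hypothetical sketch of assigning one blur factor per depth slice from the camera position, the display position, and the slice's depth range. The thin-lens circle-of-confusion model and the numeric camera parameters are illustrative assumptions, not the patented formula.

```python
def blur_factor(camera_to_display, slice_depth, focus_distance,
                focal_length=0.05, aperture=0.028):
    """Approximate circle of confusion for a slice at `slice_depth` behind the display."""
    subject = camera_to_display + slice_depth
    return abs(aperture * focal_length * (subject - focus_distance)
               / (subject * (focus_distance - focal_length)))

camera_to_display = 4.0                          # metres from camera to the display
depth_slices = [(0.0, 2.0), (2.0, 8.0), (8.0, 40.0)]  # virtual depth ranges

for near, far in depth_slices:
    mid = 0.5 * (near + far)                     # representative depth for the slice
    print(round(blur_factor(camera_to_display, mid, focus_distance=5.0), 5))
```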
  • Publication number: 20230188693
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
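A minimal sketch of adjusting the captured image with per-pixel matte values: actor pixels (matte 1) keep the captured values, display-wall pixels (matte 0) take values from a higher-quality replacement wall image assumed to be already resampled into the captured frame's pixel grid. The names and the linear blend are illustrative assumptions, not the patented pipeline.

```python
def composite_replacement(captured, replacement, matte):
    """Keep actor pixels, substitute replacement-wall pixels elsewhere."""
    out = []
    for cap_row, rep_row, matte_row in zip(captured, replacement, matte):
        out.append([m * c + (1.0 - m) * r
                    for c, r, m in zip(cap_row, rep_row, matte_row)])
    return out

captured    = [[0.30, 0.32], [0.55, 0.60]]   # frame with low-quality wall content
replacement = [[0.28, 0.29], [0.10, 0.12]]   # higher-resolution wall re-render
matte       = [[0.0, 0.0], [1.0, 1.0]]       # bottom row is the live actor
print(composite_replacement(captured, replacement, matte))
```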
  • Publication number: 20230188699
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Publication number: 20230186434
    Abstract: Methods and systems are presented for generating a virtual scene usable in a captured scene with focus settings that take into account camera position. Virtual objects displayed in a virtual scene that is presented on a display wall and captured in a scene can be presented in the virtual scene with a focus or defocus that is dependent on a virtual object position in the virtual scene and a position of a camera relative to the display wall. Defocusing of virtual objects can be such that the eventual defocus when captured by the camera corresponds to the defocus of an object distant from the camera by a distance that combines a first distance from the camera to the display wall and a second distance, being a virtual distance in the virtual scene from the virtual object to a virtual camera plane of the virtual scene. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
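A hypothetical sketch of the distance bookkeeping described above: the target defocus for a virtual object shown on the wall corresponds to a real object at (camera-to-wall distance + virtual distance), and the wall render can be pre-blurred by the part the physical camera will not add on its own. The thin-lens model and the subtraction approximation are illustrative assumptions, not the patented method.

```python
def circle_of_confusion(subject, focus, focal_length=0.05, aperture=0.028):
    """Thin-lens circle of confusion for an object at `subject` when focused at `focus`."""
    return abs(aperture * focal_length * (subject - focus)
               / (subject * (focus - focal_length)))

camera_to_wall = 3.5       # metres: physical camera to the display wall
virtual_depth = 12.0       # metres: virtual object behind the wall plane
focus_distance = 3.5       # camera focused on the wall itself

target = circle_of_confusion(camera_to_wall + virtual_depth, focus_distance)
at_wall = circle_of_confusion(camera_to_wall, focus_distance)
pre_blur = max(target - at_wall, 0.0)   # defocus to bake into the wall render
print(round(pre_blur, 5))
```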
  • Patent number: 11677928
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11677923
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Publication number: 20230171507
    Abstract: A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases the dynamic range of the recorded image by modifying the recorded image based on the data structure. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: June 16, 2022
    Publication date: June 1, 2023
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Lena Petrovic
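A minimal sketch, assuming a single-channel image stored as a 2D list and a display that clips values above a threshold. The BrightRegion record and the restore step are illustrative assumptions about how a compact data structure of locations and original values could be used to restore dynamic range in postprocessing; they are not the patented format.

```python
from dataclasses import dataclass

@dataclass
class BrightRegion:
    x: int
    y: int
    original_value: float    # value that exceeded the display threshold

def extract_bright_regions(image, threshold):
    """Record where the input exceeds the threshold; far smaller than the full image."""
    return [BrightRegion(x, y, v)
            for y, row in enumerate(image)
            for x, v in enumerate(row) if v > threshold]

def restore_dynamic_range(recorded, regions):
    """Postprocessing: write the original bright values back into the recording."""
    restored = [row[:] for row in recorded]
    for r in regions:
        restored[r.y][r.x] = r.original_value
    return restored

image     = [[0.4, 3.2], [0.7, 0.5]]     # 3.2 exceeds what the display can show
threshold = 1.0
regions   = extract_bright_regions(image, threshold)
recorded  = [[0.4, 1.0], [0.7, 0.5]]     # camera records the clipped display
print(restore_dynamic_range(recorded, regions))
```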
  • Publication number: 20230171506
    Abstract: A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases the dynamic range of the recorded image by modifying the recorded image based on the data structure.
    Type: Application
    Filed: June 8, 2022
    Publication date: June 1, 2023
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Lena Petrovic
  • Publication number: 20230171508
    Abstract: A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor modifies the region according to predetermined steps, producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
    Type: Application
    Filed: July 1, 2022
    Publication date: June 1, 2023
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz
  • Patent number: 11627297
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising the stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: April 11, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11615755
    Abstract: The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: March 28, 2023
    Assignee: Unity Technologies SF
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Carter Bart Sullivan
  • Patent number: 11593993
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: February 28, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11514654
    Abstract: Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics devices through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including a virtual scene display for displaying a virtual scene.
    Type: Grant
    Filed: December 9, 2021
    Date of Patent: November 29, 2022
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
  • Patent number: 11501468
    Abstract: An image dataset is compressed by combining depth values from pixel depth arrays, wherein the combining criteria are based on object data and/or depth variations of the depth values in a first pixel image value array, and by generating a modified image dataset in which the first pixel image value array, represented in the received image dataset by a first number of image value array samples, is represented in the modified image dataset by a second number of compressed image value array samples, the second number being less than or equal to the first number. (A brief illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: November 15, 2022
    Assignee: Unity Technologies SF
    Inventor: Peter M. Hillman
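A hypothetical sketch of the compression idea: neighbouring samples in one pixel's depth array are combined when their depth variation stays within a tolerance, so the output never has more samples than the input. The merge rule (sum the values, keep the first depth) and the tolerance value are illustrative assumptions, not the patented criteria.

```python
def compress_deep_pixel(samples, depth_tolerance=0.1):
    """Combine (depth, value) samples whose depths lie within `depth_tolerance` of each other."""
    compressed = []
    for depth, value in sorted(samples):
        if compressed and depth - compressed[-1][0] <= depth_tolerance:
            prev_depth, prev_value = compressed[-1]
            compressed[-1] = (prev_depth, prev_value + value)   # merge into the prior sample
        else:
            compressed.append((depth, value))
    return compressed

samples = [(1.00, 0.2), (1.05, 0.1), (1.08, 0.1), (4.00, 0.5)]
print(compress_deep_pixel(samples))    # four samples reduced to two
```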