Patents by Inventor Dejan Momcilovic

Dejan Momcilovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230236319
    Abstract: An active marker relay system is provided to operate responsive active markers coupled to an object in a live action scene for performance capture, via a trigger unit that relays energy pulse information to responsive active markers. Using simple sensors, the responsive active markers sense control energy pulses projected from the trigger unit. In return, the responsive active markers produce energy pulses that emulate at least one characteristic of the control energy pulses, such as a particular pulse rate or wavelength of energy. The reactivity of the responsive active markers to control energy pulses enables simple control of the responsive active markers through the trigger unit.
    Type: Application
    Filed: January 25, 2023
    Publication date: July 27, 2023
    Applicant: Unity Technologies SF
    Inventors: Dejan Momcilovic, Jake Botting
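The relay behavior described in this abstract can be sketched as follows. This is an illustrative model only, not the patented implementation; `ControlPulse` and `ResponsiveActiveMarker` are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class ControlPulse:
    """A control energy pulse projected by the trigger unit."""
    rate_hz: float        # pulse repetition rate
    wavelength_nm: float  # wavelength of the emitted energy

class ResponsiveActiveMarker:
    """Senses control pulses via a simple sensor and re-emits pulses
    that emulate the sensed pulse's characteristics."""

    def __init__(self):
        self.last_sensed = None

    def sense(self, pulse: ControlPulse) -> None:
        # A simple sensor only needs to record the pulse it observed.
        self.last_sensed = pulse

    def emit(self) -> ControlPulse:
        # Emulate the sensed pulse's rate and wavelength.
        if self.last_sensed is None:
            raise RuntimeError("no control pulse sensed yet")
        return ControlPulse(rate_hz=self.last_sensed.rate_hz,
                            wavelength_nm=self.last_sensed.wavelength_nm)
```

Because the marker simply mirrors what it senses, the trigger unit can retune every marker in the scene by changing only its own output.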
  • Patent number: 11710247
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: July 25, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
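The core compositing rule described above, ignoring the neural refinement stage, reduces to a per-pixel nearest-surface test against the depth map. A minimal sketch over flat pixel lists (function name assumed):

```python
def composite_by_depth(live_rgb, live_depth, cg_rgb, cg_depth):
    """Per-pixel depth compositing: for each pixel, keep whichever
    source (live action or computer generated) is nearer the camera."""
    return [lp if ld <= cd else cp
            for lp, ld, cp, cd in zip(live_rgb, live_depth, cg_rgb, cg_depth)]
```

In practice the depth values come from the trained network, so the accuracy of this simple test depends entirely on the quality of the refined depth map.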
  • Publication number: 20230186550
    Abstract: Methods and systems are presented for generating a rendering of a virtual scene of a plurality of virtual scene elements. Rendering can take into account a camera position of a camera in a stage environment that is to be used to capture a captured scene, a display position of a virtual scene display in the stage environment, a set of depth slices, wherein a depth slice of the set of depth slices represents a subregion of the virtual scene space, and a blur factor for the depth slice based at least in part on the camera position, the display position, and a depth value or depth range for the subregion of the virtual scene space represented by the depth slice. Using depth slices can reduce computational efforts.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
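One plausible reading of the per-slice blur factor is a value that grows with how far a slice's effective depth (camera-to-display distance plus the slice's virtual depth) sits from the camera's focus depth. The formula and parameter names below are assumptions for illustration, not the patented computation:

```python
def slice_blur_factor(camera_pos, display_pos, slice_depth,
                      focus_depth, strength=0.1):
    """Hypothetical blur factor for one depth slice: zero when the
    slice's effective depth matches the focus depth, growing linearly
    with the mismatch."""
    camera_to_display = abs(display_pos - camera_pos)
    effective_depth = camera_to_display + slice_depth
    return strength * abs(effective_depth - focus_depth)
```

Because a single blur factor is shared by every virtual element inside a slice, blurring per slice rather than per element is what saves computation.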
  • Publication number: 20230186434
    Abstract: Methods and systems are presented for generating a virtual scene usable in a captured scene with focus settings that take into account camera position. Virtual objects displayed in a virtual scene that is presented on a display wall and captured in a scene can be presented in the virtual scene with a focus or defocus that is dependent on a virtual object position in the virtual scene and a position of a camera relative to the display wall. Defocusing of virtual objects can be such that the eventual defocus, when captured by the camera, corresponds to the defocus of an object whose distance from the camera is the sum of a first distance from the camera to the display wall and a second, virtual distance from the virtual object to a virtual camera plane of the virtual scene.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
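The key idea above is that a virtual object should be defocused as if it were a real object at the combined distance (camera to wall, plus virtual depth behind the wall). A sketch using the standard thin-lens circle-of-confusion formula; applying it here is an assumption, not the patented method:

```python
def effective_distance(camera_to_wall, virtual_depth):
    """Treat a virtual object as if it stood this far from the camera."""
    return camera_to_wall + virtual_depth

def circle_of_confusion(aperture_mm, focal_mm, focus_dist_mm, subject_dist_mm):
    """Standard thin-lens blur-circle diameter for a subject at
    subject_dist_mm when the lens is focused at focus_dist_mm."""
    return (aperture_mm * (focal_mm / (focus_dist_mm - focal_mm))
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm)
```

A virtual object rendered with this amount of blur will, once re-photographed off the wall, read as if it were physically at the effective distance.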
  • Publication number: 20230188701
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, stereoscopic image data of the live action scene having a live actor and the display wall displaying a rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Thereafter, background pixels for the precursor image on the display wall in the stereoscopic image data are moved to generate stereo-displaced pixels using the precursor metadata, the display wall metadata, and/or the image matte.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
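The stereo displacement step can be sketched for a single scanline: wall pixels (per the matte) are shifted horizontally by a disparity while actor pixels stay in place. This is an illustrative simplification with assumed names, not the patented procedure:

```python
def stereo_displace(row, matte_row, disparity):
    """Shift wall pixels horizontally by `disparity` pixels to
    synthesize the displaced stereo view; live-actor pixels are
    left in place. Out-of-range sources fall back to the original."""
    out = []
    for x, label in enumerate(matte_row):
        if label == 'wall':
            src = x - disparity
            out.append(row[src] if 0 <= src < len(row) else row[x])
        else:  # actor pixel: not displaced
            out.append(row[x])
    return out
```

In a real system the disparity would vary per pixel, derived from the display wall metadata (wall depth and camera geometry) rather than being a single constant.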
  • Publication number: 20230188693
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
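The adjustment step above amounts to a matte-guided merge: actor pixels come from the captured frame, wall pixels from the higher-quality replacement render. A minimal sketch over flat pixel lists (function name assumed):

```python
def replace_wall(captured, replacement, actor_matte):
    """Matte-guided merge: keep captured pixels where the matte marks
    the live actor, and substitute the higher-resolution replacement
    render everywhere the display wall shows."""
    return [c if is_actor else r
            for c, r, is_actor in zip(captured, replacement, actor_matte)]
```

The replacement image must be rendered from the same camera viewpoint as the capture, which is why the display wall metadata matters.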
  • Publication number: 20230188699
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11677928
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11677923
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values for a replacement wall image of higher resolution than the precursor image are determined, and the image data of the captured scene is adjusted using the pixel display values and the image matte.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Publication number: 20230171507
    Abstract: A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.
    Type: Application
    Filed: June 16, 2022
    Publication date: June 1, 2023
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Lena Petrovic
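The scheme above clamps over-threshold pixels for display while storing their locations and original values in a compact sidecar, which postprocessing uses to restore the lost dynamic range. A minimal sketch over a flat pixel list (function names assumed); a real pipeline would restore into the camera's recording of the displayed image rather than the displayed image itself:

```python
def encode_bright_regions(image, threshold):
    """Clamp over-threshold pixels for display and build a compact
    sidecar of (index, original_value) pairs. The sidecar is small
    whenever bright regions are sparse."""
    sidecar = [(i, v) for i, v in enumerate(image) if v > threshold]
    displayed = [min(v, threshold) for v in image]
    return displayed, sidecar

def restore_dynamic_range(recorded, sidecar):
    """Postprocessing: write the original bright values back into
    the recorded image using the sidecar."""
    out = list(recorded)
    for i, v in sidecar:
        out[i] = v
    return out
```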
  • Publication number: 20230171506
    Abstract: A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and an original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases dynamic range of the recorded image by modifying the recorded image based on the data structure.
    Type: Application
    Filed: June 8, 2022
    Publication date: June 1, 2023
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Lena Petrovic
  • Publication number: 20230171508
    Abstract: A processor performing postprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. The processor modifies the region according to predetermined steps producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
    Type: Application
    Filed: July 1, 2022
    Publication date: June 1, 2023
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz
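The reversible in-image encoding described above can be sketched per pixel: an over-threshold value is replaced by a pattern built from a sentinel plus the excess over the threshold, and decoding reverses those steps exactly. The sentinel constant and two-element pattern are assumptions for illustration:

```python
MARKER = -1.0  # sentinel assumed never to occur in real pixel data

def encode_pixel(v, threshold):
    """Replace an over-threshold pixel with a detectable pattern
    (marker, excess); in-range pixels pass through unchanged."""
    if v > threshold:
        return (MARKER, v - threshold)
    return (v, 0.0)

def decode_pixel(pair, threshold):
    """Reverse the predetermined steps to recover the original value."""
    v, extra = pair
    if v == MARKER:
        return threshold + extra
    return v
```

The detectability of the pattern is what lets postprocessing find encoded regions in the displayed image without any out-of-band sidecar data.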
  • Patent number: 11636621
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using fixed cameras and drones. In some embodiments, a method configures a plurality of reference cameras to observe at least three known reference points located in the live action scene and to observe one or more reference points associated with one or more moving cameras having unconstrained motion. The method further configures the one or more moving cameras to observe one or more moving objects in the live action scene. The method further receives reference point data in association with one or more reference cameras of the plurality of reference cameras, where the reference point data is based on the at least three known reference points and the one or more reference points associated with the one or more moving cameras.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: April 25, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
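Locating a camera from at least three known reference points is a classic trilateration problem. A 2D sketch given measured distances to three known points; the full system would solve for 3D position and orientation, so this is illustrative only:

```python
def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for the (x, y) position whose distances to the three
    known reference points p1, p2, p3 are r1, r2, r3. Subtracting
    the circle equations pairwise yields a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("reference points are collinear")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The collinearity check mirrors why the method requires at least three well-separated reference points: two points, or three on a line, leave the position ambiguous.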
  • Patent number: 11627297
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising the stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: April 11, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
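One way such a matte could be derived, sketched here as an assumption rather than the patented method: since the display wall is a plane at a known depth, pixels whose measured stereo disparity departs from the wall's expected disparity must belong to something in front of the wall, such as the live actor:

```python
def matte_from_disparity(measured, expected_wall, tol=0.5):
    """Label each pixel 'actor' if its stereo disparity departs from
    the display wall's expected disparity by more than `tol`,
    else 'wall'."""
    return ['actor' if abs(m - expected_wall) > tol else 'wall'
            for m in measured]
```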
  • Patent number: 11615755
    Abstract: The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: March 28, 2023
    Assignee: Unity Technologies SF
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Carter Bart Sullivan
  • Patent number: 11600022
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using drones. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures one or more moving cameras having unconstrained motion to observe one or more moving objects in the live action scene and to observe at least three known reference points associated with the plurality of reference cameras. The method further receives reference point data in association with the one or more moving cameras, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of each moving camera of the one or more moving cameras based on one or more of the reference point data and one or more locations of one or more reference cameras of the plurality of reference cameras.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: March 7, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11593993
    Abstract: A captured scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall is determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: February 28, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
  • Patent number: 11537162
    Abstract: Embodiments provide a wearable article for a performance capture system. In some embodiments, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, where the one or more regions have a first pliability and a second pliability, where the first pliability and the second pliability are different pliabilities, and where at least one of the one or more regions are configured to hold devices in predetermined positions while maintaining shape and respective pliability. In some embodiments, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting one or more reference markers to be used for position determination.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: December 27, 2022
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11514654
    Abstract: Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics device through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including virtual scene display for displaying a virtual scene.
    Type: Grant
    Filed: December 9, 2021
    Date of Patent: November 29, 2022
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
  • Patent number: 11508081
    Abstract: A sealed active marker apparatus of a performance capture system is described to provide protective housing for active marker light components coupled to a strand and attached via a receptacle, to an object, such as via a wearable article, in a live action scene. The receptacle includes a protrusion portion that permits at least one particular wavelength range of light emitted from the enclosed active marker light component, to diffuse in a manner that enables easy detection by a sensor device. A base portion interlocks with a bottom plate of the receptacle to secure the strand within one or more channels. A sealant material coating portions of the apparatus promotes an insulating environment for the active marker light component.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: November 22, 2022
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting