Patents by Inventor Evan M. Goldberg

Evan M. Goldberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240054689
    Abstract: A media enhancement system includes an augmented reality (AR) device having a display, processing hardware, and a memory storing software code. The processing hardware executes the software code to monitor media content including a sequence of moving images displayed on a display screen separate from the AR device, receive playhead data indicating a playhead state of a media playout device playing out the media content, and detect, based on monitoring the media content, one or more images in the sequence of moving images as anchor images. The software code is further executed to obtain, using the anchor images, one or more AR effects associated with the anchor images, and render, based on the playhead data, the AR effects on the display of the AR device, wherein the AR effects are spatially and temporally aligned with the sequence of moving images being displayed on the display screen.
    Type: Application
    Filed: August 15, 2022
    Publication date: February 15, 2024
    Inventors: Evan M. Goldberg, Alexandra Christiansen, Jackson Aaron Rogow, Nicholas Maggio
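The detect-then-align loop in the abstract above can be sketched in a few lines of Python. This is a minimal illustration only: the frame representation (flat pixel lists), the matching threshold, and the `AREffect` record are all hypothetical simplifications, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class AREffect:
    name: str
    start_time: float   # playhead time (seconds) at which the effect begins
    duration: float     # seconds the effect stays active

def frame_distance(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def match_anchor(frame, anchors, threshold=10.0):
    """Return the id of the anchor whose reference frame is closest to
    `frame`, or None if nothing is within `threshold`."""
    best_id, best_d = None, threshold
    for anchor_id, ref in anchors.items():
        d = frame_distance(frame, ref)
        if d < best_d:
            best_id, best_d = anchor_id, d
    return best_id

def active_effects(effects, playhead_time):
    """Select the effects that should be rendered at the reported
    playhead time, keeping AR output temporally aligned with playback."""
    return [e for e in effects
            if e.start_time <= playhead_time < e.start_time + e.duration]
```

In use, a matched anchor would select a set of `AREffect` records, and the playhead data received from the media playout device would drive `active_effects` each frame.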
  • Publication number: 20240054696
    Abstract: A system includes an augmented reality (AR) device having a display, processing hardware, and a memory storing software code. The processing hardware executes the software code to monitor media content including a sequence of images displayed on a display screen separate from the AR device, detect, based on monitoring the media content, an image in the sequence of images as an anchor image, and obtain, using the anchor image, one or more AR effects associated with the anchor image. The processing hardware further executes the software code to determine a position and orientation of the AR device in relation to the display screen, and render, based on that position and orientation, the AR effects on the display of the AR device, where the AR effects include at least one intermediate scale AR effect having a scale intermediate between a display screen scale AR effect and a real-world scale AR effect.
    Type: Application
    Filed: August 15, 2022
    Publication date: February 15, 2024
    Inventors: Evan M. Goldberg, Jackson Aaron Rogow, Alexandra Christiansen
  • Publication number: 20240054690
    Abstract: A system includes an augmented reality (AR) device having a first display, processing hardware, and a memory storing software code. The processing hardware is configured to execute the software code to monitor media content including a sequence of moving images displayed on a second display separate from the AR device and controlled by a media player device, detect, based on monitoring the media content, an image in the sequence of moving images for enhancement by one or more AR effects. The processing hardware further executes the software code to render the one or more AR effects on the first display, and transmit, contemporaneously with rendering the one or more AR effects on the first display, a signal configured to pause or loop the playing of the media content on the second display.
    Type: Application
    Filed: August 15, 2022
    Publication date: February 15, 2024
    Inventors: Evan M. Goldberg, Alexandra Christiansen, Jackson Aaron Rogow, James Voris, Alice Jane Taylor
  • Patent number: 11804007
    Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware is configured to execute the software code to receive a three-dimensional (3D) digital model, surround the 3D digital model with multiple virtual cameras oriented toward the 3D digital model, and generate, using the virtual cameras, multiple renders of the 3D digital model. The processing hardware is further configured to execute the software code to generate a UV texture coordinate space for a surface projection of the 3D digital model, and to transfer, using the multiple renders, lighting color values for each of multiple surface portions of the 3D digital model to the UV texture coordinate space.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 31, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Dane M. Coffey, Siroberto Scerbo, Evan M. Goldberg, Christopher Richard Schroers, Daniel L. Baker, Mark R. Mine, Erika Varis Doggett
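The transfer step described in the abstract above, baking lighting color values seen by surrounding virtual cameras into a UV texture, can be sketched as follows. The camera-selection rule (most front-facing view by normal/direction dot product) and the dictionary-based data layout are illustrative assumptions, not the claimed implementation.

```python
def best_camera(normal, cameras):
    """Pick the camera whose view direction is most opposed to the surface
    normal, i.e., the most front-facing view of that surface portion."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return min(cameras, key=lambda cam: dot(normal, cam["direction"]))

def bake_to_uv(portions, cameras, renders):
    """Transfer the lighting color of each surface portion, as observed in
    the best camera's render, to that portion's UV texture coordinate.
    `renders` maps camera id -> {portion index -> (r, g, b)}."""
    texture = {}
    for pid, portion in enumerate(portions):
        cam = best_camera(portion["normal"], cameras)
        texture[portion["uv"]] = renders[cam["id"]][pid]
    return texture
```

A real pipeline would rasterize into a texture image rather than a dictionary, but the camera-per-portion selection and UV write-back mirror the described flow.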
  • Patent number: 11710272
    Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: July 25, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Dane M. Coffey, Siroberto Scerbo, Daniel L. Baker, Mark R. Mine, Evan M. Goldberg
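The merge step in the abstract above, combining per-camera volumetric perspectives into a single volumetric representation, resembles classic space carving. Below is a toy 2-D sketch under stated assumptions: a small grid, two axis-aligned "cameras" (left and top), and a carving rule where each depth map admits all cells at or beyond the first solid cell. None of this is the claimed implementation.

```python
def perspective_from_left(depth, size):
    """depth[y] = distance from the left edge to the first solid cell in
    row y; cells at or beyond it form this camera's volumetric perspective."""
    return {(x, y) for y in range(size) for x in range(size) if x >= depth[y]}

def perspective_from_top(depth, size):
    """depth[x] = distance from the top edge to the first solid cell in
    column x."""
    return {(x, y) for x in range(size) for y in range(size) if y >= depth[x]}

def merge_perspectives(*perspectives):
    """A cell survives in the merged volume only if every camera's
    perspective allows it (set intersection)."""
    merged = set(perspectives[0])
    for p in perspectives[1:]:
        merged &= p
    return merged
```

With more cameras at arbitrary orientations the same intersection idea applies per voxel, after which the merged volume would be converted to a renderable form such as a mesh.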
  • Publication number: 20230221804
    Abstract: Systems and methods for generating a haptic output response are disclosed. Video content is displayed on a display. A location of a user touch on the display is detected while the video content is being displayed. A region of interest in the video content is determined based on the location of the user touch, and a haptic output response is generated for the user. A characteristic of the haptic output response is determined using one or more characteristics of the region of interest.
    Type: Application
    Filed: February 10, 2023
    Publication date: July 13, 2023
    Inventors: Evan M. Goldberg, Daniel L. Baker, Jackson A. Rogow
  • Patent number: 11656578
    Abstract: Techniques for using holographic imagery for eyeline reference for performers are disclosed. A first computer generated object is identified for display to a first performer at a designated physical position on a set. A first holographic projection of the first computer generated object is generated using a first holographic display. The first holographic display is configured to make the first holographic projection appear, to the first performer, to be located at the designated physical position on the set. One or more images of the performer are captured using an image capture device with a field of view that encompasses both the first performer and the designated physical position on the set. The captured one or more images depict the first performer and do not depict the first holographic projection. The first computer generated object is added to the captured one or more images after the capturing.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: May 23, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan M. Goldberg, Alexa L. Hale
  • Patent number: 11604516
    Abstract: A method includes displaying, on a touchscreen, a video comprising a video frame and determining, based on a saliency map of the video frame, a region of interest in the video frame. The method also includes detecting a touch on a region of the touchscreen while the video frame is displayed and generating a haptic response in response to determining that the region of the touchscreen overlaps with the region of interest.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: March 14, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan M. Goldberg, Daniel L. Baker, Jackson A. Rogow
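The overlap test described in the abstract above, firing a haptic response when a touch region intersects a saliency-derived region of interest, can be sketched as follows. The grid-cell saliency map, fingertip radius, threshold, and intensity rule are hypothetical simplifications, not the patented method.

```python
def region_of_interest(saliency, threshold=0.5):
    """Cells of the saliency map at or above threshold form the ROI."""
    return {cell for cell, s in saliency.items() if s >= threshold}

def touch_region(x, y, radius=1):
    """Cells covered by a fingertip touch centered at (x, y)."""
    return {(x + dx, y + dy) for dx in range(-radius, radius + 1)
                             for dy in range(-radius, radius + 1)}

def haptic_response(saliency, touch_x, touch_y, threshold=0.5):
    """Return a haptic intensity when the touch overlaps the ROI (here,
    the peak saliency under the touch, as an illustrative characteristic),
    or None when there is no overlap."""
    overlap = touch_region(touch_x, touch_y) & region_of_interest(saliency, threshold)
    if not overlap:
        return None
    return max(saliency[cell] for cell in overlap)
```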
  • Patent number: 11587284
    Abstract: In one implementation, a virtual-world simulator includes a computing platform having a hardware processor and a memory storing a software code, a tracking system communicatively coupled to the computing platform, and a projection device communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to obtain a map of a geometry of a real-world venue including the virtual-world simulator, to identify one or more virtual effects for display in the real-world venue, and to use the tracking system to track a moving perspective of one of a user in the real-world venue or a camera in the real-world venue. The hardware processor is further configured to execute the software code to control the projection device to simulate a virtual-world by conforming the identified one or more virtual effects to the geometry of the real-world venue from a present vantage point of the tracked moving perspective.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: February 21, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Dane M. Coffey, Evan M. Goldberg, Steven M. Chapman, Daniel L. Baker, Matthew Deuel, Mark R. Mine
  • Patent number: 11567723
    Abstract: The present disclosure describes a method for displaying supplemental content. The method includes determining environmental characteristics, e.g., wall space, empty areas, colors, etc., for a display environment, determining supplemental content based in part on primary content displayed by a primary display, and displaying the supplemental content in the display environment.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: January 31, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan M. Goldberg, Daniel L. Baker, Steven M. Chapman, Dane M. Coffey, Matthew Deuel, Mark R. Mine
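One part of the method above, choosing where in the display environment the supplemental content can go, can be sketched as a simple placement rule. The rectangle-based region model and "largest fitting empty area" heuristic are illustrative assumptions, not the claimed method.

```python
def choose_region(empty_regions, content_size):
    """Pick the largest empty region (by area) that can fit the
    supplemental content, or None if nothing fits.
    `empty_regions` is a list of {"w": width, "h": height} dicts."""
    w, h = content_size
    fitting = [r for r in empty_regions if r["w"] >= w and r["h"] >= h]
    if not fitting:
        return None
    return max(fitting, key=lambda r: r["w"] * r["h"])
```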
  • Publication number: 20220319104
    Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware is configured to execute the software code to receive a three-dimensional (3D) digital model, surround the 3D digital model with multiple virtual cameras oriented toward the 3D digital model, and generate, using the virtual cameras, multiple renders of the 3D digital model. The processing hardware is further configured to execute the software code to generate a UV texture coordinate space for a surface projection of the 3D digital model, and to transfer, using the multiple renders, lighting color values for each of multiple surface portions of the 3D digital model to the UV texture coordinate space.
    Type: Application
    Filed: March 31, 2021
    Publication date: October 6, 2022
    Inventors: Dane M. Coffey, Siroberto Scerbo, Evan M. Goldberg, Christopher Richard Schroers, Daniel L. Baker, Mark R. Mine, Erika Varis Doggett
  • Publication number: 20220309737
    Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
    Type: Application
    Filed: March 24, 2021
    Publication date: September 29, 2022
    Inventors: Dane M. Coffey, Siroberto Scerbo, Daniel L. Baker, Mark R. Mine, Evan M. Goldberg
  • Publication number: 20220197384
    Abstract: A method includes displaying, on a touchscreen, a video comprising a video frame and determining, based on a saliency map of the video frame, a region of interest in the video frame. The method also includes detecting a touch on a region of the touchscreen while the video frame is displayed and generating a haptic response in response to determining that the region of the touchscreen overlaps with the region of interest.
    Type: Application
    Filed: December 17, 2020
    Publication date: June 23, 2022
    Inventors: Evan M. Goldberg, Daniel L. Baker, Jackson A. Rogow
  • Publication number: 20220076484
    Abstract: In one implementation, a virtual-world simulator includes a computing platform having a hardware processor and a memory storing a software code, a tracking system communicatively coupled to the computing platform, and a projection device communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to obtain a map of a geometry of a real-world venue including the virtual-world simulator, to identify one or more virtual effects for display in the real-world venue, and to use the tracking system to track a moving perspective of one of a user in the real-world venue or a camera in the real-world venue. The hardware processor is further configured to execute the software code to control the projection device to simulate a virtual-world by conforming the identified one or more virtual effects to the geometry of the real-world venue from a present vantage point of the tracked moving perspective.
    Type: Application
    Filed: November 18, 2021
    Publication date: March 10, 2022
    Inventors: Dane M. Coffey, Evan M. Goldberg, Steven M. Chapman, Daniel L. Baker, Matthew Deuel, Mark R. Mine
  • Publication number: 20220051013
    Abstract: Tracking an item of interest through an environment is described. The item is identified in an image of the environment using image processing, and its successive locations are tracked through the environment using passive optical tracking. When a user wants to know the item's location, a location image is projected onto the item's position, allowing the user to view its current location.
    Type: Application
    Filed: August 11, 2020
    Publication date: February 17, 2022
    Inventors: Evan M. Goldberg, Mark R. Mine, Matthew Deuel, Daniel L. Baker, Dane M. Coffey, Steven M. Chapman
  • Publication number: 20220026849
    Abstract: Techniques for using holographic imagery for eyeline reference for performers are disclosed. A first computer generated object is identified for display to a first performer at a designated physical position on a set. A first holographic projection of the first computer generated object is generated using a first holographic display. The first holographic display is configured to make the first holographic projection appear, to the first performer, to be located at the designated physical position on the set. One or more images of the performer are captured using an image capture device with a field of view that encompasses both the first performer and the designated physical position on the set. The captured one or more images depict the first performer and do not depict the first holographic projection. The first computer generated object is added to the captured one or more images after the capturing.
    Type: Application
    Filed: July 22, 2020
    Publication date: January 27, 2022
    Inventors: Evan M. Goldberg, Alexa L. Hale
  • Publication number: 20220020204
    Abstract: In one implementation, a virtual-world simulator includes a computing platform having a hardware processor and a memory storing a software code, a tracking system communicatively coupled to the computing platform, and a projection device communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to obtain a map of a geometry of a real-world venue including the virtual-world simulator, to identify one or more virtual effects for display in the real-world venue, and to use the tracking system to track a moving perspective of one of a user in the real-world venue or a camera in the real-world venue. The hardware processor is further configured to execute the software code to control the projection device to simulate a virtual-world by conforming the identified one or more virtual effects to the geometry of the real-world venue from a present vantage point of the tracked moving perspective.
    Type: Application
    Filed: July 15, 2020
    Publication date: January 20, 2022
    Inventors: Dane M. Coffey, Evan M. Goldberg, Steven M. Chapman, Daniel L. Baker, Matthew Deuel, Mark R. Mine
  • Patent number: 11210843
    Abstract: In one implementation, a virtual-world simulator includes a computing platform having a hardware processor and a memory storing a software code, a tracking system communicatively coupled to the computing platform, and a projection device communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to obtain a map of a geometry of a real-world venue including the virtual-world simulator, to identify one or more virtual effects for display in the real-world venue, and to use the tracking system to track a moving perspective of one of a user in the real-world venue or a camera in the real-world venue. The hardware processor is further configured to execute the software code to control the projection device to simulate a virtual-world by conforming the identified one or more virtual effects to the geometry of the real-world venue from a present vantage point of the tracked moving perspective.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: December 28, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Dane M. Coffey, Evan M. Goldberg, Steven M. Chapman, Daniel L. Baker, Matthew Deuel, Mark R. Mine
  • Publication number: 20210373834
    Abstract: The present disclosure describes a method for displaying supplemental content. The method includes determining environmental characteristics, e.g., wall space, empty areas, colors, etc., for a display environment, determining supplemental content based in part on primary content displayed by a primary display, and displaying the supplemental content in the display environment.
    Type: Application
    Filed: April 6, 2021
    Publication date: December 2, 2021
    Inventors: Evan M. Goldberg, Daniel L. Baker, Steven M. Chapman, Dane M. Coffey, Matthew Deuel, Mark R. Mine
  • Patent number: 9390538
    Abstract: The present application includes a computer-implemented method with at least two modes for analyzing a stereoscopic image corresponding to a two-dimensional image. The method includes analyzing one or more layers of the two-dimensional image to determine a depth pixel offset for every pixel in the image and creating, by the processing element, a depth map, such as a gray scale map, by coloring every pixel a shade based on its respective depth pixel offset. The method further includes displaying on a display an output image corresponding to the stereoscopic image, receiving a first user selection corresponding to a first depth pixel offset, determining a plurality of pixels of the output image corresponding to the first depth pixel offset, and applying a first identifier to those pixels on the output image.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: July 12, 2016
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan M. Goldberg, Joseph W. Longson, Robert M. Neuman, Matthew F. Schnittker, Tara Handy Turner
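The depth-map coloring and offset-selection steps in the abstract above can be sketched in a few lines. The 8-bit gray mapping (0 = zero offset, 255 = maximum offset) and the flat-list pixel layout are illustrative assumptions, not the patented method.

```python
def depth_to_gray(offsets, max_offset):
    """Color every pixel an 8-bit gray shade proportional to its depth
    pixel offset, producing a gray scale depth map."""
    return [round(255 * o / max_offset) for o in offsets]

def pixels_at_offset(offsets, selected):
    """Indices of pixels matching the user-selected depth pixel offset,
    ready to receive an identifier on the output image."""
    return [i for i, o in enumerate(offsets) if o == selected]
```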