Patents by Inventor Steven M. Chapman

Steven M. Chapman has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10898798
    Abstract: An air flow generator may be implemented on an augmented reality (AR) or virtual reality (VR) controller or head-mounted display (HMD) through which an AR or VR experience is presented. Based on the content upon which the AR or VR experience is based, air flow effects can be provided by the air flow generator. In particular, desired air flow effect parameters, based on or obtained from the content, can be used to enhance the AR or VR experience by generating air flow directed at a user of the HMD. The air flow generated by the air flow generator can be further enhanced by the addition of liquid and/or scented additives.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: January 26, 2021
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven M. Chapman, Javier Soto, Mehul Patel, Joseph Popp, Calis Agyemang
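    As a rough illustration only of the idea described in the abstract above (driving air flow effects from parameters carried in the content), and not of the patented implementation, the Python sketch below maps a hypothetical per-scene cue (wind strength, optional scent, mist flag) to fan and atomizer settings. All class names, fields, and value ranges are assumptions.
    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AirFlowCue:
        """Hypothetical per-scene effect parameters carried in the AR/VR content."""
        wind_speed: float      # desired wind strength, 0.0 to 1.0
        scent: Optional[str]   # optional scent cartridge identifier
        mist: bool             # whether to add a fine liquid mist

    def cue_to_settings(cue: AirFlowCue, max_rpm: int = 3000) -> dict:
        """Translate a content cue into concrete actuator settings (illustrative only)."""
        speed = max(0.0, min(1.0, cue.wind_speed))     # clamp to a safe range
        return {
            "fan_rpm": int(speed * max_rpm),
            "scent_cartridge": cue.scent,
            "mist_enabled": cue.mist and speed > 0.1,  # only mist when air is moving
        }

    if __name__ == "__main__":
        print(cue_to_settings(AirFlowCue(wind_speed=0.6, scent="pine", mist=True)))
    ```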
  • Publication number: 20200402264
    Abstract: According to one implementation, a system for validating media content includes a computing platform having a hardware processor and a system memory storing a media content validation software code. The hardware processor is configured to execute the media content validation software code to search the media content for a geometrically encoded metadata structure. When the geometrically encoded metadata structure is detected, the hardware processor is further configured to execute the media content validation software code to identify an original three-dimensional (3D) geometry of the detected geometrically encoded metadata structure, to extract metadata from the detected structure, to decode the extracted metadata based on the identified original 3D geometry, and to obtain a validation status of the media content based on the decoded metadata.
    Type: Application
    Filed: June 21, 2019
    Publication date: December 24, 2020
    Inventors: Steven M. Chapman, Todd P. Swanson, Mehul Patel, Joseph Popp, Ty Popko
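    To make the validation flow in the abstract above concrete, here is a minimal Python sketch of the same general sequence: find an embedded structure, identify its geometry, decode the payload using that geometry, and report a validation status. The structure representation, the geometry-dependent bit ordering, and every function name are invented for illustration and are not the patented encoding.
    ```python
    from typing import Optional, Tuple

    # Toy stand-in: a "geometrically encoded metadata structure" is modeled as a
    # (geometry_id, payload_bits) pair; real detection would operate on the media itself.
    def find_metadata_structure(media: dict) -> Optional[Tuple[str, str]]:
        """Search the media for an embedded structure (illustrative placeholder)."""
        return media.get("embedded_structure")

    def decode_payload(payload_bits: str, geometry_id: str) -> str:
        """Decode the extracted bits using the identified original 3D geometry.
        Assumed scheme: the geometry id selects a bit ordering."""
        bits = payload_bits[::-1] if geometry_id == "tetrahedron" else payload_bits
        return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

    def validate(media: dict, expected_tag: str) -> str:
        structure = find_metadata_structure(media)
        if structure is None:
            return "unvalidated: no metadata structure found"
        geometry_id, payload = structure
        return "valid" if decode_payload(payload, geometry_id) == expected_tag else "invalid"

    if __name__ == "__main__":
        payload = "".join(f"{ord(c):08b}" for c in "OK")[::-1]   # pre-reversed for the demo
        media = {"embedded_structure": ("tetrahedron", payload)}
        print(validate(media, expected_tag="OK"))                # -> valid
    ```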
  • Patent number: 10832469
    Abstract: Methods, systems, and computer readable media related to generating a three-dimensional model of a target object. A first source image, of a plurality of source images of a target object, is analyzed to identify a first region of the first image, the first region having attributes meeting one or more pre-defined criteria. The first region is marked for exclusion from use in generating a three-dimensional model of the target object. The three-dimensional model of the target object is generated using the plurality of source images. The marked first region is excluded in the generation of the three-dimensional model.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: November 10, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven M. Chapman, Steven T. Kosakura, Joseph M. Popp, Mehul A. Patel
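    Below is a hedged sketch of the region-exclusion idea described above: a stand-in criterion (blown-out local brightness) marks blocks of a source image for exclusion, and a reconstruction step would then ignore the masked pixels when building the 3D model. The criterion, window size, and function names are assumptions, not the patented method.
    ```python
    import numpy as np

    def exclusion_mask(image: np.ndarray, brightness_threshold: float = 240.0,
                       window: int = 8) -> np.ndarray:
        """Mark regions whose local mean brightness exceeds a threshold
        (a stand-in criterion, e.g. blown-out reflective areas) for exclusion."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        for y in range(0, h, window):
            for x in range(0, w, window):
                if image[y:y + window, x:x + window].mean() > brightness_threshold:
                    mask[y:y + window, x:x + window] = True
        return mask

    def usable_pixels(images):
        """Yield (image, keep_mask) pairs; a reconstruction step would then ignore
        masked pixels when generating the 3D model."""
        for img in images:
            yield img, ~exclusion_mask(img)

    if __name__ == "__main__":
        frame = np.random.randint(0, 255, size=(64, 64)).astype(float)
        frame[:16, :16] = 255.0                      # simulate a blown-out region
        img, keep = next(usable_pixels([frame]))
        print(f"excluded {(~keep).sum()} of {keep.size} pixels")
    ```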
  • Patent number: 10775838
    Abstract: Some implementations of the disclosure are directed to automatically rotating displays to display media content based on metadata extracted from the media content that provides an indication of a target display orientation to display the media content. In one implementation, a method includes: detecting media content for display on a display, wherein the display is mounted on a rotatable display mount; extracting metadata from the detected media content, the extracted metadata providing an indication of a target display orientation to display the media content; using at least the extracted metadata, automatically causing the rotatable display mount to rotate the display to the target orientation; and displaying the media content on the rotated display.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: September 15, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Mehul Patel, Steven M. Chapman, Joseph Popp, Matthew Deuel
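    The method above reduces to three steps: read an orientation tag from the media's metadata, rotate the mount, and display. The sketch below walks through that flow with a made-up RotatableMount driver and a simple orientation-to-angle table; none of these interfaces come from the patent.
    ```python
    from dataclasses import dataclass

    @dataclass
    class MediaItem:
        path: str
        metadata: dict   # e.g. {"target_orientation": "portrait"}

    class RotatableMount:
        """Hypothetical mount driver; a real one would talk to motor hardware."""
        def __init__(self) -> None:
            self.angle = 0
        def rotate_to(self, angle: int) -> None:
            self.angle = angle
            print(f"mount rotated to {angle} degrees")

    ORIENTATION_TO_ANGLE = {"landscape": 0, "portrait": 90}

    def show(media: MediaItem, mount: RotatableMount) -> None:
        # Extract the target orientation from the media's metadata, defaulting to landscape.
        orientation = media.metadata.get("target_orientation", "landscape")
        mount.rotate_to(ORIENTATION_TO_ANGLE.get(orientation, 0))
        print(f"displaying {media.path} in {orientation}")

    if __name__ == "__main__":
        show(MediaItem("poster.mp4", {"target_orientation": "portrait"}), RotatableMount())
    ```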
  • Patent number: 10776987
    Abstract: The disclosure provides for a system of markers and methods of using the system of markers to provide a precise scene scale reference for captured aerial images. Each of the markers may include one or more pairs of aligned collimated light emitters, where each pair of light emitters is configured to emit two light beams that converge at a known distance from the marker. When two or more markers are used, the system of markers may be aligned in a unique physical orientation to form a shape of known dimensions (e.g., a line, a triangle, or a square) that provides an accurate scene scale reference for captured images.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: September 15, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven M. Chapman, Mehul Patel, Joseph Popp, Steven Kosakura
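    Once such markers establish a known physical separation in an image, scene scale reduces to a ratio. The sketch below shows only that downstream computation (metres per pixel from two marker positions a known distance apart); it does not model the collimated-beam markers themselves, and the coordinates are made up.
    ```python
    import math

    def metres_per_pixel(marker_a_px, marker_b_px, known_separation_m):
        """Scale factor from two markers a known physical distance apart."""
        return known_separation_m / math.dist(marker_a_px, marker_b_px)

    def measure(px_point_1, px_point_2, scale):
        """Convert a pixel-space distance in the same image into metres."""
        return math.dist(px_point_1, px_point_2) * scale

    if __name__ == "__main__":
        scale = metres_per_pixel((120, 340), (620, 340), known_separation_m=5.0)
        print(f"{measure((120, 340), (370, 340), scale):.2f} m")   # -> 2.50 m
    ```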
  • Patent number: 10728494
    Abstract: According to one implementation, a system for differentially transforming video content includes a computing platform having a hardware processor and a system memory storing a video reformatting software code. The hardware processor executes the video reformatting software code to receive an input video file including video content formatted for a first set of coordinates, and to detect one or more principal features depicted in the video content based on predetermined principal feature identification data corresponding to the video content. The hardware processor further executes the video reformatting software code to differentially map the video content to a second set of coordinates to produce reformatted video content. The resolution of the one or more principal features is enhanced relative to other features depicted in the reformatted video content.
    Type: Grant
    Filed: February 20, 2017
    Date of Patent: July 28, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Christopher S. Taylor, Steven M. Chapman
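    As a simplified, one-dimensional illustration of differential mapping, the sketch below resamples a scanline so that a principal-feature interval receives a fixed share of the output samples and is therefore represented at higher resolution than the surrounding regions. The resampling scheme and parameters are assumptions, not the patented coordinate transform.
    ```python
    import numpy as np

    def differential_map(row: np.ndarray, feature: slice, out_len: int,
                         feature_share: float = 0.5) -> np.ndarray:
        """Resample one scanline so the feature interval receives feature_share of
        the output samples regardless of its input width (illustrative only)."""
        left, feat, right = row[: feature.start], row[feature], row[feature.stop:]
        n_feat = int(out_len * feature_share)
        n_rest = out_len - n_feat
        n_left = int(n_rest * len(left) / max(1, len(left) + len(right)))
        n_right = n_rest - n_left

        def resample(seg: np.ndarray, n: int) -> np.ndarray:
            if n == 0 or len(seg) == 0:
                return np.empty(0)
            return np.interp(np.linspace(0, len(seg) - 1, n), np.arange(len(seg)), seg)

        return np.concatenate([resample(left, n_left),
                               resample(feat, n_feat),
                               resample(right, n_right)])

    if __name__ == "__main__":
        line = np.arange(100, dtype=float)
        out = differential_map(line, feature=slice(40, 60), out_len=100)
        print(len(out))   # 100 samples, ~50 of them covering the 20-pixel feature
    ```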
  • Patent number: 10728430
    Abstract: Systems and methods for displaying object features via an AR device are disclosed. The method may include receiving, from a transmitter, a signal from an object on a movie set. The signal may specify object data. The object may be a prop. The object data may specify one or more features corresponding to the prop. The method may include generating, with one or more physical computer processors and one or more AR components, a representation of the one or more features, using visual effects to depict at least some of the object data. The method may include displaying, via the AR device, the representation over a view of the prop.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: July 28, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Mark R. Mine, Steven M. Chapman, Alexa L. Hale, Calis O. Agyemang, Joseph M. Popp
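    A minimal sketch of the data flow described above, assuming (purely for illustration) that the prop's transmitter broadcasts a JSON payload: decode the payload and turn its feature list into text annotations that an AR renderer could overlay on the viewer's sight line to the prop. The payload format and field names are hypothetical.
    ```python
    import json

    def parse_prop_signal(raw: bytes) -> dict:
        """Decode a hypothetical JSON payload broadcast by a tagged prop."""
        return json.loads(raw.decode("utf-8"))

    def build_overlay(prop_data: dict) -> list:
        """Turn the prop's feature list into simple text annotations that an AR
        renderer could draw over the view of the prop."""
        return [f"{prop_data['name']}: {feature}" for feature in prop_data.get("features", [])]

    if __name__ == "__main__":
        signal = json.dumps({"name": "hero sword",
                             "features": ["hollow hilt", "breakaway blade"]}).encode()
        for label in build_overlay(parse_prop_signal(signal)):
            print(label)
    ```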
  • Publication number: 20200225696
    Abstract: Some implementations of the disclosure are directed to automatically rotating displays to display media content based on metadata extracted from the media content that provides an indication of a target display orientation to display the media content. In one implementation, a method includes: detecting media content for display on a display, wherein the display is mounted on a rotatable display mount; extracting metadata from the detected media content, the extracted metadata providing an indication of a target display orientation to display the media content; using at least the extracted metadata, automatically causing the rotatable display mount to rotate the display to the target orientation; and displaying the media content on the rotated display.
    Type: Application
    Filed: January 11, 2019
    Publication date: July 16, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Mehul Patel, Steven M. Chapman, Joseph Popp, Matthew Deuel
  • Patent number: 10713831
    Abstract: There are provided systems and methods for providing event enhancement using augmented reality (AR) effects. In one implementation, such a system includes a computing platform having a hardware processor and a memory storing an AR effect generation software code. The hardware processor is configured to execute the AR effect generation software code to receive venue description data corresponding to an event venue, to identify the event venue based on the venue description data, and to identify an event scheduled to take place at the event venue. The hardware processor is further configured to execute the AR effect generation software code to generate one or more AR enhancement effect(s) based on the event and the event venue, and to output the AR enhancement effect(s) for rendering on a display of a wearable AR device during the event.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: July 14, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Mark Arana, Steven M. Chapman, Michael DeValue, Michael P. Goslin
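    The pipeline above can be read as a chain of lookups: venue description to venue, venue plus date to scheduled event, event to enhancement effects. The sketch below uses small hypothetical tables to show that chain; the table contents and keys are invented.
    ```python
    from datetime import datetime

    # Hypothetical lookup tables standing in for the system's venue and schedule data.
    VENUES = {"bowl-shaped amphitheatre, hollywood": "Hollywood Bowl"}
    SCHEDULE = {("Hollywood Bowl", "2020-07-04"): "fireworks concert"}
    EFFECTS = {"fireworks concert": ["synchronized AR fireworks", "lyric captions"]}

    def enhancements_for(venue_description: str, when: datetime) -> list:
        venue = VENUES.get(venue_description.lower())
        if venue is None:
            return []
        event = SCHEDULE.get((venue, when.strftime("%Y-%m-%d")))
        return EFFECTS.get(event, [])

    if __name__ == "__main__":
        print(enhancements_for("Bowl-shaped amphitheatre, Hollywood", datetime(2020, 7, 4)))
    ```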
  • Publication number: 20200201422
    Abstract: One or more embodiments of the present disclosure include a system for providing dynamic virtual reality ground effects. The system includes a user interface surface and multiple motors coupled to the user interface surface. At least one of the motors is coupled to a virtual reality component of an electronic device. A first motor of the multiple motors is driven by movement of the user interface surface and is used to generate a feedback electrical signal in response to the movement of the user interface surface. A second motor of the multiple motors is driven using the feedback electrical signal.
    Type: Application
    Filed: December 19, 2018
    Publication date: June 25, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Steven M. Chapman, Joseph Popp, Alice Taylor, Samy Segura, Mehul Patel
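    As a toy model of the feedback loop described above (and in the corresponding granted patent 10671154 listed below), the sketch treats the first motor as a generator whose back-EMF tracks movement of the interface surface and drives the second motor with a scaled copy of that signal. The motor constant, gain, and interfaces are all invented.
    ```python
    class Motor:
        """Toy motor model: as a generator it reports a voltage proportional to how
        fast its shaft is turned; as a drive it accepts a command voltage."""
        def __init__(self, name: str) -> None:
            self.name = name
            self.command = 0.0
        def back_emf(self, shaft_speed: float, k_e: float = 0.05) -> float:
            return k_e * shaft_speed
        def drive(self, volts: float) -> None:
            self.command = volts
            print(f"{self.name}: driven at {volts:.2f} V")

    def ground_effect_step(surface_speed: float, sensing: Motor, actuating: Motor,
                           gain: float = 2.0) -> None:
        """One control step: movement of the interface surface spins the sensing
        motor, and the resulting feedback voltage (scaled) drives the second motor."""
        actuating.drive(gain * sensing.back_emf(surface_speed))

    if __name__ == "__main__":
        sensing, actuating = Motor("sensing"), Motor("actuating")
        for speed in (0.0, 10.0, 25.0):          # user steps on / moves the surface
            ground_effect_step(speed, sensing, actuating)
    ```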
  • Patent number: 10671154
    Abstract: One or more embodiments of the present disclosure include a system for providing dynamic virtual reality ground effects. The system includes a user interface surface and multiple motors coupled to the user interface surface. At least one of the motors is coupled to a virtual reality component of an electronic device. A first motor of the multiple motors is driven by movement of the user interface surface and is used to generate a feedback electrical signal in response to the movement of the user interface surface. A second motor of the multiple motors is driven using the feedback electrical signal.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: June 2, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven M. Chapman, Joseph Popp, Alice Taylor, Samy Segura, Mehul Patel
  • Patent number: 10665023
    Abstract: Systems, devices, and methods are disclosed for generating improved AR content. An electronic device includes circuitry coupled to a memory storing instructions that, when executed, cause the circuitry to obtain frame data for a frame captured using a camera. The frame data includes a level of focus for one or more frame objects in the frame. The circuitry is caused to associate an augmented reality object with at least one of the one or more frame objects. The circuitry is caused to determine a fit factor between a level of focus of the augmented reality object and the level of focus of the at least one of the frame objects associated with the augmented reality object. Additionally, the circuitry is caused to, if the fit factor does not satisfy a threshold, apply a decrease to the level of focus of the augmented reality object in order to generate an increased fit factor.
    Type: Grant
    Filed: December 26, 2017
    Date of Patent: May 26, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Mehul Patel, Steven M. Chapman, Benjamin F. Havey, Joseph Popp
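    Using plain numbers as stand-ins for measured levels of focus, the sketch below shows one way such a fit factor and adjustment could behave: when the AR object's focus differs too much from its anchor frame object's, the AR focus is decreased just far enough to bring the fit factor up to the threshold. The metric and threshold are assumptions, not the patented computation.
    ```python
    def fit_factor(ar_focus: float, frame_focus: float) -> float:
        """Closer focus levels give a fit factor nearer 1.0 (illustrative metric)."""
        return 1.0 - abs(ar_focus - frame_focus)

    def match_focus(ar_focus: float, frame_focus: float, threshold: float = 0.9) -> float:
        """If the fit factor misses the threshold, decrease the AR object's level of
        focus toward the frame object's level just far enough to reach it."""
        if fit_factor(ar_focus, frame_focus) >= threshold:
            return ar_focus
        softened = frame_focus + (1.0 - threshold)   # the sharpest value that still fits
        return min(ar_focus, max(frame_focus, softened))

    if __name__ == "__main__":
        # A sharp AR object (0.95) anchored to a slightly defocused frame object (0.60).
        adjusted = match_focus(ar_focus=0.95, frame_focus=0.60)
        print(adjusted, fit_factor(adjusted, 0.60))  # -> roughly 0.70 and a fit factor of 0.9
    ```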
  • Patent number: 10650712
    Abstract: Systems and methods are provided for presenting visual media on a structure having a plurality of unordered light sources, e.g., fiber optic light sources, light emitting diodes (LEDs), etc. Visual media can be created based on a computer model of the structure. Images of the structure can be analyzed to determine the location of each of the light sources. A lookup table can be generated based on the image analysis, and used to correlate pixels of the visual media to one or more of the actual light sources. A visual media artist or designer need not have prior knowledge of the order/layout of the light sources on the structure in order to create visual media to be presented thereon.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: May 12, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven M. Chapman, Joseph Popp, Mehul Patel
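    The calibration idea above can be sketched briefly: light one source at a time, find the brightest pixel in each capture to build the lookup table, then sample the visual media at those calibrated positions. NumPy arrays stand in for camera frames; positions and sizes are fabricated for the demo.
    ```python
    import numpy as np

    def locate_light_sources(capture_frames) -> dict:
        """capture_frames[i] is a camera image taken with only source i lit.
        Returns a lookup table: source index -> (row, col) of its brightest pixel."""
        table = {}
        for i, frame in enumerate(capture_frames):
            table[i] = tuple(int(v) for v in np.unravel_index(np.argmax(frame), frame.shape))
        return table

    def drive_sources(video_frame: np.ndarray, lookup: dict) -> dict:
        """Sample the visual media at each source's calibrated location."""
        return {i: float(video_frame[pos]) for i, pos in lookup.items()}

    if __name__ == "__main__":
        h, w = 32, 32
        positions = [(5, 7), (20, 3), (12, 28)]          # unknown to the media designer
        rng = np.random.default_rng(0)
        captures = []
        for pos in positions:
            img = rng.uniform(0.0, 0.1, size=(h, w))     # dim background noise
            img[pos] = 1.0                               # only this source is lit
            captures.append(img)
        lookup = locate_light_sources(captures)
        media = np.linspace(0.0, 1.0, h * w).reshape(h, w)   # a gradient "video" frame
        print(lookup)
        print(drive_sources(media, lookup))
    ```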
  • Patent number: 10616547
    Abstract: A pixel projection component for projecting a multi-vantage point light-field includes a distal layer with a plurality of apertures disposed therein, each of the plurality of apertures traversing the distal layer between a convex outer surface and a concave inner surface, and an intermediate layer comprising a plurality of light guides, the intermediate layer being mechanically coupled to the distal layer such that a proximal end of each light guide of the plurality of light guides is oriented to accept light transmissions from a corresponding light source and a distal end of each light guide is oriented to transmit the light transmission through a corresponding aperture of the plurality of apertures, such that the light is directed to targeted vantage points corresponding to the corresponding aperture, and causing different light transmissions to be transmitted to different vantage points.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: April 7, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven M. Chapman, Joseph Popp, Steven Kosakura, James Bumgardner, Mehul Patel
  • Publication number: 20200086208
    Abstract: Systems, methods, and devices are disclosed for tracking physical objects using a passive reflective object. A computer-implemented method includes obtaining a location profile derived from content capturing a passive object having a reflective surface reflecting one or more real-world objects. The passive object is attached to a physical object. The method further includes transmitting the location profile to a simulation device. The method further includes generating a virtual representation of the physical object based on the location profile of the passive object. The method further includes presenting the virtual representation in a simulation experience.
    Type: Application
    Filed: September 17, 2018
    Publication date: March 19, 2020
    Applicant: Disney Enterprises, Inc.
    Inventor: Steven M. Chapman
  • Publication number: 20200067939
    Abstract: The present application discloses computing platforms and methods for performing location-based restriction of content transmission. In one implementation, such a computing platform includes a hardware processor and a memory storing a content protection software code. The hardware processor is configured to execute the content protection software code to obtain a media content including a cue for restricting broadcast of the media content, and to detect the cue in the media content. The hardware processor is further configured to execute the content protection software code to interpret the cue to identify a usage rule constraining the broadcast of the media content, and to restrict the broadcast of the media content based on the usage rule.
    Type: Application
    Filed: August 24, 2018
    Publication date: February 27, 2020
    Inventors: Mark Arana, Steven M. Chapman
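    A minimal sketch of cue-driven restriction, assuming hypothetical cue identifiers and a small usage-rule table: detect the cue in the media, look up the corresponding rule, and decide whether broadcast is allowed at the current location. None of the identifiers or rule fields come from the application.
    ```python
    from typing import Optional

    # Hypothetical usage rules keyed by cue identifier.
    USAGE_RULES = {
        "no-broadcast-stadium": {"blocked_locations": {"stadium"}},
        "delay-30s": {"delay_seconds": 30},
    }

    def detect_cue(media: dict) -> Optional[str]:
        """Stand-in for detecting an embedded cue (e.g. a watermark) in the media."""
        return media.get("cue")

    def may_broadcast(media: dict, location: str) -> bool:
        cue = detect_cue(media)
        if cue is None:
            return True                      # no cue, no restriction
        rule = USAGE_RULES.get(cue, {})
        return location not in rule.get("blocked_locations", set())

    if __name__ == "__main__":
        clip = {"title": "pre-show feed", "cue": "no-broadcast-stadium"}
        print(may_broadcast(clip, "stadium"))   # False: restricted at this location
        print(may_broadcast(clip, "studio"))    # True
    ```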
  • Publication number: 20200043235
    Abstract: There are provided systems and methods for performing image compensation using image enhancement effects. In one implementation, such a system includes a computing platform having a hardware processor and a memory storing an image compensation software code. The hardware processor is configured to execute the image compensation software code to receive image data corresponding to an event being viewed by a viewer in a venue, the image data obtained by a wearable augmented reality (AR) device worn by the viewer, and to detect a deficiency in an image included in the image data. The hardware processor is further configured to execute the image compensation software code to generate one or more image enhancement effect(s) for compensating for the deficiency in the image and to output the image enhancement effect(s) for rendering on a display of the wearable AR device while the viewer is viewing the event.
    Type: Application
    Filed: August 3, 2018
    Publication date: February 6, 2020
    Inventors: Steven M. Chapman, Todd P. Swanson, Joseph Popp, Samy Segura, Mehul Patel
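    As one hedged example of the "detect a deficiency, then generate a compensating effect" flow described above, the sketch below flags an underexposed frame and produces a gain-corrected version that a headset could render in place of the deficient view. The deficiency check, target mean, and thresholds are invented for illustration.
    ```python
    import numpy as np

    def detect_deficiency(frame: np.ndarray, min_mean: float = 0.35):
        """Flag a frame as deficient if it is too dark on average (stand-in check)."""
        mean = float(frame.mean())
        return ("underexposed", mean) if mean < min_mean else None

    def enhancement_effect(frame: np.ndarray, target_mean: float = 0.5) -> np.ndarray:
        """Produce a gain-corrected version of the frame; an AR headset would render
        this (or an overlay derived from it) to compensate for the deficiency."""
        gain = target_mean / max(float(frame.mean()), 1e-6)
        return np.clip(frame * gain, 0.0, 1.0)

    if __name__ == "__main__":
        dark_frame = np.full((4, 4), 0.2)
        problem = detect_deficiency(dark_frame)
        if problem is not None:
            print("deficiency:", problem)
            print("compensated mean:", enhancement_effect(dark_frame).mean())
    ```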
  • Publication number: 20200043227
    Abstract: Methods, systems, and computer readable media related to generating a three-dimensional model of a target object. A first source image, of a plurality of source images of a target object, is analyzed to identify a first region of the first image, the first region having attributes meeting one or more pre-defined criteria. The first region is marked for exclusion from use in generating a three-dimensional model of the target object. The three-dimensional model of the target object is generated using the plurality of source images. The marked first region is excluded in the generation of the three-dimensional model.
    Type: Application
    Filed: August 6, 2018
    Publication date: February 6, 2020
    Inventors: Steven M. Chapman, Steven T. Kosakura, Joseph M. Popp, Mehul A. Patel
  • Publication number: 20200027257
    Abstract: There are provided systems and methods for providing event enhancement using augmented reality (AR) effects. In one implementation, such a system includes a computing platform having a hardware processor and a memory storing an AR effect generation software code. The hardware processor is configured to execute the AR effect generation software code to receive venue description data corresponding to an event venue, to identify the event venue based on the venue description data, and to identify an event scheduled to take place at the event venue. The hardware processor is further configured to execute the AR effect generation software code to generate one or more AR enhancement effect(s) based on the event and the event venue, and to output the AR enhancement effect(s) for rendering on a display of a wearable AR device during the event.
    Type: Application
    Filed: June 10, 2019
    Publication date: January 23, 2020
    Inventors: Mark Arana, Steven M. Chapman, Michael DeValue, Michael P. Goslin
  • Publication number: 20200005688
    Abstract: Systems and methods are provided for presenting visual media on a structure having a plurality of unordered light sources, e.g., fiber optic light sources, light emitting diodes (LEDs), etc. Visual media can be created based on a computer model of the structure. Images of the structure can be analyzed to determine the location of each of the light sources. A lookup table can be generated based on the image analysis, and used to correlate pixels of the visual media to one or more of the actual light sources. A visual media artist or designer need not have prior knowledge of the order/layout of the light sources on the structure in order to create visual media to be presented thereon.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 2, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Steven M. Chapman, Joseph Popp, Mehul Patel