Patents by Inventor Steven A. Chapman

Steven A. Chapman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240109564
    Abstract: A method is provided that can include activating at least two wireless communication channels in parallel, between a first wireless transceiver and a second wireless transceiver. Each of the at least two wireless communication channels can operate at a different radio carrier frequency, and the first wireless transceiver may be part of a first vehicle. The method can also include transmitting, by the first wireless transceiver, common information in parallel on the at least two wireless communication channels to the second wireless transceiver and deactivating the at least two wireless communication channels.
    Type: Application
    Filed: December 12, 2023
    Publication date: April 4, 2024
    Inventors: Padam Dhoj Swar, Carl L. Haas, Danial Rice, Rebecca W. Dreasher, Adam Hausmann, Matthew Steven Vrba, Edward J. Kuchar, James Lucas, Andrew Ryan Staats, Jerrid D. Chapman, Jeffrey D. Kernwein, Janmejay Tripathy, Stephen Craven, Tania Lindsley, Derek K. Woo, Ann K. Grimm, Scott Sollars, Phillip A. Burgart, James Allen Oswald, Shannon K. Struttmann, Stuart J. Barr, Keith Smith, Francois P. Pretorius, Craig K. Green, Kendrick Gawne, Irwin Morris, Joseph W. Gorman, Srivallidevi Muthusami, Mahesh Babu Natarajan, Jeremiah Dirnberger, Adam Franco
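
A minimal sketch of the redundancy idea in this entry: the same payload is sent over two simulated channels standing in for the parallel radio links, and the receiver keeps whichever copy arrives intact. The carrier frequencies, loss rates, and function names are assumptions for illustration, not details from the application.

```python
import random
from typing import Optional

def send_on_channel(payload: bytes, carrier_hz: float, loss_rate: float) -> Optional[bytes]:
    """Simulate one wireless channel; the copy is lost with probability loss_rate."""
    return payload if random.random() > loss_rate else None

def transmit_redundant(payload: bytes) -> Optional[bytes]:
    # Activate two channels in parallel at different carrier frequencies,
    # send the common information on both, then deactivate them.
    channels = [(5.9e9, 0.10), (2.4e9, 0.10)]   # hypothetical carriers and loss rates
    copies = [send_on_channel(payload, f, p) for f, p in channels]
    # Keep whichever copy survived; losing both is far less likely than losing one.
    return next((c for c in copies if c is not None), None)

print(transmit_redundant(b"train control message"))
```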
  • Patent number: 11936842
    Abstract: An immersive experience system is provided. The immersive experience system has a processor that determines a position of a first head-mounted display. Further, the processor determines a position of a second head-mounted display. The processor also generates a first image for a first immersive experience corresponding to the position of the first head-mounted display. Moreover, the processor encodes the first image into a first infrared spectrum illumination having a first wavelength. In addition, the processor generates a second image for a second immersive experience corresponding to the position of the second head-mounted display. Finally, the processor encodes the second image into a second infrared spectrum illumination having a second wavelength. The first wavelength is distinct from the second wavelength.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: March 19, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven Chapman, Joseph Popp, Alice Taylor, Joseph Hager
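
A rough illustration of the wavelength-multiplexing idea in this entry (and the related publications below that share its abstract): each rendered image is tagged with a distinct infrared wavelength, and each headset keeps only the image matching its assigned wavelength. The wavelength values and data structures are hypothetical.

```python
from typing import Dict, List

# Hypothetical infrared wavelengths (nm) assigned to the two head-mounted displays.
HMD_WAVELENGTHS = {"hmd_1": 850, "hmd_2": 940}

def encode(image: str, wavelength_nm: int) -> Dict:
    """Pair an image payload with the IR wavelength used to carry it."""
    return {"wavelength_nm": wavelength_nm, "image": image}

def receive(signals: List[Dict], hmd_id: str) -> List[str]:
    """Each headset keeps only the illumination matching its own wavelength."""
    target = HMD_WAVELENGTHS[hmd_id]
    return [s["image"] for s in signals if s["wavelength_nm"] == target]

signals = [encode("scene for viewer 1", 850), encode("scene for viewer 2", 940)]
print(receive(signals, "hmd_1"))   # only the first viewer's image
```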
  • Publication number: 20230408162
    Abstract: In one aspect, systems for storing and/or transporting a payload are described herein. In some embodiments, systems have a portable storage vessel having an internal cavity, the internal cavity having: a central zone for receiving the payload; a first cooling zone disposed radially outward from the central zone, with a first phase change material disposed in the first cooling zone; a second cooling zone disposed radially outward from the first cooling zone, with a second phase change material disposed in the second cooling zone, wherein the first phase change material has a first phase transition temperature; wherein the second phase change material has a second phase transition temperature; and wherein the second phase transition temperature is between 10° C. and 15° C. higher than the first phase transition temperature; and wherein each of the first phase transition temperature and the second phase transition temperature is below 0° C.
    Type: Application
    Filed: November 15, 2021
    Publication date: December 21, 2023
    Applicant: Phase Change Energy Solutions, Inc.
    Inventors: Reyad I. Sawafta, Venu Gopal R. Kuturu, Brian Steven Chapman
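
The temperature relationship recited in this abstract reduces to a simple check, sketched below with illustrative values; nothing here is drawn from the claims beyond the stated bounds.

```python
def transition_temps_valid(t1_c: float, t2_c: float) -> bool:
    """Check the relationship stated in the abstract: both phase transition
    temperatures below 0 °C, with the second 10-15 °C above the first."""
    return t1_c < 0 and t2_c < 0 and 10 <= (t2_c - t1_c) <= 15

print(transition_temps_valid(-30.0, -18.0))   # True: 12 °C apart, both sub-zero
```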
  • Patent number: 11803885
    Abstract: A process generates a certificate of authenticity for a virtual item. Further, the process sends the certificate of authenticity to a decentralized network of computing devices such that two or more of the computing devices store the certificate of authenticity. The two or more computing devices receive, from a user device that provides a virtual reality experience in which a virtual item is purchased, a request for authentication of the certificate of authenticity. In addition, the two or more computing devices authenticate the certificate of authenticity based on one or more consistency criteria for the certificate of authenticity being met by the two or more computing devices.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: October 31, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven Chapman, Edwin Rosero, Mehul Patel, Joseph Popp, Calis Agyemang
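
A minimal sketch of the consistency check described in this entry, assuming a hash-based comparison and a simple majority rule; the patent does not specify these particulars.

```python
import hashlib
from collections import Counter
from typing import Dict, List

def fingerprint(certificate: Dict) -> str:
    """Deterministic digest of a certificate of authenticity."""
    canonical = repr(sorted(certificate.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

def authenticate(node_copies: List[Dict], presented: Dict) -> bool:
    """Authenticate when the presented certificate matches the copy held by a
    majority of the nodes (one hypothetical consistency criterion)."""
    digests = Counter(fingerprint(c) for c in node_copies)
    consensus_digest, votes = digests.most_common(1)[0]
    return votes > len(node_copies) // 2 and fingerprint(presented) == consensus_digest

cert = {"item_id": "sword-042", "owner": "player-7"}
print(authenticate([dict(cert), dict(cert), dict(cert)], cert))   # True
```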
  • Patent number: 11610239
    Abstract: Systems and methods for providing machine-learning enabled user-specific evaluations are disclosed. Implementations include obtaining a first set of evaluation data from a user interface, obtaining a first set of target-descriptive data including target-specific characteristics objectively describing the evaluation targets, and training, with a machine-learning algorithm, a user-specific evaluation profile indicating evaluation patterns relative to the first set of evaluation data and the first set of target-specific characteristics. Implementations include applying the user-specific evaluation profile to a second set of target-descriptive data to predict a user-specific evaluation.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 21, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Benjamin Havey
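
The training and prediction steps in this abstract amount to fitting a model from objective target characteristics to a user's past evaluations and applying it to new targets. The ordinary-least-squares fit below, with made-up data, is a stand-in for whatever machine-learning algorithm the implementation actually uses.

```python
import numpy as np

# Made-up data: rows are evaluation targets, columns are objective characteristics.
target_features = np.array([[7.5, 1.0], [3.0, 0.0], [9.0, 1.0], [5.0, 0.0]])
user_ratings = np.array([4.5, 2.0, 5.0, 3.0])          # this user's past evaluations

# "Train" a user-specific evaluation profile: here, least-squares weights plus a bias.
X = np.hstack([target_features, np.ones((len(target_features), 1))])
profile, *_ = np.linalg.lstsq(X, user_ratings, rcond=None)

# Apply the profile to a second set of target-descriptive data.
new_targets = np.array([[6.0, 1.0], [2.5, 0.0]])
X_new = np.hstack([new_targets, np.ones((len(new_targets), 1))])
print(X_new @ profile)                                 # predicted user-specific evaluations
```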
  • Publication number: 20220417489
    Abstract: The disclosure is directed to a method including capturing image data of a real-world environment at a first location and a second location different than the first location, the image data including a plurality of intersecting image data points captured at both the first location and the second location and a representation of an area occluded by an obstacle from a point of view of a user; generating a synthesized field of view (“FOV”) that includes the plurality of intersecting image data points such that the synthesized FOV includes an unobstructed view of the area occluded by the obstacle from the point of view of the user; and rendering the synthesized FOV as a visual display.
    Type: Application
    Filed: July 8, 2022
    Publication date: December 29, 2022
    Inventors: Steven Chapman, Mark Arana, Michael Goslin, Joseph Popp, Mehul Patel
  • Publication number: 20220277421
    Abstract: In one embodiment, a method includes receiving a pair of stereo images having a resolution lower than a target resolution, generating an initial first feature map for a first image of the pair based on first channels associated with the first image and generating an initial second feature map for a second image of the pair based on second channels associated with the second image, generating a first feature map based on combining the first channels with the initial first feature map, generating a second feature map based on combining the second channels with the initial second feature map, up-sampling the first feature map and the second feature map to the target resolution, warping the up-sampled second feature map, and generating a reconstructed image corresponding to the first image having the target resolution based on the up-sampled first feature map and the up-sampled and warped second feature map.
    Type: Application
    Filed: May 16, 2022
    Publication date: September 1, 2022
    Inventors: Lei Xiao, Salah Eddine Nouri, Douglas Robert Lanman, Anton S. Kaplanyan, Alexander Jobe Fix, Matthew Steven Chapman
  • Patent number: 11394949
    Abstract: The disclosure is directed to providing operator visibility through an object that occludes the view of the operator by using an HMD system in communication with one or more imaging devices. A method of doing so may include: using one or more imaging devices coupled to an exterior of an object to capture image data of a real-world environment surrounding the object; calculating an orientation of an HMD positioned in an interior of the object; using at least the calculated orientation of the HMD, synthesizing a FOV of the image data with a FOV of the HMD; and rendering the synthesized FOV of the image data to the HMD. In some instances, the captured image data is used to create a three-dimensional model of the real-world environment, and synthesizing the FOV of the image data includes synthesizing a FOV of the 3D model with the FOV of the HMD.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: July 19, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven Chapman, Mark Arana, Michael Goslin, Joseph Popp, Mehul Patel
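
One way to picture the FOV synthesis in this entry (and the related publication 20220417489 above): pixels of the head-mounted display's view that fall behind the occluding object are replaced with corresponding pixels from the exterior cameras. The sketch assumes the two views are already registered, glossing over the orientation-dependent reprojection the patent describes.

```python
import numpy as np

def synthesize_fov(hmd_view: np.ndarray,
                   exterior_view: np.ndarray,
                   occlusion_mask: np.ndarray) -> np.ndarray:
    """Composite an unobstructed view: where the mask marks occluded pixels,
    take the exterior camera's pixels; elsewhere keep the HMD's own view."""
    return np.where(occlusion_mask[..., None], exterior_view, hmd_view)

hmd_view = np.zeros((4, 4, 3), dtype=np.uint8)            # toy 4x4 RGB frames
exterior_view = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                                     # region hidden by the object
print(synthesize_fov(hmd_view, exterior_view, mask)[1, 1])   # -> [255 255 255]
```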
  • Patent number: 11367165
    Abstract: In one embodiment, a method includes receiving a first frame associated with a first time and one or more second frames of a video having a resolution lower than a target resolution, wherein each second frame is associated with a second time prior to the first time, generating a first feature map for the first frame and one or more second feature maps for the one or more second frames, up-sampling the first feature map and the one or more second feature maps to the target resolution, warping each of the up-sampled second feature maps according to a motion estimation between the associated second time and the first time, and generating a reconstructed frame having the target resolution corresponding to the first frame by using a machine-learning model to process the up-sampled first feature map and the one or more up-sampled and warped second feature maps.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: June 21, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Lei Xiao, Salah Eddine Nouri, Douglas Robert Lanman, Anton S. Kaplanyan, Alexander Jobe Fix, Matthew Steven Chapman
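
The pipeline in this abstract (feature extraction, up-sampling, motion-compensated warping, learned reconstruction) can be sketched with standard tensor operations, as below. The bilinear warping, toy shapes, and single-convolution reconstructor are assumptions for illustration; the patent does not prescribe them.

```python
import torch
import torch.nn.functional as F

def warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp an up-sampled feature map toward the current frame using a dense
    motion estimate `flow` (pixel offsets, shape [N, 2, H, W])."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow   # sample positions
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0                 # normalize to [-1, 1] for grid_sample
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)

# Toy shapes: low-resolution feature maps and a 2x target resolution.
n, c, h, w, scale = 1, 8, 32, 32, 2
feat_t = torch.randn(n, c, h, w)                          # current frame's feature map
feat_prev = torch.randn(n, c, h, w)                       # an earlier frame's feature map
flow = torch.zeros(n, 2, h * scale, w * scale)            # motion estimate at target resolution

up_t = F.interpolate(feat_t, scale_factor=scale, mode="bilinear", align_corners=False)
up_prev = warp(F.interpolate(feat_prev, scale_factor=scale, mode="bilinear",
                             align_corners=False), flow)

# Minimal stand-in for the reconstruction model: one conv mapping fused features to RGB.
reconstruct = torch.nn.Conv2d(2 * c, 3, kernel_size=3, padding=1)
frame = reconstruct(torch.cat([up_t, up_prev], dim=1))
print(frame.shape)                                        # torch.Size([1, 3, 64, 64])
```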
  • Publication number: 20210366082
    Abstract: In one embodiment, a method includes receiving a first frame associated with a first time and one or more second frames of a video having a resolution lower than a target resolution, wherein each second frame is associated with a second time prior to the first time, generating a first feature map for the first frame and one or more second feature maps for the one or more second frames, up-sampling the first feature map and the one or more second feature maps to the target resolution, warping each of the up-sampled second feature maps according to a motion estimation between the associated second time and the first time, and generating a reconstructed frame having the target resolution corresponding to the first frame by using a machine-learning model to process the up-sampled first feature map and the one or more up-sampled and warped second feature maps.
    Type: Application
    Filed: September 30, 2020
    Publication date: November 25, 2021
    Inventors: Lei Xiao, Salah Eddine Nouri, Douglas Robert Lanman, Anton S. Kaplanyan, Alexander Jobe Fix, Matthew Steven Chapman
  • Publication number: 20210352257
    Abstract: An immersive experience system is provided. The immersive experience system has a processor that determines a position of a first head-mounted display. Further, the processor determines a position of a second head-mounted display. The processor also generates a first image for a first immersive experience corresponding to the position of the first head-mounted display. Moreover, the processor encodes the first image into a first infrared spectrum illumination having a first wavelength. In addition, the processor generates a second image for a second immersive experience corresponding to the position of the second head-mounted display. Finally, the processor encodes the second image into a second infrared spectrum illumination having a second wavelength. The first wavelength is distinct from the second wavelength.
    Type: Application
    Filed: July 19, 2021
    Publication date: November 11, 2021
    Inventors: Steven Chapman, Joseph Popp, Alice Taylor, Joseph Hager
  • Patent number: 11113794
    Abstract: In one embodiment, a computing system may receive current eye-tracking data associated with a user of a head-mounted display. The system may dynamically adjust a focal length of the head-mounted display based on the current eye-tracking data. The system may generate an in-focus image of a scene and a corresponding depth map of the scene. The system may generate a circle-of-confusion map for the scene based on the depth map. The circle-of-confusion map encodes a desired focal surface in the scene. The system may generate, using a machine-learning model, an output image with a synthesized defocus-blur effect by processing the in-focus image, the corresponding depth map, and the circle-of-confusion map of the scene. The system may display the output image with the synthesized defocus-blur effect to the user via the head-mounted display having the adjusted focal length.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: September 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
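
The circle-of-confusion map mentioned in this entry follows from thin-lens geometry once an aperture, focal length, and focus distance are chosen; the values below are arbitrary, and the formula is the standard thin-lens blur-diameter expression rather than anything specific to the patent.

```python
import numpy as np

def circle_of_confusion(depth_m: np.ndarray,
                        focus_dist_m: float = 1.5,
                        focal_len_m: float = 0.05,
                        aperture_m: float = 0.025) -> np.ndarray:
    """Per-pixel blur-circle diameter (meters) from a depth map, via the
    thin-lens relation  c = A * f * |z - z_f| / (z * (z_f - f))."""
    z = depth_m
    return (aperture_m * focal_len_m * np.abs(z - focus_dist_m)
            / (z * (focus_dist_m - focal_len_m)))

depth_map = np.array([[0.5, 1.5], [3.0, 10.0]])   # toy 2x2 depth map, meters
print(circle_of_confusion(depth_map))             # zero blur exactly at the 1.5 m focal plane
```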
  • Patent number: 11094075
    Abstract: In one embodiment, a system may access a training sample that includes training images and corresponding training depth maps of a scene, with the training images being associated with different predetermined viewpoints of the scene. The system may generate elemental images of the scene by processing the training images and the training depth maps using a machine-learning model. The elemental images are associated with more viewpoints of the scene than the predetermined viewpoints associated with the training images. The system may update the machine-learning model based on a comparison between the generated elemental images of the scene and target elemental images that are each associated with a predetermined viewpoint. The updated machine-learning model is configured to generate elemental images of a scene of interest based on input images and corresponding depth maps of the scene of interest from different viewpoints.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: August 17, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao
  • Patent number: 11070786
    Abstract: An immersive experience system is provided. The immersive experience system has a processor that determines a position of a first head-mounted display. Further, the processor determines a position of a second head-mounted display. The processor also generates a first image for a first immersive experience corresponding to the position of the first head-mounted display. Moreover, the processor encodes the first image into a first infrared spectrum illumination having a first wavelength. In addition, the processor generates a second image for a second immersive experience corresponding to the position of the second head-mounted display. Finally, the processor encodes the second image into a second infrared spectrum illumination having a second wavelength. The first wavelength is distinct from the second wavelength.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: July 20, 2021
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven Chapman, Joseph Popp, Alice Taylor, Joseph Hager
  • Patent number: 10904449
    Abstract: Systems and methods described herein are directed to capturing intrinsic color images of subjects. A camera may be equipped with a light source that is coaxial to the camera's image sensor and configured to emit a pulse of light of short duration. During image capture of a subject, the camera light source may emit the pulse of light through the lens barrel of the camera and stop emission of light before the reflected light from the light source returns. Thereafter, the camera lens receives the reflected light from the light source (with the light source no longer emitting light) and charge is collected at one or more image sensor photodetector sites of the camera.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: January 26, 2021
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Calis Agyemang, Joseph Hager
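
The timing constraint in this abstract (stop emitting before the reflection returns) reduces to a round-trip light-time calculation; the subject distance below is an arbitrary example.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def max_pulse_duration_s(subject_distance_m: float) -> float:
    """Longest pulse that ends before its own reflection arrives back at the
    coaxial sensor: the round-trip time of light over the subject distance."""
    return 2.0 * subject_distance_m / SPEED_OF_LIGHT_M_S

# For a subject 3 m away, emission must stop within roughly 20 nanoseconds.
print(f"{max_pulse_duration_s(3.0) * 1e9:.1f} ns")
```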
  • Patent number: 10885657
    Abstract: A process determines a position of an image capture device with respect to a physical object. The position corresponds to a vantage point for an initial image capture of the physical object performed by the image capture device at a first time. Further, the process generates an image corresponding to the position. In addition, the process displays the image on the image capture device. Finally, the process outputs one or more feedback indicia that direct a user to orient the image capture device to the image for a subsequent image capture at a second time within a predetermined tolerance threshold of the vantage point.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: January 5, 2021
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Alice Taylor
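
The feedback step described here can be reduced to comparing the device's current pose with the stored vantage point and cueing the user until the difference falls within tolerance. The yaw-only pose model and the tolerance value below are illustrative assumptions.

```python
def alignment_feedback(current_yaw_deg: float, target_yaw_deg: float,
                       tolerance_deg: float = 2.0) -> str:
    """Return a simple directional cue until the device is within tolerance of
    the vantage point used for the initial capture (yaw only, for brevity)."""
    error = (target_yaw_deg - current_yaw_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    if abs(error) <= tolerance_deg:
        return "hold: aligned with the original vantage point"
    return "rotate right" if error > 0 else "rotate left"

print(alignment_feedback(current_yaw_deg=110.0, target_yaw_deg=95.0))    # rotate left
```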
  • Patent number: 10832380
    Abstract: Systems and methods for correcting color of uncalibrated material are disclosed. Example embodiments include a system to correct color of uncalibrated material. The system may include a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium may store instructions that, when executed, cause the processors to perform a number of operations. One operation is to obtain a target image of a degraded target material with one or more objects. The degraded target material comprises degraded colors and light information corresponding to light sources in the degraded target material. Another operation is to obtain color reference data. Another operation is to identify an object in the target image that corresponds to the color reference data. Yet another operation is to correct the identified object in the target image. Another operation is to correct the target image.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: November 10, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Ty Popko, Erika Doggett
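
One simple reading of the correction described here: derive per-channel gains from an object whose true color is known from the reference data, then apply those gains to the whole target image. The gains-only model is an assumption; real implementations are more involved.

```python
import numpy as np

def correct_with_reference(image: np.ndarray,
                           observed_ref_rgb: np.ndarray,
                           true_ref_rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so the identified reference object regains its known
    color, then apply the same gains to the rest of the target image."""
    gains = true_ref_rgb / np.maximum(observed_ref_rgb, 1e-6)
    return np.clip(image * gains, 0.0, 1.0)

degraded = np.full((2, 2, 3), [0.4, 0.3, 0.2])    # uniformly color-cast toy image
observed = np.array([0.4, 0.3, 0.2])              # reference object as it appears
true_rgb = np.array([0.8, 0.6, 0.4])              # its color per the reference data
print(correct_with_reference(degraded, observed, true_rgb)[0, 0])   # -> [0.8 0.6 0.4]
```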
  • Publication number: 20200351486
    Abstract: An immersive experience system is provided. The immersive experience system has a processor that determines a position of a first head-mounted display. Further, the processor determines a position of a second head-mounted display. The processor also generates a first image for a first immersive experience corresponding to the position of the first head-mounted display. Moreover, the processor encodes the first image into a first infrared spectrum illumination having a first wavelength. In addition, the processor generates a second image for a second immersive experience corresponding to the position of the second head-mounted display. Finally, the processor encodes the second image into a second infrared spectrum illumination having a second wavelength. The first wavelength is distinct from the second wavelength.
    Type: Application
    Filed: May 2, 2019
    Publication date: November 5, 2020
    Inventors: Steven Chapman, Joseph Popp, Alice Taylor, Joseph Hager
  • Patent number: 10818097
    Abstract: A user control apparatus has a laser emitter that emits a laser beam in a real-world environment. Further, the user control apparatus has an optical element that receives the laser beam and generates a plurality of laser beams such that a starting point and a plurality of endpoints, each corresponding to one of the plurality of laser beams, form a laser frustum. In addition, the user control apparatus has an image capture device that captures an image of a shape of the laser frustum based on a reflection of the plurality of laser beams from an object in the real-world environment so that a spatial position of the object in the real-world environment is determined for an augmented reality or virtual reality user experience.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: October 27, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Steven Chapman, Joseph Hager, Joseph Popp, Calis Agyemang, Mehul Patel
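
The spatial-position estimate in this entry relies on frustum geometry: the farther the object, the wider apart the reflected beam endpoints appear. With a known divergence angle, distance follows from similar triangles; the angle and spread values below are made up.

```python
import math

def distance_from_frustum(spread_m: float, divergence_deg: float) -> float:
    """Estimate object distance from the measured spacing of the reflected laser
    endpoints, assuming the beams fan out from a single point at a known angle."""
    half_angle = math.radians(divergence_deg) / 2.0
    return spread_m / (2.0 * math.tan(half_angle))

# Endpoints observed 0.35 m apart with a 10-degree frustum -> object about 2 m away.
print(round(distance_from_frustum(0.35, 10.0), 2))
```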
  • Publication number: 20200311881
    Abstract: In one embodiment, a computing system may receive current eye-tracking data associated with a user of a head-mounted display. The system may dynamically adjust a focal length of the head-mounted display based on the current eye-tracking data. The system may generate an in-focus image of a scene and a corresponding depth map of the scene. The system may generate a circle-of-confusion map for the scene based on the depth map. The circle-of-confusion map encodes a desired focal surface in the scene. The system may generate, using a machine-learning model, an output image with a synthesized defocus-blur effect by processing the in-focus image, the corresponding depth map, and the circle-of-confusion map of the scene. The system may display the output image with the synthesized defocus-blur effect to the user via the head-mounted display having the adjusted focal length.
    Type: Application
    Filed: June 16, 2020
    Publication date: October 1, 2020
    Inventors: Douglas Robert Lanman, Matthew Steven Chapman, Alexander Jobe Fix, Anton S. Kaplanyan, Lei Xiao