Patents by Inventor Arthur Van Hoff

Arthur Van Hoff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10819970
    Abstract: A camera system is configured to capture video content with 360-degree views of an environment. The camera array comprises a housing including a first quadrant, a second quadrant, a third quadrant, and a fourth quadrant, wherein each of the first quadrant, the second quadrant, the third quadrant, and the fourth quadrant forms a plurality of apertures, a chassis bottom that is removably coupled to the housing, and a plurality of camera modules, each camera module comprising a processor, a memory, a sensor, and a lens, wherein each of the camera modules is removably coupled to one of the plurality of apertures in the housing, wherein the first quadrant, the second quadrant, the third quadrant, and the fourth quadrant each include a subset of the plurality of camera modules. Each of the plurality of camera modules includes a heat sink.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: October 27, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Arthur Van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner, Punit Govenji, James Dunn
  • Publication number: 20200312011
    Abstract: An illustrative volumetric capture system accesses a machine learning model associated with bodies of a particular body type, as well as a two-dimensional (2D) image captured by a capture device located at a real-world scene. The 2D image depicts a body of the particular body type that is present at the real-world scene. Using the machine learning model and based on the 2D image, the volumetric capture system identifies a 2D joint location, from a perspective of the capture device, of a particular joint of the body. The volumetric capture system also generates a three-dimensional (3D) reference model of the body that represents the particular joint of the body at a 3D joint location that is determined based on the 2D joint location identified using the machine learning model. Corresponding methods and systems are also disclosed.
    Type: Application
    Filed: March 26, 2020
    Publication date: October 1, 2020
    Inventors: Daniel Kopeinigg, Andrew Walkingshaw, Arthur van Hoff, Charles LePere, Christopher Redmann, Philip Lee, Solmaz Hajmohammadi, Sourabh Khire, Simion Venshtain
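    Illustrative sketch (not from the publication): one way the 2D-to-3D joint step could be implemented, assuming the machine learning model's 2D joint detections are available from two or more calibrated capture devices and using standard linear triangulation. The function names, camera parameters, and the triangulation itself are assumptions for illustration.
    ```python
    """Hedged sketch: lifting a 2D joint detection to a 3D joint location.

    Assumes detections from two or more calibrated capture devices and uses
    linear (DLT) triangulation; neither is specified by the publication.
    """
    import numpy as np

    def triangulate_joint(projections, points_2d):
        """Triangulate one joint from several 2D detections.

        projections: list of 3x4 camera projection matrices.
        points_2d:   list of (u, v) pixel coordinates, one per camera.
        """
        rows = []
        for P, (u, v) in zip(projections, points_2d):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        # The homogeneous solution is the right singular vector with the
        # smallest singular value.
        _, _, vt = np.linalg.svd(np.stack(rows))
        X = vt[-1]
        return X[:3] / X[3]

    if __name__ == "__main__":
        # Two toy cameras five metres from the origin, 90 degrees apart.
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
        R = np.array([[0.0, 0.0, -1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
        P2 = K @ np.hstack([R, [[0.0], [0.0], [5.0]]])
        joint = np.array([0.2, -0.1, 0.3, 1.0])            # ground-truth joint
        uv1 = (P1 @ joint)[:2] / (P1 @ joint)[2]
        uv2 = (P2 @ joint)[:2] / (P2 @ joint)[2]
        print(triangulate_joint([P1, P2], [uv1, uv2]))     # ~[ 0.2 -0.1  0.3]
    ```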
  • Publication number: 20200275079
    Abstract: An illustrative method includes at least one processor receiving video data that describes a set of images captured by a set of camera modules of a camera array, stitching the set of images together, based on relative positions of the camera modules, to generate three-dimensional (3D) video content, and correcting lens distortion to remove one or more lens distortion effects from the set of images or the 3D video content. Corresponding methods and systems are described.
    Type: Application
    Filed: May 8, 2020
    Publication date: August 27, 2020
    Inventors: Arthur van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner
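    Illustrative sketch (not from the publication): removing lens distortion from each module's frames and stitching them, assuming per-module calibration data and using OpenCV's generic feature-based stitcher in place of the position-based stitching described above. The file names and calibration values are placeholders.
    ```python
    """Hedged sketch: undistort frames from several camera modules, then stitch."""
    import cv2
    import numpy as np

    def undistort_frames(frames, camera_matrix, dist_coeffs):
        """Remove lens distortion using the module's calibration."""
        return [cv2.undistort(f, camera_matrix, dist_coeffs) for f in frames]

    def stitch_frames(frames):
        """Stitch the undistorted frames into a single panorama."""
        stitcher = cv2.Stitcher_create()
        status, panorama = stitcher.stitch(frames)
        if status != 0:  # 0 == cv2.Stitcher_OK
            raise RuntimeError(f"stitching failed with status {status}")
        return panorama

    if __name__ == "__main__":
        # Placeholder calibration; real values come from calibrating each module.
        camera_matrix = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])
        dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3
        frames = [cv2.imread(p) for p in ("module_0.jpg", "module_1.jpg", "module_2.jpg")]
        frames = undistort_frames(frames, camera_matrix, dist_coeffs)
        cv2.imwrite("panorama.jpg", stitch_frames(frames))
    ```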
  • Patent number: 10708568
    Abstract: The disclosure includes a system and method for generating virtual reality content. For example, the disclosure includes a method for generating virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data with a processor-based computing device programmed to perform the generating, providing the virtual reality content to a user, detecting a location of the user's gaze at the virtual reality content, and suggesting an advertisement based on the location of the user's gaze. Another example includes receiving virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data to a first user with a processor-based computing device programmed to perform the receiving, generating a social network for the first user, and generating a social graph that includes user interactions with the virtual reality content.
    Type: Grant
    Filed: August 21, 2014
    Date of Patent: July 7, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jens Christensen, Thomas M. Annau, Arthur Van Hoff
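    Illustrative sketch (not from the patent): suggesting an advertisement from a detected gaze direction. The tagged regions, ad identifiers, and angular matching are assumptions for illustration; the patent does not specify this data model.
    ```python
    """Hedged sketch: map a gaze direction in a 360-degree scene to an advertisement."""
    import math

    # Regions of the scene tagged with an ad: (yaw_deg, pitch_deg, radius_deg, ad_id).
    TAGGED_REGIONS = [
        (0.0, -10.0, 20.0, "ad_coffee_shop"),
        (90.0, 5.0, 15.0, "ad_sports_gear"),
        (-120.0, 0.0, 25.0, "ad_travel"),
    ]

    def angular_distance(yaw1, pitch1, yaw2, pitch2):
        """Great-circle angle in degrees between two view directions."""
        y1, p1, y2, p2 = map(math.radians, (yaw1, pitch1, yaw2, pitch2))
        c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(y1 - y2)
        return math.degrees(math.acos(max(-1.0, min(1.0, c))))

    def suggest_advertisement(gaze_yaw, gaze_pitch):
        """Return the ad whose tagged region the gaze falls inside, if any."""
        for yaw, pitch, radius, ad_id in TAGGED_REGIONS:
            if angular_distance(gaze_yaw, gaze_pitch, yaw, pitch) <= radius:
                return ad_id
        return None

    if __name__ == "__main__":
        print(suggest_advertisement(5.0, -12.0))   # ad_coffee_shop
        print(suggest_advertisement(180.0, 0.0))   # None
    ```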
  • Patent number: 10701426
    Abstract: The disclosure includes a system and method for receiving viewing data that describes a location of a first user's gaze while viewing virtual reality content. The method also includes determining an object of interest in the virtual reality content based on the location of the first user's gaze. The method also includes generating a social network that includes the first user as a member of the social network. The method also includes performing an action in the social network related to the object of interest.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: June 30, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Arthur van Hoff, Thomas M. Annau, Jens Christensen
  • Patent number: 10694167
    Abstract: The disclosure includes a camera array comprising camera modules, the camera modules comprising a master camera that includes a processor, a memory, a sensor, a lens, a status indicator, and a switch, the switch configured to instruct each of the camera modules to initiate a start operation to start recording video data using the lens and the sensor in the other camera modules and the switch configured to instruct each of the camera modules to initiate a stop operation to stop recording, the status indicator configured to indicate a status of at least one of the camera modules. Lens distortion effects may be removed from the frames described by the video data. The camera modules of the camera array are configured to provide a 3× field of view overlap.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: June 23, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Arthur Van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner
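    Illustrative sketch (not from the patent): a master camera fanning out start and stop operations to the other camera modules and reporting a status. The classes and in-process calls stand in for the hardware switch and status indicator described above.
    ```python
    """Hedged sketch: master-camera start/stop fan-out across camera modules."""
    from dataclasses import dataclass

    @dataclass
    class CameraModule:
        module_id: int
        recording: bool = False

        def start(self):
            self.recording = True

        def stop(self):
            self.recording = False

    class MasterCamera(CameraModule):
        """A module whose switch instructs every module, itself included."""

        def __init__(self, module_id, others):
            super().__init__(module_id)
            self.modules = [self] + list(others)

        def press_switch(self, start):
            for module in self.modules:
                module.start() if start else module.stop()

        def status_indicator(self):
            """Mirror a status LED: lit only while every module is recording."""
            return "recording" if all(m.recording for m in self.modules) else "idle"

    if __name__ == "__main__":
        master = MasterCamera(0, [CameraModule(i) for i in range(1, 4)])
        master.press_switch(start=True)
        print(master.status_indicator())   # recording
        master.press_switch(start=False)
        print(master.status_indicator())   # idle
    ```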
  • Patent number: 10691202
    Abstract: The disclosure includes a system and method for receiving viewing data that describes a location of a first user's gaze while viewing virtual reality content. The method also includes determining an object of interest in the virtual reality content based on the location of the first user's gaze. The method also includes generating a social network that includes the first user as a member of the social network. The method also includes performing an action in the social network related to the object of interest.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: June 23, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Arthur van Hoff, Thomas M. Annau, Jens Christensen
  • Publication number: 20200195906
    Abstract: The disclosure includes a camera array comprising camera modules, the camera modules comprising a master camera that includes a processor, a memory, a sensor, a lens, a status indicator, and a switch, the switch configured to instruct each of the camera modules to initiate a start operation to start recording video data using the lens and the sensor in the other camera modules and the switch configured to instruct each of the camera modules to initiate a stop operation to stop recording, the status indicator configured to indicate a status of at least one of the camera modules. Lens distortion effects may be removed from the frames described by the video data. The camera modules of the camera array are configured to provide a 3× field of view overlap.
    Type: Application
    Filed: December 12, 2018
    Publication date: June 18, 2020
    Inventors: Arthur Van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner
  • Publication number: 20200184601
    Abstract: Illustrative methods and systems for generating virtual reality content based on corrections to stitching errors are described. An illustrative computer-implemented method includes receiving raw virtual reality video data representing images recorded by camera modules of a camera array, stitching the images together to generate an initial virtual reality render, determining that the initial virtual reality render has a stitching error, and generating a corrected virtual reality render that corrects the stitching error.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Inventors: Olaf Brandt, Anatoli Adamov, Arthur Van Hoff
  • Patent number: 10666921
    Abstract: The disclosure includes a system and method for generating virtual reality content. For example, the disclosure includes a method for generating virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data with a processor-based computing device programmed to perform the generating, providing the virtual reality content to a user, detecting a location of the user's gaze at the virtual reality content, and suggesting an advertisement based on the location of the user's gaze. Another example includes receiving virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data to a first user with a processor-based computing device programmed to perform the receiving, generating a social network for the first user, and generating a social graph that includes user interactions with the virtual reality content.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: May 26, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jens Christensen, Thomas M. Annau, Arthur Van Hoff
  • Patent number: 10665261
    Abstract: The disclosure includes a camera array comprising camera modules, the camera modules comprising a master camera that includes a processor, a memory, a sensor, a lens, a status indicator, and a switch, the switch configured to instruct each of the camera modules to initiate a start operation to start recording video data using the lens and the sensor in the other camera modules and the switch configured to instruct each of the camera modules to initiate a stop operation to stop recording, the status indicator configured to indicate a status of at least one of the camera modules. The camera modules of the camera array are configured to provide a 3× field of view overlap.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: May 26, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Arthur Van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner
  • Patent number: 10652314
    Abstract: Cloud-based virtual reality content processing methods and systems are disclosed. In some embodiments, raw virtual reality video data recorded by a virtual reality camera system may be received at a cloud based server through a network interface. The raw virtual reality video data may be processed to generate one or more virtual reality renders. A first virtual reality content may be processed in a first virtual reality format from the one or more virtual reality renders. A second virtual reality content may be processed in a second virtual reality format from the one or more virtual reality renders. The first virtual reality content and the second virtual reality content may be provided for download through a network interface.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: May 12, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Anatoli D. Adamov, Arthur van Hoff, Adam P. Gaige, Chris P. Redmann, Sarah E. Parks
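    Illustrative sketch (not from the patent): producing two download formats from the same set of renders. The format table and the encode step are placeholders for whatever transcoder a real pipeline would invoke; they are not the formats named in the patent.
    ```python
    """Hedged sketch: encode one set of VR renders into two downloadable formats."""
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VRFormat:
        name: str
        projection: str
        stereo: bool
        width: int
        height: int

    FORMATS = [
        VRFormat("headset", "equirectangular", True, 5760, 2880),
        VRFormat("mobile", "equirectangular", False, 3840, 1920),
    ]

    def encode(render_path, fmt):
        """Placeholder for the real transcode step; returns the output file name."""
        out = f"{render_path}.{fmt.name}.mp4"
        print(f"encoding {render_path} -> {out} ({fmt.projection}, "
              f"stereo={fmt.stereo}, {fmt.width}x{fmt.height})")
        return out

    def process_renders(render_paths):
        """Produce every configured format for every render, keyed by format name."""
        return {fmt.name: [encode(path, fmt) for path in render_paths] for fmt in FORMATS}

    if __name__ == "__main__":
        print(process_renders(["render_001", "render_002"]))
    ```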
  • Publication number: 20200134911
    Abstract: An exemplary three-dimensional (3D) simulation system accesses a two-dimensional (2D) video image captured by a video capture device and that depicts a bounded real-world scene and a real-world object present within the bounded real-world scene. The 3D simulation system accesses respective 3D models of the bounded real-world scene and the real-world object. Based on the 2D video image, the 3D simulation system tracks a spatial characteristic of the real-world object relative to the bounded real-world scene. Based on the tracked spatial characteristic of the real-world object and the 3D models of the bounded real-world scene and the real-world object, the 3D simulation system generates a 3D simulation of the bounded real-world scene within which the real-world object is simulated in accordance with the tracked spatial characteristic of the real-world object. Corresponding methods and systems are also disclosed.
    Type: Application
    Filed: October 28, 2019
    Publication date: April 30, 2020
    Inventors: Arthur van Hoff, Daniel Kopeinigg, Philip Lee, Solmaz Hajmohammadi, Sourabh Khire, Simion Venshtain
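    Illustrative sketch (not from the publication): placing an object's 3D model in a simulated scene from a 2D track of its image position, assuming a known homography from the image plane to the scene floor and a fixed object height. Both assumptions stand in for the tracked spatial characteristic described above.
    ```python
    """Hedged sketch: drive a 3D simulation from a 2D track of an object's position."""
    import numpy as np

    def image_to_floor(homography, pixel_xy):
        """Map the object's image footprint (pixels) to floor coordinates (metres)."""
        u, v = pixel_xy
        x, y, w = homography @ np.array([u, v, 1.0])
        return np.array([x / w, y / w])

    def simulate_poses(homography, pixel_track, object_height=1.0):
        """Return an (x, y, z) pose for the object's 3D model in each frame."""
        return [np.append(image_to_floor(homography, p), object_height / 2.0)
                for p in pixel_track]

    if __name__ == "__main__":
        # Toy homography: a top-down camera seeing the floor at 100 pixels per metre.
        H = np.array([[0.01, 0.0, 0.0],
                      [0.0, 0.01, 0.0],
                      [0.0, 0.0, 1.0]])
        track = [(120, 340), (130, 342), (145, 350)]   # tracked pixel positions
        for pose in simulate_poses(H, track):
            print(pose)
    ```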
  • Patent number: 10600155
    Abstract: Some embodiments of the invention include methods and systems for generating virtual reality content based on corrections to stitching errors. The method includes receiving, at a cloud-based server through a network interface, raw virtual reality video data recorded by camera modules of a camera array. The method further includes stitching the raw virtual reality video data, at the cloud-based server, to generate an initial virtual reality render. The method further includes determining that the initial virtual reality render has stitching errors. The method further includes transmitting the initial virtual reality render from the cloud-based server to a user device. The method further includes receiving a correction to the initial virtual reality render from the user device. The method further includes generating virtual reality content based on the correction.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: March 24, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Olaf Brandt, Anatoli Adamov, Arthur Van Hoff
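    Illustrative sketch (not from the patent): the correction round trip in miniature. The seam-offset model, the error check, and the stand-in for the user device are assumptions; the patent only specifies that an initial render with stitching errors goes to a user device and a correction comes back.
    ```python
    """Hedged sketch: regenerate a render after a user-supplied stitching correction."""
    from dataclasses import dataclass

    @dataclass
    class Render:
        seam_offsets: dict            # camera pair -> pixel misalignment at the seam
        has_stitching_error: bool

    def stitch(raw_footage, seam_offsets):
        """Toy stitch: the render has an error whenever any seam is misaligned."""
        return Render(dict(seam_offsets), any(seam_offsets.values()))

    def request_correction(render):
        """Stand-in for the user device: return corrected (zeroed) seam offsets."""
        return {pair: 0 for pair in render.seam_offsets}

    def generate_content(raw_footage, seam_offsets):
        render = stitch(raw_footage, seam_offsets)
        if render.has_stitching_error:
            corrected = request_correction(render)     # round trip to the user device
            render = stitch(raw_footage, corrected)
        return render

    if __name__ == "__main__":
        offsets = {("cam0", "cam1"): 12, ("cam1", "cam2"): 0}
        final = generate_content("raw_footage_dir/", offsets)
        print(final.has_stitching_error)   # False once the correction is applied
    ```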
  • Publication number: 20200014987
    Abstract: Systems and methods are disclosed to receive a request for a virtual reality render project that includes information specifying virtual reality video data to be used to create a virtual reality render; determine a plurality of virtual reality jobs required to create the virtual reality render from the virtual reality video data; determine the availability of a plurality of virtual reality nodes across the network; create a virtual reality render map that specifies a processing sequence of the plurality of virtual reality jobs across the one or more virtual reality nodes to create the virtual reality render, the virtual reality render map being created based on at least the availability of the plurality of virtual reality nodes; and process the plurality of virtual reality jobs at the plurality of virtual reality nodes to create the virtual reality render.
    Type: Application
    Filed: September 16, 2019
    Publication date: January 9, 2020
    Inventors: Anatoly D. Adamov, Arthur Van Hoff, Christopher P. Redmann, Aleksandr O. Ryzhov
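    Illustrative sketch (not from the publication): planning per-segment jobs and assigning them to available nodes as a simple render map. The job split, availability flags, and round-robin assignment are assumptions; the publication does not specify this scheduler.
    ```python
    """Hedged sketch: build a render map assigning VR render jobs to available nodes."""
    from dataclasses import dataclass, field
    from itertools import cycle

    @dataclass
    class RenderJob:
        name: str
        depends_on: list = field(default_factory=list)

    def plan_jobs(video_segments):
        """Split the requested render into per-segment jobs plus a final merge."""
        stitch_jobs = [RenderJob(f"stitch:{seg}") for seg in video_segments]
        merge = RenderJob("merge", depends_on=[j.name for j in stitch_jobs])
        return stitch_jobs + [merge]

    def create_render_map(jobs, nodes):
        """Assign each job, in dependency order, to the next available node."""
        available = [name for name, free in nodes.items() if free]
        if not available:
            raise RuntimeError("no virtual reality nodes are available")
        node_cycle = cycle(available)
        return {job.name: next(node_cycle) for job in jobs}

    if __name__ == "__main__":
        jobs = plan_jobs(["seg_000", "seg_001", "seg_002"])
        nodes = {"node-a": True, "node-b": True, "node-c": False}
        print(create_render_map(jobs, nodes))
        # {'stitch:seg_000': 'node-a', 'stitch:seg_001': 'node-b',
        #  'stitch:seg_002': 'node-a', 'merge': 'node-b'}
    ```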
  • Publication number: 20200007743
    Abstract: The disclosure includes a camera array comprising camera modules, the camera modules comprising a master camera that includes a processor, a memory, a sensor, a lens, a status indicator, and a switch, the switch configured to instruct each of the camera modules to initiate a start operation to start recording video data using the lens and the sensor in the other camera modules and the switch configured to instruct each of the camera modules to initiate a stop operation to stop recording, the status indicator configured to indicate a status of at least one of the camera modules.
    Type: Application
    Filed: September 12, 2019
    Publication date: January 2, 2020
    Inventors: Arthur van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner
  • Publication number: 20190394492
    Abstract: A method includes receiving head-tracking data that describe one or more positions of people while the people are viewing a three-dimensional video. The method further includes generating a probabilistic model of the one or more positions of the people based on the head-tracking data, wherein the probabilistic model identifies a probability of a viewer looking in a particular direction as a function of time. The method further includes generating video segments from the three-dimensional video. The method further includes, for each of the video segments: determining a directional encoding format that projects latitudes and longitudes of locations of a surface of a sphere onto locations on a plane, determining a cost function that identifies a region of interest on the plane based on the probabilistic model, and generating optimal segment parameters that minimize a sum-over position for the region of interest.
    Type: Application
    Filed: September 3, 2019
    Publication date: December 26, 2019
    Inventors: Andrew Walkingshaw, Arthur van Hoff, Daniel Kopeinigg
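    Illustrative sketch (not from the publication): a per-segment view-direction histogram built from head-tracking samples, an equirectangular projection of latitude and longitude onto a plane, and a 'most watched bin' rule for the region of interest. The binning, projection constants, and selection rule stand in for the probabilistic model, directional encoding format, and cost function described above.
    ```python
    """Hedged sketch: head-tracking histogram, equirectangular projection, ROI pick."""
    from collections import Counter, defaultdict

    def build_probabilistic_model(head_samples, segment_seconds=2.0, yaw_bins=8):
        """head_samples: (timestamp_seconds, yaw_degrees) pairs from many viewers.

        Returns {segment_index: {yaw_bin: probability of looking there}}.
        """
        counts = defaultdict(Counter)
        for t, yaw in head_samples:
            segment = int(t // segment_seconds)
            counts[segment][int((yaw % 360.0) / 360.0 * yaw_bins)] += 1
        return {seg: {b: n / sum(c.values()) for b, n in c.items()}
                for seg, c in counts.items()}

    def equirectangular(longitude_deg, latitude_deg, width=3840, height=1920):
        """Project a (longitude, latitude) direction onto plane coordinates."""
        x = (longitude_deg + 180.0) / 360.0 * width
        y = (90.0 - latitude_deg) / 180.0 * height
        return x, y

    def region_of_interest(model, segment, yaw_bins=8):
        """Centre of the most-watched yaw bin for a segment, on the plane."""
        probabilities = model[segment]
        best_bin = max(probabilities, key=probabilities.get)
        center_yaw = (best_bin + 0.5) * 360.0 / yaw_bins          # in [0, 360)
        longitude = ((center_yaw + 180.0) % 360.0) - 180.0        # in [-180, 180)
        return equirectangular(longitude, 0.0), probabilities[best_bin]

    if __name__ == "__main__":
        samples = [(0.5, 10), (0.7, 15), (1.1, 170), (2.4, 200), (2.9, 210), (3.1, 205)]
        model = build_probabilistic_model(samples)
        print(region_of_interest(model, segment=0))   # most-watched bin in segment 0
        print(region_of_interest(model, segment=1))
    ```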
  • Patent number: 10440398
    Abstract: A method includes receiving head-tracking data that describe one or more positions of people while the people are viewing a three-dimensional video. The method further includes generating a probabilistic model of the one or more positions of the people based on the head-tracking data, wherein the probabilistic model identifies a probability of a viewer looking in a particular direction as a function of time. The method further includes generating video segments from the three-dimensional video. The method further includes, for each of the video segments: determining a directional encoding format that projects latitudes and longitudes of locations of a surface of a sphere onto locations on a plane, determining a cost function that identifies a region of interest on the plane based on the probabilistic model, and generating optimal segment parameters that minimize a sum-over position for the region of interest.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: October 8, 2019
    Inventors: Andrew Walkingshaw, Arthur van Hoff, Daniel Kopeinigg
  • Publication number: 20190306434
    Abstract: The disclosure includes a camera array comprising camera modules, the camera modules comprising a master camera that includes a processor, a memory, a sensor, a lens, a status indicator, and a switch, the switch configured to instruct each of the camera modules to initiate a start operation to start recording video data using the lens and the sensor in the other camera modules and the switch configured to instruct each of the camera modules to initiate a stop operation to stop recording, the status indicator configured to indicate a status of at least one of the camera modules. Lens distortion effects may be removed from the frames described by the video data. The camera modules of the camera array are configured to provide a 3× field of view overlap.
    Type: Application
    Filed: June 17, 2019
    Publication date: October 3, 2019
    Inventors: Thomas M. Annau, Arthur van Hoff, Jens Christensen
  • Patent number: D865846
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: November 5, 2019
    Inventors: Arthur van Hoff, Thomas M. Annau, Jens Christensen, Koji Gardiner, Punit Govenji, James Dunn