Patents by Inventor Crusoe Xiaodong MAO

Crusoe Xiaodong MAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10623735
    Abstract: A method and system for layer based encoding of a 360 degrees video is provided. The method includes receiving, by a server, an input video. The input video includes multiple groups of pictures (GOPs). Each GOP starts from a major anchor frame of the input video and includes frames until the next major anchor frame. The method also includes generating a first layer. The first layer includes one encoded frame per GOP. The method further includes generating a first sub-layer. The first sub-layer includes encoded frames of multiple mini-GOPs and reconstructed frames of encoded frames of the first layer. Each mini-GOP includes frames between two major anchor frames. Furthermore, the method includes outputting an encoded video including the first layer and the first sub-layer.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: April 14, 2020
    Assignee: OrbViu Inc.
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
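    Illustrative sketch: the abstract describes splitting the input into GOPs at major anchor frames, coding one anchor frame per GOP into a first layer, and coding the remaining mini-GOP frames into a first sub-layer against reconstructed anchors. The following is a minimal Python sketch of that layering only; encode_frame() and reconstruct() are hypothetical stand-ins, not the patent's actual encoder.

      def encode_frame(frame, reference=None):
          # Stand-in for a real encoder: record what would be coded and against what.
          return {"data": frame, "reference": reference}

      def reconstruct(encoded_frame):
          # Stand-in for decoding an encoded frame back into a reference picture.
          return encoded_frame["data"]

      def layer_based_encode(frames, anchor_indices):
          """Split `frames` into GOPs at major anchor frames and build two layers."""
          first_layer = []      # one encoded (anchor) frame per GOP
          first_sub_layer = []  # reconstructed anchors plus encoded mini-GOP frames
          bounds = anchor_indices + [len(frames)]
          for start, end in zip(bounds, bounds[1:]):
              encoded_anchor = encode_frame(frames[start])
              first_layer.append(encoded_anchor)
              reference = reconstruct(encoded_anchor)
              first_sub_layer.append(reference)
              # Frames between two major anchors form a mini-GOP, predicted from
              # the reconstructed anchor rather than the raw input frame.
              for frame in frames[start + 1:end]:
                  first_sub_layer.append(encode_frame(frame, reference=reference))
          return first_layer, first_sub_layer

      # Example: 9 frames with major anchor frames at indices 0, 4, and 8.
      layer1, sublayer1 = layer_based_encode(list(range(9)), [0, 4, 8])
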
  • Patent number: 10616551
    Abstract: A method and system for constructing a view from multiple video streams is provided. The method includes receiving a view independent stream. The method further includes selecting a first view dependent stream, wherein the view independent stream and the first view dependent stream have at least one different geometry. The method also includes generating end user views corresponding to the view independent stream and the first view dependent stream. Further, the method includes blending the end user views to generate a view for display.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 7, 2020
    Assignee: OrbViu Inc.
    Inventors: Brian Michael Christopher Watson, Crusoe Xiaodong Mao, Jiandong Shen, Frederick William Umminger, III
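    Illustrative sketch: the abstract does not specify how the two end user views are blended; the Python sketch below assumes a simple per-pixel alpha blend, and blend_views() and the alpha weights are purely illustrative.

      import numpy as np

      def blend_views(view_independent, view_dependent, alpha):
          """Per-pixel blend of two end user views rendered from different streams.

          `alpha` weights the view-dependent rendering; where it is 0 the
          view-independent rendering is used unchanged.
          """
          return (1.0 - alpha) * view_independent + alpha * view_dependent

      # Example with small random "renderings" of the same output resolution.
      vi = np.random.rand(4, 4, 3)     # rendered from the view independent stream
      vd = np.random.rand(4, 4, 3)     # rendered from the first view dependent stream
      alpha = np.full((4, 4, 1), 0.7)  # assumed per-pixel blend weights
      display_view = blend_views(vi, vd, alpha)
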
  • Patent number: 10425643
    Abstract: A method and system for view optimization of a 360 degrees video is provided. The method includes generating a two-dimensional video frame from the 360 degrees video. Macroblocks are generated for the two-dimensional video frame. A foveated region of interest for the two-dimensional video frame is defined based on a given view orientation. DCT (Discrete Cosine Transform) coefficients are generated for the macroblocks. View adaptive DCT domain filtering is then performed on the DCT coefficients using the foveated region of interest. A quantization offset is calculated for the DCT coefficients using the foveated region of interest. The DCT coefficients are quantized using the quantization offset to generate an encoded two-dimensional video frame for the view orientation. A new view orientation is then set as the given view orientation, and the steps of generating, performing, calculating, and quantizing are performed for each view orientation and each video frame to generate a view optimized video.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: September 24, 2019
    Assignee: OrbViu Inc.
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
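    Illustrative sketch: one step the abstract names is calculating a quantization offset from the foveated region of interest and quantizing the DCT coefficients with it. The Python sketch below assumes a linear per-macroblock offset (base_q and max_offset are made-up parameters) and omits the DCT domain filtering step.

      import numpy as np

      def quantize_with_foveation(dct_blocks, fovea_mask, base_q=16, max_offset=32):
          """Quantize per-macroblock DCT coefficients more coarsely outside the fovea.

          dct_blocks: (num_blocks, 8, 8) DCT coefficients, one block per macroblock.
          fovea_mask: (num_blocks,) values in [0, 1]; 1 means inside the foveated ROI.
          """
          # Larger quantization step (coarser coding) for blocks far from the fovea.
          q_step = base_q + (1.0 - fovea_mask) * max_offset
          q_step = q_step[:, None, None]
          return np.round(dct_blocks / q_step) * q_step

      # Example: 4 macroblocks, the first two inside the foveated region of interest.
      blocks = np.random.randn(4, 8, 8) * 100
      mask = np.array([1.0, 1.0, 0.2, 0.0])
      quantized = quantize_with_foveation(blocks, mask)
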
  • Patent number: 10406433
    Abstract: A system is provided, including the following: an input device, the input device including a light emitting diode (LED) array; an image capture device configured to capture images of an interactive environment in which the input device is disposed, the image capture device configured to generate captured image data that is processed to determine a movement of the input device in the interactive environment; wherein a gearing is established that adjusts an amount by which the movement of the input device is mapped to movement of an image that is rendered to a display, the gearing changing to different settings during the movement of the image as the movement of the image is rendered to the display.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: September 10, 2019
    Assignee: Sony Interactive Entertainment America LLC
    Inventors: Gary M. Zalewski, Richard Marks, Crusoe Xiaodong Mao
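    Illustrative sketch: the gearing described above scales how far the rendered image moves for a given movement of the tracked input device, and the gearing can change to different settings during the movement itself. A minimal Python sketch; the numbers and the gearing schedule are purely illustrative assumptions.

      def apply_gearing(device_deltas, gearing_schedule):
          """Map input-device movement to on-screen image movement with a gearing
          factor that changes during the movement.

          device_deltas: per-step movement of the input device (e.g. tracked
              LED displacement per frame).
          gearing_schedule: gearing factor to apply at each step.
          """
          image_position = 0.0
          positions = []
          for delta, gearing in zip(device_deltas, gearing_schedule):
              image_position += delta * gearing   # geared mapping of the movement
              positions.append(image_position)
          return positions

      # Example: the gearing ramps from 0.5 to 2.0 while the same physical motion
      # continues, so later portions of the motion move the image farther.
      print(apply_gearing([1, 1, 1, 1], [0.5, 1.0, 1.5, 2.0]))
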
  • Patent number: 10332242
    Abstract: Methods and a system for reconstructing 360-degree video are disclosed. A video sequence V1 including a plurality of frames associated with spherical content at a first frame rate and a video sequence V2 including a plurality of frames associated with a predefined viewport at a second frame rate are received by a processor. The first frame rate is lower than the second frame rate. An interpolated video sequence V1′ of the video sequence V1 is generated by creating a plurality of intermediate frames between a set of consecutive frames of the plurality of frames of the sequence V1 corresponding to the second frame rate of the video sequence V2. A pixel based blending of each intermediate frame of the plurality of intermediate frames of the sequence V1′ with a corresponding frame of the plurality of frames of the sequence V2 is performed to generate a fused video sequence Vm for displaying.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: June 25, 2019
    Assignee: OrbViu Inc.
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
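    Illustrative sketch: the abstract leaves the interpolation and blending methods unspecified; the Python sketch below assumes simple linear frame interpolation of V1 up to V2's frame count and a static per-pixel blend mask.

      import numpy as np

      def interpolate_and_fuse(v1, v2, blend_mask):
          """Upsample V1 to V2's frame rate by linear interpolation, then blend.

          v1: (n1, H, W, C) spherical-content frames at the lower frame rate.
          v2: (n2, H, W, C) viewport frames at the higher frame rate.
          blend_mask: (H, W, 1) per-pixel weight given to the viewport frames.
          """
          n1, n2 = len(v1), len(v2)
          # V1': intermediate frames created between consecutive V1 frames so that
          # the interpolated sequence matches V2's frame count.
          t = np.linspace(0, n1 - 1, n2)
          lo = np.floor(t).astype(int)
          hi = np.minimum(lo + 1, n1 - 1)
          frac = (t - lo)[:, None, None, None]
          v1_prime = (1 - frac) * v1[lo] + frac * v1[hi]
          # Vm: per-pixel blending of each interpolated frame with its V2 frame.
          return (1 - blend_mask) * v1_prime + blend_mask * v2

      # Example: 3 low-rate frames fused with 6 high-rate viewport frames.
      v1 = np.random.rand(3, 4, 8, 3)
      v2 = np.random.rand(6, 4, 8, 3)
      vm = interpolate_and_fuse(v1, v2, np.full((4, 8, 1), 0.8))
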
  • Publication number: 20190038977
    Abstract: A system is provided, including the following: an input device, the input device including a light emitting diode (LED) array; an image capture device configured to capture images of an interactive environment in which the input device is disposed, the image capture device configured to generate captured image data that is processed to determine a movement of the input device in the interactive environment; wherein a gearing is established that adjusts an amount by which the movement of the input device is mapped to movement of an image that is rendered to a display, the gearing changing to different settings during the movement of the image as the movement of the image is rendered to the display.
    Type: Application
    Filed: September 28, 2018
    Publication date: February 7, 2019
    Inventors: Gary M. Zalewski, Richard Marks, Crusoe Xiaodong Mao
  • Patent number: 10099130
    Abstract: A method is provided, including: receiving inertial data from an input device, the inertial data being generated from one or more inertial sensors of the input device; receiving captured image data from an image capture device configured to capture images of an interactive environment in which the input device is disposed, the input device having a light emitting diode (LED) array that generates infrared light; processing the inertial data and the captured image data to determine a movement of the input device in the interactive environment; establishing a gearing that adjusts an amount by which the movement of the input device is mapped to movement of an image that is rendered to a display; changing the gearing to different settings during the movement of the image as the movement of the image is rendered to the display.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: October 16, 2018
    Assignee: Sony Interactive Entertainment America LLC
    Inventors: Gary M. Zalewski, Richard Marks, Crusoe Xiaodong Mao
  • Publication number: 20180288558
    Abstract: A method and system for generating view adaptive spatial audio is disclosed. The method includes facilitating receipt of spatial audio. The spatial audio comprises a plurality of audio adaptation sets, each audio adaptation set associated with a region among a plurality of regions, each audio adaptation set comprising one or more audio signals encoded at one or more bit rates, each of the one or more audio signals segmented into a plurality of audio segments. The method includes detecting a change in region from a source region to a destination region associated with a change in a head orientation of a user. The source region and the destination region are from among the plurality of regions. Further, the method includes facilitating a playback of the spatial audio by, at least in part, performing crossfading between at least one audio segment of each of the source region and the destination region.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 4, 2018
    Inventors: Frederick William UMMINGER, III, Brian Michael Christopher WATSON, Crusoe Xiaodong MAO, Jiandong SHEN
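    Illustrative sketch: the crossfade itself is not detailed in the abstract; the Python sketch below assumes a linear crossfade over a fixed number of samples between one source-region audio segment and one destination-region audio segment.

      import numpy as np

      def crossfade_segments(source_segment, destination_segment, fade_length):
          """Crossfade from the source region's audio to the destination region's.

          Both segments are mono sample arrays of equal length; the fade is linear
          over the first `fade_length` samples, after which only the destination
          region's audio is heard.
          """
          out = destination_segment.copy()
          ramp = np.linspace(0.0, 1.0, fade_length)
          out[:fade_length] = (1.0 - ramp) * source_segment[:fade_length] \
                              + ramp * destination_segment[:fade_length]
          return out

      # Example: the head orientation moves from the source region to the
      # destination region; the first 480 samples (10 ms at 48 kHz) are crossfaded.
      src = np.random.randn(4800)
      dst = np.random.randn(4800)
      fused = crossfade_segments(src, dst, 480)
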
  • Publication number: 20180227579
    Abstract: A method and system for view optimization of a 360 degrees video is provided. The method includes generating a two-dimensional video frame from the 360 degrees video. Macroblocks are generated for the two-dimensional video frame. A foveated region of interest for the two-dimensional video frame is defined based on a given view orientation. DCT (Discrete Cosine Transform) coefficients are generated for the macroblocks. View adaptive DCT domain filtering is then performed on the DCT coefficients using the foveated region of interest. A quantization offset is calculated for the DCT coefficients using the foveated region of interest. The DCT coefficients are quantized using the quantization offset to generate an encoded two-dimensional video frame for the view orientation. A new view orientation is then set as the given view orientation, and the steps of generating, performing, calculating, and quantizing are performed for each view orientation and each video frame to generate a view optimized video.
    Type: Application
    Filed: December 18, 2017
    Publication date: August 9, 2018
    Inventors: Jiandong SHEN, Crusoe Xiaodong MAO, Brian Michael Christopher WATSON, Frederick William UMMINGER, III
  • Publication number: 20180220120
    Abstract: A method and system for constructing a view from multiple video streams is provided. The method includes receiving a view independent stream. The method further includes selecting a first view dependent stream, wherein the view independent stream and the first view dependent stream have at least one different geometry. The method also includes generating end user views corresponding to the view independent stream and the first view dependent stream. Further, the method includes blending the end user views to generate a view for display.
    Type: Application
    Filed: January 26, 2018
    Publication date: August 2, 2018
    Inventors: Brian Michael Christopher WATSON, Crusoe Xiaodong MAO, Jiandong SHEN, Frederick William UMMINGER, III
  • Publication number: 20180218484
    Abstract: Methods and a system for reconstructing 360-degree video are disclosed. A video sequence V1 including a plurality of frames associated with spherical content at a first frame rate and a video sequence V2 including a plurality of frames associated with a predefined viewport at a second frame rate are received by a processor. The first frame rate is lower than the second frame rate. An interpolated video sequence V1′ of the video sequence V1 is generated by creating a plurality of intermediate frames between a set of consecutive frames of the plurality of frames of the sequence V1 corresponding to the second frame rate of the video sequence V2. A pixel based blending of each intermediate frame of the plurality of intermediate frames of the sequence V1′ with a corresponding frame of the plurality of frames of the sequence V2 is performed to generate a fused video sequence Vm for displaying.
    Type: Application
    Filed: January 18, 2018
    Publication date: August 2, 2018
    Inventors: Jiandong SHEN, Crusoe Xiaodong MAO, Brian Michael Christopher WATSON, Frederick William UMMINGER, III
  • Publication number: 20180213225
    Abstract: A method and system for layer based encoding of a 360 degrees video is provided. The method includes receiving, by a server, an input video. The input video includes multiple groups of pictures (GOPs). Each GOP starts from a major anchor frame of the input video and includes frames until the next major anchor frame. The method also includes generating a first layer. The first layer includes one encoded frame per GOP. The method further includes generating a first sub-layer. The first sub-layer includes encoded frames of multiple mini-GOPs and reconstructed frames of encoded frames of the first layer. Each mini-GOP includes frames between two major anchor frames. Furthermore, the method includes outputting an encoded video including the first layer and the first sub-layer.
    Type: Application
    Filed: January 18, 2018
    Publication date: July 26, 2018
    Inventors: Jiandong SHEN, Crusoe Xiaodong MAO, Brian Michael Christopher WATSON, Frederick William UMMINGER, III