Patents by Inventor Jiandong Shen

Jiandong Shen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11843776
    Abstract: In one implementation, a method of encoding an image is performed at a device including one or more processors and non-transitory memory. The method includes determining a category of a spatial portion of an image based on a relation between a plurality of thresholds associated with a plurality of quantization scaling parameters and a bit rate of the spatial portion of the image at the plurality of quantization scaling parameters. The method includes quantizing the spatial portion of the image based on the categorization.
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: December 12, 2023
    Inventors: Krishnakanth Rapaka, Munehiro Nakazato, Jiandong Shen, Ganesh G. Yadav, Sorin Constantin Cismas, Jim C. Chou, Hao Pan
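
As a rough illustration of the category-then-quantize idea described in the abstract above (a hypothetical sketch, not Apple's claimed implementation; the function names, the threshold rule, and the step-scaling formula are all assumptions):

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def categorize_tile(bitrate_at_qs, thresholds):
    """Category = number of candidate quantization scaling parameters at
    which the tile's estimated bit rate exceeds that parameter's threshold."""
    return sum(bits > thr for bits, thr in zip(bitrate_at_qs, thresholds))

def quantize_tile(coeffs, base_step, category):
    """Quantize transform coefficients with a step scaled by the category
    (the 0.5-per-category scaling rule here is purely illustrative)."""
    step = base_step * (1 + 0.5 * category)
    return np.round(coeffs / step).astype(np.int32)

# Example: one 8x8 tile of transform coefficients.
rng = np.random.default_rng(0)
coeffs = rng.normal(scale=40.0, size=(8, 8))
cat = categorize_tile(bitrate_at_qs=[900, 620, 410], thresholds=[800, 500, 450])
print(cat, quantize_tile(coeffs, base_step=8.0, category=cat)[0, :4])
```
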
  • Publication number: 20220329805
    Abstract: In one implementation, a method of encoding an image is performed at a device including one or more processors and non-transitory memory. The method includes determining a category of a spatial portion of an image based on a relation between a plurality of thresholds associated with a plurality of quantization scaling parameters and a bit rate of the spatial portion of the image at the plurality of quantization scaling parameters. The method includes quantizing the spatial portion of the image based on the categorization.
    Type: Application
    Filed: June 17, 2022
    Publication date: October 13, 2022
    Inventors: Krishnakanth Rapaka, Munehiro Nakazato, Jiandong Shen, Ganesh G. Yadav, Sorin Constantin Cismas, Jim C. Chou, Hao Pan
  • Patent number: 11399180
    Abstract: In one implementation, a method of encoding an image is performed at a device including one or more processors and non-transitory memory. The method includes receiving a first image comprising a plurality of pixels having a respective plurality of pixel locations and a respective plurality of pixel values. The method includes applying a frequency transform to a first spatial portion of the first image to generate a plurality of first frequency coefficients respectively associated with a plurality of spatial frequencies and applying the frequency transform to a second spatial portion of the first image to generate a plurality of second frequency coefficients respectively associated with the plurality of spatial frequencies.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: July 26, 2022
    Assignee: Apple Inc.
    Inventors: Krishnakanth Rapaka, Munehiro Nakazato, Jiandong Shen, Ganesh G. Yadav, Sorin Constantin Cismas, Jim C. Chou, Hao Pan
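
A minimal sketch of applying the same frequency transform to two spatial portions of a frame, as the abstract above describes; it assumes NumPy and SciPy are available and uses SciPy's `dctn` as a stand-in for whatever transform the patent actually uses, with all names illustrative:

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np
from scipy.fft import dctn  # type-II DCT applied over both axes

def block_coefficients(image, y, x, size=8):
    """Apply a 2-D DCT to one size x size spatial portion of the image."""
    block = image[y:y + size, x:x + size].astype(np.float64)
    return dctn(block, norm="ortho")

# Two portions of the same frame yield coefficient arrays whose entries at
# the same (u, v) index correspond to the same spatial frequency, which is
# what lets later stages relate the two sets of coefficients.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64))
c1 = block_coefficients(frame, 0, 0)
c2 = block_coefficients(frame, 0, 8)
print(c1.shape, c2.shape, round(float(c1[0, 0]), 1), round(float(c2[0, 0]), 1))
```
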
  • Patent number: 10623735
    Abstract: A method and system for layer-based encoding of a 360-degree video are provided. The method includes receiving, by a server, an input video. The input video includes multiple groups of pictures (GOPs). Each GOP starts from a major anchor frame of the input video and includes the frames up to the next major anchor frame. The method also includes generating a first layer. The first layer includes one encoded frame per GOP. The method further includes generating a first sub-layer. The first sub-layer includes encoded frames of multiple mini-GOPs and reconstructed frames of encoded frames of the first layer. Each mini-GOP includes the frames between two major anchor frames. Furthermore, the method includes outputting an encoded video including the first layer and the first sub-layer.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: April 14, 2020
    Assignee: OrbViu Inc.
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
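
The GOP/mini-GOP layering in the abstract above can be sketched as a simple index partition (a hypothetical illustration only; the patented method encodes frames, whereas this sketch just shows which frame indices land in the first layer versus the first sub-layer):

```python
# Illustrative sketch only; not the patented implementation.
def split_layers(num_frames, gop_size, mini_gop_size):
    """Partition frame indices into a first layer and a first sub-layer.

    First layer     : one frame per GOP (the major anchor frames).
    First sub-layer : the remaining frames of each GOP, grouped into
                      mini-GOPs between consecutive anchors.
    """
    anchors = list(range(0, num_frames, gop_size))   # first layer
    sub_layer = []                                   # list of mini-GOPs
    for start in anchors:
        gop = list(range(start + 1, min(start + gop_size, num_frames)))
        for i in range(0, len(gop), mini_gop_size):
            sub_layer.append(gop[i:i + mini_gop_size])
    return anchors, sub_layer

anchors, mini_gops = split_layers(num_frames=24, gop_size=8, mini_gop_size=4)
print(anchors)     # [0, 8, 16]
print(mini_gops)   # [[1, 2, 3, 4], [5, 6, 7], [9, ...], ...]
```
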
  • Patent number: 10616551
    Abstract: A method and system for constructing a view from multiple video streams are provided. The method includes receiving a view independent stream. The method further includes selecting a first view dependent stream, wherein the view independent stream and the first view dependent stream differ in at least one geometry. The method also includes generating end user views corresponding to the view independent stream and the first view dependent stream. Further, the method includes blending the end user views to generate a view for display.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 7, 2020
    Assignee: OrbViu Inc.
    Inventors: Brian Michael Christopher Watson, Crusoe Xiaodong Mao, Jiandong Shen, Frederick William Umminger, III
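
A minimal, hypothetical sketch of the final blending step described above, assuming the two end-user views have already been rendered to the same geometry and that a simple per-pixel weight stands in for whatever blending rule the patent actually claims:

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def blend_views(view_independent, view_dependent, alpha):
    """Blend two rendered end-user views into one display view.

    alpha is a scalar or per-pixel weight giving the contribution of the
    view-dependent stream, e.g. larger where that stream carries detail.
    """
    vi = view_independent.astype(np.float32)
    vd = view_dependent.astype(np.float32)
    out = (1.0 - alpha) * vi + alpha * vd
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
vi = rng.integers(0, 256, size=(4, 4, 3))
vd = rng.integers(0, 256, size=(4, 4, 3))
alpha = np.full((4, 4, 1), 0.75, dtype=np.float32)  # favour the dependent view
print(blend_views(vi, vd, alpha).shape)
```
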
  • Patent number: 10425643
    Abstract: A method and system for view optimization of a 360-degree video are provided. The method includes generating a two-dimensional video frame from the 360-degree video. Macroblocks are generated for the two-dimensional video frame. A foveated region of interest for the two-dimensional video frame is defined based on a given view orientation. DCT (Discrete Cosine Transform) coefficients are generated for the macroblocks. View adaptive DCT domain filtering is then performed on the DCT coefficients using the foveated region of interest. A quantization offset is calculated for the DCT coefficients using the foveated region of interest. The DCT coefficients are quantized using the quantization offset to generate an encoded two-dimensional video frame for the view orientation. A new view orientation is then set as the given view orientation, and the steps of generating, performing, calculating, and quantizing are repeated for each view orientation and each video frame to generate the view optimized video.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: September 24, 2019
    Assignee: OrbViu Inc.
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
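
To make the foveation idea above concrete, here is a hypothetical sketch: a per-macroblock weight is derived from the distance to the gaze point, high-frequency DCT coefficients are damped outside the region of interest, and a quantization offset is returned. The weight formula, damping rule, and offset scale are illustrative assumptions, not the patented math:

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def foveation_weight(mb_center, gaze, radius):
    """1.0 inside the foveated region of interest, falling off outside it."""
    d = np.hypot(mb_center[0] - gaze[0], mb_center[1] - gaze[1])
    return float(np.clip(1.0 - (d - radius) / radius, 0.0, 1.0))

def filter_and_offset(dct_block, weight, max_qp_offset=8):
    """Attenuate high frequencies and derive a quantization offset.

    Outside the foveated ROI (small weight) high-frequency coefficients are
    damped and a larger positive QP offset is returned, so the block spends
    fewer bits; inside the ROI the block is left essentially untouched.
    """
    n = dct_block.shape[0]
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    damping = 1.0 - (1.0 - weight) * (u + v) / (2 * (n - 1))  # keep DC, damp HF
    qp_offset = int(round((1.0 - weight) * max_qp_offset))
    return dct_block * damping, qp_offset

rng = np.random.default_rng(3)
block = rng.normal(scale=30.0, size=(8, 8))
w = foveation_weight(mb_center=(400, 300), gaze=(256, 256), radius=200)
filtered, offset = filter_and_offset(block, w)
print(round(w, 2), offset)
```
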
  • Patent number: 10332242
    Abstract: Methods and systems for reconstructing 360-degree video are disclosed. A video sequence V1 including a plurality of frames associated with spherical content at a first frame rate and a video sequence V2 including a plurality of frames associated with a predefined viewport at a second frame rate are received by a processor. The first frame rate is lower than the second frame rate. An interpolated video sequence V1′ of the video sequence V1 is generated by creating a plurality of intermediate frames between consecutive frames of the sequence V1, corresponding to the second frame rate of the video sequence V2. A pixel-based blending of each intermediate frame of the sequence V1′ with a corresponding frame of the sequence V2 is performed to generate a fused video sequence Vm for display.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: June 25, 2019
    Assignee: OrbViu Inc.
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
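
A hypothetical sketch of the interpolation-plus-blending described above, using plain linear interpolation for the intermediate V1′ frames and a binary viewport mask for the pixel-based blend (a real implementation would use motion-compensated interpolation and a soft mask):

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def interpolate_v1(f0, f1, num_intermediate):
    """Create intermediate frames between two consecutive V1 frames."""
    frames = []
    for k in range(1, num_intermediate + 1):
        t = k / (num_intermediate + 1)
        frames.append(((1 - t) * f0 + t * f1).astype(f0.dtype))
    return frames

def blend(v1_frame, v2_frame, viewport_mask):
    """Pixel-based blend: take V2 inside the viewport, interpolated V1 outside."""
    mask = viewport_mask.astype(np.float32)[..., None]
    out = mask * v2_frame + (1.0 - mask) * v1_frame
    return out.astype(np.uint8)

rng = np.random.default_rng(4)
f0 = rng.integers(0, 256, size=(4, 4, 3)).astype(np.float32)
f1 = rng.integers(0, 256, size=(4, 4, 3)).astype(np.float32)
v2 = rng.integers(0, 256, size=(4, 4, 3)).astype(np.float32)
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1          # predefined viewport
mids = interpolate_v1(f0, f1, num_intermediate=2)
print(blend(mids[0], v2, mask).shape)
```
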
  • Publication number: 20180288558
    Abstract: A method and system for generating view adaptive spatial audio are disclosed. The method includes facilitating receipt of spatial audio. The spatial audio comprises a plurality of audio adaptation sets, each audio adaptation set associated with a region among a plurality of regions, each audio adaptation set comprising one or more audio signals encoded at one or more bit rates, and each of the one or more audio signals segmented into a plurality of audio segments. The method includes detecting a change in region from a source region to a destination region associated with a change in the head orientation of a user. The source region and the destination region are from among the plurality of regions. Further, the method includes facilitating playback of the spatial audio by, at least in part, performing crossfading between at least one audio segment each from the source region and the destination region.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 4, 2018
    Inventors: Frederick William Umminger, III, Brian Michael Christopher Watson, Crusoe Xiaodong Mao, Jiandong Shen
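
The crossfade step above can be sketched with an equal-power fade between one segment from the source region and one from the destination region; the fade shape and length are illustrative assumptions:

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def crossfade(src_segment, dst_segment, fade_samples):
    """Equal-power crossfade from the source-region segment to the
    destination-region segment when the head orientation changes."""
    n = min(fade_samples, len(src_segment), len(dst_segment))
    t = np.linspace(0.0, 1.0, n)
    fade_out = np.cos(0.5 * np.pi * t)
    fade_in = np.sin(0.5 * np.pi * t)
    mixed = src_segment[:n] * fade_out + dst_segment[:n] * fade_in
    # After the fade, continue with the destination region's segment.
    return np.concatenate([mixed, dst_segment[n:]])

sr = 48_000
t = np.arange(sr) / sr
src = np.sin(2 * np.pi * 440 * t)   # stand-in for the source-region audio
dst = np.sin(2 * np.pi * 660 * t)   # stand-in for the destination-region audio
out = crossfade(src, dst, fade_samples=sr // 10)
print(out.shape)
```
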
  • Publication number: 20180227579
    Abstract: A method and system for view optimization of a 360-degree video are provided. The method includes generating a two-dimensional video frame from the 360-degree video. Macroblocks are generated for the two-dimensional video frame. A foveated region of interest for the two-dimensional video frame is defined based on a given view orientation. DCT (Discrete Cosine Transform) coefficients are generated for the macroblocks. View adaptive DCT domain filtering is then performed on the DCT coefficients using the foveated region of interest. A quantization offset is calculated for the DCT coefficients using the foveated region of interest. The DCT coefficients are quantized using the quantization offset to generate an encoded two-dimensional video frame for the view orientation. A new view orientation is then set as the given view orientation, and the steps of generating, performing, calculating, and quantizing are repeated for each view orientation and each video frame to generate the view optimized video.
    Type: Application
    Filed: December 18, 2017
    Publication date: August 9, 2018
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
  • Publication number: 20180220120
    Abstract: A method and system for constructing a view from multiple video streams are provided. The method includes receiving a view independent stream. The method further includes selecting a first view dependent stream, wherein the view independent stream and the first view dependent stream differ in at least one geometry. The method also includes generating end user views corresponding to the view independent stream and the first view dependent stream. Further, the method includes blending the end user views to generate a view for display.
    Type: Application
    Filed: January 26, 2018
    Publication date: August 2, 2018
    Inventors: Brian Michael Christopher Watson, Crusoe Xiaodong Mao, Jiandong Shen, Frederick William Umminger, III
  • Publication number: 20180218484
    Abstract: Methods and systems for reconstructing 360-degree video are disclosed. A video sequence V1 including a plurality of frames associated with spherical content at a first frame rate and a video sequence V2 including a plurality of frames associated with a predefined viewport at a second frame rate are received by a processor. The first frame rate is lower than the second frame rate. An interpolated video sequence V1′ of the video sequence V1 is generated by creating a plurality of intermediate frames between consecutive frames of the sequence V1, corresponding to the second frame rate of the video sequence V2. A pixel-based blending of each intermediate frame of the sequence V1′ with a corresponding frame of the sequence V2 is performed to generate a fused video sequence Vm for display.
    Type: Application
    Filed: January 18, 2018
    Publication date: August 2, 2018
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
  • Publication number: 20180213225
    Abstract: A method and system for layer-based encoding of a 360-degree video are provided. The method includes receiving, by a server, an input video. The input video includes multiple groups of pictures (GOPs). Each GOP starts from a major anchor frame of the input video and includes the frames up to the next major anchor frame. The method also includes generating a first layer. The first layer includes one encoded frame per GOP. The method further includes generating a first sub-layer. The first sub-layer includes encoded frames of multiple mini-GOPs and reconstructed frames of encoded frames of the first layer. Each mini-GOP includes the frames between two major anchor frames. Furthermore, the method includes outputting an encoded video including the first layer and the first sub-layer.
    Type: Application
    Filed: January 18, 2018
    Publication date: July 26, 2018
    Inventors: Jiandong Shen, Crusoe Xiaodong Mao, Brian Michael Christopher Watson, Frederick William Umminger, III
  • Patent number: 7536643
    Abstract: The invention described herein provides a video analysis tool to assist a computer programmer working on a program that affects video data. The tool may be integrated with program code. When invoked, the tool obtains statistical information related to the video data and information corresponding to functions of the code. The code may be responsible for encoding, transcoding, and/or decoding video data, for example. The tool is integrated with a video decoder to allow the information to be output with the raw video data. The present invention is particularly useful for developing, debugging, and analyzing programs responsible for encoding, transcoding, and/or decoding video data, such as video data compressed according to an MPEG standard.
    Type: Grant
    Filed: August 17, 2005
    Date of Patent: May 19, 2009
    Assignee: Cisco Technology, Inc.
    Inventors: Shan Zhu, Ji Zhang, Jiandong Shen, Hain-Ching Liu
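
As a loose, hypothetical analogue of the tool described above: a decoder wrapper that yields each raw frame together with simple per-frame statistics, so the statistics can be inspected alongside the pixel data (the statistics chosen here are placeholders, not the ones the patent enumerates):

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def decode_with_stats(decode_frame, bitstream_frames):
    """Wrap a decoder so each raw frame is emitted with statistics.

    decode_frame is any callable returning a raw frame (ndarray) for one
    coded frame; the wrapper attaches simple per-frame statistics so a
    developer can examine them next to the pixel data.
    """
    for idx, coded in enumerate(bitstream_frames):
        raw = decode_frame(coded)
        stats = {
            "frame": idx,
            "coded_bytes": len(coded),
            "mean_luma": float(raw.mean()),
            "min": int(raw.min()),
            "max": int(raw.max()),
        }
        yield raw, stats

# Toy "decoder": each coded frame is just 16 bytes reshaped into a 4x4 frame.
fake_frames = [np.random.default_rng(i).integers(0, 256, 16, dtype=np.uint8).tobytes()
               for i in range(3)]
decode = lambda b: np.frombuffer(b, dtype=np.uint8).reshape(4, 4)
for raw, stats in decode_with_stats(decode, fake_frames):
    print(stats)
```
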
  • Patent number: 7295614
    Abstract: The present invention relates to systems and methods for compressing, decompressing, and transmitting video data. The systems and methods include pixel-by-pixel motion estimation and compensation and efficient quantization of residual errors. The present invention applies block estimation of the residual error produced by motion compensation. The block estimation is applied by a local decoder to generate synthesized blocks of video data. The block estimation is approximated using a set of predetermined motion estimation errors that are stored as error vectors in a codebook. The codebook is included in an encoder of the present invention and converts an error vector for each block to an error vector index. The error vector index, which introduces minimal transmission burden, is then sent from the encoder to a target decoder. The receiving decoder also includes a copy of the codebook and converts the error vector index back to its associated error vector for reconstruction of the video data.
    Type: Grant
    Filed: August 31, 2001
    Date of Patent: November 13, 2007
    Assignee: Cisco Technology, Inc.
    Inventors: Jiandong Shen, Wai-Yip Chan
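
A minimal sketch of the codebook idea above: the encoder maps each residual block to the index of its nearest predetermined error vector, and the decoder reconstructs the residual from its own copy of the codebook. The codebook size, block size, and distance metric are illustrative assumptions:

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def encode_residual(residual_block, codebook):
    """Map a motion-compensation residual block to the index of the nearest
    codebook error vector (the only thing that is transmitted)."""
    r = residual_block.ravel()
    distances = np.sum((codebook - r) ** 2, axis=1)
    return int(np.argmin(distances))

def decode_residual(index, codebook, shape):
    """Recover the approximated residual from the shared codebook."""
    return codebook[index].reshape(shape)

rng = np.random.default_rng(5)
codebook = rng.normal(scale=4.0, size=(256, 16))   # 256 predetermined error vectors
residual = rng.normal(scale=4.0, size=(4, 4))      # residual after motion compensation
idx = encode_residual(residual, codebook)          # a small index instead of 16 samples
approx = decode_residual(idx, codebook, residual.shape)
print(idx, float(np.abs(residual - approx).mean()))
```
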
  • Publication number: 20050278635
    Abstract: The invention described herein provides a video analysis tool to assist a computer programmer working on a program that affects video data. The tool may be integrated with program code. When invoked, the tool obtains statistical information related to the video data and information corresponding to functions of the code. The code may be responsible for encoding, transcoding, and/or decoding video data, for example. The tool is integrated with a video decoder to allow the information to be output with the raw video data. The present invention is particularly useful for developing, debugging, and analyzing programs responsible for encoding, transcoding, and/or decoding video data, such as video data compressed according to an MPEG standard.
    Type: Application
    Filed: August 17, 2005
    Publication date: December 15, 2005
    Applicant: Cisco Technology, Inc., a corporation of California
    Inventors: Shan Zhu, Ji Zhang, Jiandong Shen, Hain-Ching Liu
  • Patent number: 6968008
    Abstract: Methods for motion estimation with adaptive motion accuracy of the present invention include several techniques for computing motion vectors of high pixel accuracy with only a minor increase in computation. One technique uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors. An alternate technique estimates highly accurate motion vectors using different interpolation filters at different stages in order to reduce computational complexity. Yet another technique uses rate-distortion criteria that adapt according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies. Still another technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.
    Type: Grant
    Filed: July 13, 2000
    Date of Patent: November 22, 2005
    Assignee: Sharp Laboratories of America, Inc.
    Inventors: Jordi Ribas-Corbera, Jiandong Shen
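
A hypothetical sketch of sub-pixel refinement with an accuracy-dependent rate-distortion cost, in the spirit of the abstract above: the search refines the best motion vector at half- and then quarter-pel accuracy, and the bit-cost term grows with the chosen accuracy. The cost weights, interpolation (plain sample repetition), and search pattern are assumptions, not the patented procedures:

```python
# Illustrative sketch only; not the patented implementation.
import numpy as np

def sad(a, b):
    return float(np.abs(a.astype(np.float32) - b.astype(np.float32)).sum())

def refine_subpel(block, interp_ref, int_mv, lam=10.0):
    """Refine an integer MV at half- and quarter-pel accuracy with an RD cost.

    interp_ref is the reference picture already interpolated to quarter-pel
    resolution (4x in each axis); the bit-cost term grows with the chosen
    accuracy, so the accuracy itself becomes part of the decision.
    """
    bh, bw = block.shape
    best = (int_mv[0] * 4, int_mv[1] * 4, 0)   # MV in quarter-pel units
    best_cost = None
    for accuracy, step, extra_bits in [(1, 4, 0), (2, 2, 2), (3, 1, 4)]:
        cy, cx = best[0], best[1]              # refine around the best so far
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                cand = interp_ref[y:y + bh * 4:4, x:x + bw * 4:4]
                cost = sad(block, cand) + lam * (abs(dy) + abs(dx) + extra_bits)
                if best_cost is None or cost < best_cost:
                    best_cost, best = cost, (y, x, accuracy)
    return best, best_cost

rng = np.random.default_rng(6)
ref = rng.integers(0, 256, size=(16, 16)).astype(np.float32)
# Quarter-pel reference via simple 4x repetition (a real codec would filter).
interp_ref = np.kron(ref, np.ones((4, 4), dtype=np.float32))
block = ref[4:8, 4:8].copy()
(best_y, best_x, acc), cost = refine_subpel(block, interp_ref, int_mv=(4, 4))
print(best_y, best_x, acc, cost)
```
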
  • Patent number: 6950464
    Abstract: The invention described herein improves and expedites compressed video data delivery by providing systems and methods for transcoding and pass-through on a sub-picture level. For example, in MPEG embodiments described herein, transcoding and compressed video data pass-through may occur on a macroblock or slice level. This permits portions of a picture that need no rate reduction to be passed through without transcoding. The invention may also implement pass-through on a picture-by-picture basis. Accordingly, the invention may determine transcoding or pass-through for each picture or picture subregion. Thus, systems and methods described herein provide flexible compressed video data transcoding and pass-through, as determined by the varying bit rate demands of compressed video data transmission.
    Type: Grant
    Filed: December 26, 2001
    Date of Patent: September 27, 2005
    Assignee: Cisco Technology, Inc.
    Inventors: Jiandong Shen, Shan Zhu, David Arnstein, Pierre Seigneurbieux
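
A toy, hypothetical sketch of the per-slice decision described above: slices are passed through while the picture's bit budget holds, and marked for transcoding once it would be exceeded (the budget rule and the 60% rate-reduction figure are illustrative assumptions):

```python
# Illustrative sketch only; not the patented implementation.
def plan_slices(slice_sizes_bits, target_bits):
    """Decide, slice by slice, whether to pass through or transcode.

    Slices are passed through untouched while the running total stays within
    the picture's bit budget; once the budget would be exceeded, remaining
    slices are marked for transcoding (rate reduction).
    """
    decisions, used = [], 0
    for size in slice_sizes_bits:
        if used + size <= target_bits:
            decisions.append("pass_through")
            used += size
        else:
            decisions.append("transcode")
            used += int(size * 0.6)   # assume transcoding reaches ~60% of original size
    return decisions, used

decisions, used = plan_slices([12_000, 9_500, 15_000, 8_000], target_bits=30_000)
print(decisions, used)
```
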
  • Patent number: RE44012
    Abstract: Methods for motion estimation with adaptive motion accuracy of the present invention include several techniques for computing motion vectors of high pixel accuracy with only a minor increase in computation. One technique uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors. An alternate technique estimates highly accurate motion vectors using different interpolation filters at different stages in order to reduce computational complexity. Yet another technique uses rate-distortion criteria that adapt according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies. Still another technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: February 19, 2013
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Jordi Ribas-Corbera, Jiandong Shen
  • Patent number: RE45014
    Abstract: Methods for motion estimation with adaptive motion accuracy of the present invention include several techniques for computing motion vectors of high pixel accuracy with only a minor increase in computation. One technique uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors. An alternate technique estimates highly accurate motion vectors using different interpolation filters at different stages in order to reduce computational complexity. Yet another technique uses rate-distortion criteria that adapt according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies. Still another technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.
    Type: Grant
    Filed: November 20, 2007
    Date of Patent: July 15, 2014
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Jordi Ribas-Corbera, Jiandong Shen
  • Patent number: RE46468
    Abstract: Methods for motion estimation with adaptive motion accuracy of the present invention include several techniques for computing motion vectors of high pixel accuracy with only a minor increase in computation. One technique uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors. An alternate technique estimates highly accurate motion vectors using different interpolation filters at different stages in order to reduce computational complexity. Yet another technique uses rate-distortion criteria that adapt according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies. Still another technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: July 4, 2017
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Jordi Ribas-Corbera, Jiandong Shen