Patents by Inventor Bohan LI

Bohan LI has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11924467
    Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: March 5, 2024
    Assignee: GOOGLE LLC
    Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
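The motion vector mapping described in the abstract above can be sketched for an equirectangular projection. Note that the abstract does not fix a particular spherical projection, and the function names and the additive rotation model below are illustrative assumptions:

```python
import math

def plane_to_sphere(x, y, width, height):
    """Map a pixel (x, y) on an equirectangular frame to spherical
    coordinates (longitude, latitude) in radians."""
    lon = (x / width) * 2.0 * math.pi - math.pi      # [-pi, pi)
    lat = math.pi / 2.0 - (y / height) * math.pi     # [pi/2, -pi/2]
    return lon, lat

def sphere_to_plane(lon, lat, width, height):
    """Inverse mapping: spherical coordinates back to frame coordinates."""
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y

def mapped_motion_vector(x, y, d_lon, d_lat, width, height):
    """Derive a per-pixel motion vector by rotating the pixel's position
    on the sphere (d_lon, d_lat stand in for the rotation parameters)
    and projecting the predicted position back onto the plane."""
    lon, lat = plane_to_sphere(x, y, width, height)
    px, py = sphere_to_plane(lon + d_lon, lat + d_lat, width, height)
    return px - x, py - y
```

With zero rotation the round trip returns the pixel to itself, which is a quick sanity check on the plane-sphere-plane chain.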
  • Patent number: 11917128
    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: February 27, 2024
    Assignee: GOOGLE LLC
    Inventors: Bohan Li, Yaowu Xu, Jingning Han
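A minimal sketch of the motion-trajectory step in the abstract above, assuming a constant-velocity linear projection between frame indices and a simple neighbor-interpolation fallback (both function names and the 1-D motion field are hypothetical simplifications):

```python
def concatenate_motion_vectors(mv_r1_to_r2, dist_r1_r2, dist_cur_r1):
    """Linearly project a motion vector between two reference frames
    onto the current frame, assuming constant-velocity motion.
    `dist_*` are frame-index distances; the result estimates the motion
    from the current frame toward reference frame r1."""
    scale = dist_cur_r1 / dist_r1_r2
    return (mv_r1_to_r2[0] * scale, mv_r1_to_r2[1] * scale)

def fill_motion_field(field):
    """Interpolate unavailable (None) motion vectors from available
    neighbors, as the abstract's fallback suggests (1-D for brevity)."""
    out = list(field)
    for i, mv in enumerate(out):
        if mv is None:
            left = next((out[j] for j in range(i - 1, -1, -1) if out[j]), None)
            right = next((out[j] for j in range(i + 1, len(out)) if out[j]), None)
            if left and right:
                out[i] = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
            else:
                out[i] = left or right
    return out
```

The filled motion field would then be used to derive the co-located reference frame for inter-prediction.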
  • Publication number: 20240038463
    Abstract: The present disclosure discloses a reinforced rubber dome, including a support portion, an elastic connection portion, and a pressing portion connected in sequence, the support portion being provided with an exhaust groove penetrating inside and outside, and the pressing portion having a columnar triggering portion at an axially inner side thereof. An outer side wall of the elastic connection portion is further provided with elastic reinforcing bars; an inner wall of each elastic reinforcing bar is fitted on the outer side wall of the elastic connection portion, and two ends of each elastic reinforcing bar are respectively connected to the support portion and the pressing portion. The reinforced rubber dome provided by the disclosure increases the strength of a pressing and bending area by arranging elastic reinforcing bars of any number, shape, and size on the outer side wall of the elastic connection portion.
    Type: Application
    Filed: October 7, 2023
    Publication date: February 1, 2024
    Inventors: Wei Zou, Bohan Li, Peiyun Zhang
  • Patent number: 11876974
    Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: January 16, 2024
    Assignee: GOOGLE LLC
    Inventors: Yaowu Xu, Bohan Li, Jingning Han
  • Publication number: 20230308679
    Abstract: Video coding using motion prediction coding with coframe motion vectors includes generating a reference coframe spatiotemporally concurrent with a current frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, generating an encoded frame by encoding the current frame using the reference coframe, including the encoded frame in an encoded bitstream, and outputting the encoded bitstream.
    Type: Application
    Filed: May 25, 2023
    Publication date: September 28, 2023
    Inventors: Bohan Li, Yaowu Xu, Jingning Han
  • Patent number: 11665365
    Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream, by generating a reference coframe spatiotemporally corresponding to the current frame, wherein the current frame is a frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, and encoding the current frame using the reference coframe. Video coding may include including the encoded frame in an output bitstream and outputting the output bitstream.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: May 30, 2023
    Assignee: GOOGLE LLC
    Inventors: Bohan Li, Yaowu Xu, Jingning Han
  • Publication number: 20230156221
    Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
    Type: Application
    Filed: November 16, 2021
    Publication date: May 18, 2023
    Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
  • Publication number: 20220366153
    Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
    Type: Application
    Filed: May 12, 2021
    Publication date: November 17, 2022
    Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
  • Publication number: 20220264109
    Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
    Type: Application
    Filed: May 6, 2022
    Publication date: August 18, 2022
    Inventors: Yaowu Xu, Bohan Li, Jingning Han
  • Patent number: 11350102
    Abstract: Decoding a current block of a current frame includes selecting a first reference frame for forward inter prediction of the current frame; selecting a second reference frame for backward inter prediction of the current frame; generating an optical flow reference frame portion for inter prediction of the current block by performing an optical flow estimation using the first reference frame and the second reference frame, where the optical flow estimation produces a respective motion field for pixels of the current block; and performing a prediction process for the current block using the optical flow reference frame portion by: using a motion vector used to encode the current block to identify a reference block; adjusting boundaries of the reference block using a subpixel interpolation filter length; and identifying blocks encompassing pixels within the adjusted boundaries of the reference block.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: May 31, 2022
    Assignee: GOOGLE LLC
    Inventors: Yaowu Xu, Bohan Li, Jingning Han
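The boundary-adjustment step in the abstract above can be illustrated as follows. The exact offsets depend on the codec's interpolation filter, so this is a sketch assuming an 8-tap filter; the function name and offset convention are illustrative:

```python
def adjusted_reference_bounds(mv_x, mv_y, x, y, w, h, filter_len=8):
    """Expand a reference block's bounds by the subpixel interpolation
    filter's reach, so every pixel the filter taps is fetched.
    (x, y) is the block's top-left corner, (w, h) its size, and
    (mv_x, mv_y) the motion vector pointing into the reference frame."""
    half = filter_len // 2
    x0 = x + int(mv_x) - (half - 1)   # extra columns left of the block
    y0 = y + int(mv_y) - (half - 1)   # extra rows above the block
    x1 = x + int(mv_x) + w + half     # extra columns to the right
    y1 = y + int(mv_y) + h + half     # extra rows below
    return x0, y0, x1, y1
```

The decoder then identifies which stored blocks encompass pixels inside these adjusted boundaries, as the abstract describes.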
  • Patent number: 11284107
    Abstract: An optical flow reference frame is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A first (e.g., forward) reference frame and a second (e.g., backward) reference frame are used in an optical flow estimation that produces a respective motion field for pixels of the current frame. The motion fields are used to warp the reference frames to the current frame. The warped reference frames are blended to form the optical flow reference frame.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: March 22, 2022
    Assignee: GOOGLE LLC
    Inventors: Yaowu Xu, Bohan Li, Jingning Han
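The warp-and-blend construction of the optical flow reference frame can be sketched on a 1-D row of pixels. Nearest-neighbor warping stands in for the subpixel interpolation a real codec would use, and the names are illustrative:

```python
def warp_row(ref, flow):
    """Warp a 1-D row of reference pixels toward the current frame using
    a per-pixel motion field (nearest-neighbor, clamped at the edges)."""
    n = len(ref)
    return [ref[min(max(int(round(i + flow[i])), 0), n - 1)] for i in range(n)]

def optical_flow_reference(fwd, bwd, flow_fwd, flow_bwd, w=0.5):
    """Blend the two warped reference rows into the optical flow
    reference used for inter prediction."""
    a = warp_row(fwd, flow_fwd)
    b = warp_row(bwd, flow_bwd)
    return [w * x + (1 - w) * y for x, y in zip(a, b)]
```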
  • Publication number: 20210144364
    Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
    Type: Application
    Filed: November 5, 2020
    Publication date: May 13, 2021
    Inventors: Bohan Li, Yaowu Xu, Jingning Han
  • Publication number: 20210117619
    Abstract: The disclosure discloses a cyberbullying detection method and system. The detection method includes: obtaining a to-be-detected data set, where the to-be-detected data set includes multiple sentence texts of multiple users; classifying the to-be-detected data set by using a classification model based on a bidirectional recurrent neural network, to obtain a probability that each sentence text belongs to cyberbullying; obtaining a sentence text whose probability of belonging to cyberbullying is greater than a specified probability, to obtain a first sentence text set; obtaining an attention value of each sentence text in the first sentence text set and an attention value of each user; detecting, according to the attention value of each sentence text in the first sentence text set and the attention value of each user, whether each sentence text belongs to cyberbullying. The disclosure can achieve a good text classification and identification effect, high accuracy, and a low loss rate.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 22, 2021
    Inventors: Bohan Li, Anman Zhang, Shuo Wan, Wenhuan Wang, Xueliang Wang, Xue Li
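The post-classifier steps of the detection pipeline above (thresholding sentence probabilities, then aggregating them with attention weights) can be sketched as follows, treating the bidirectional-RNN classifier's output probabilities as given. The softmax normalization and the weighted aggregate are assumptions, since the abstract does not specify how attention values are combined:

```python
import math

def flag_sentences(probs, threshold=0.5):
    """Indices of sentences whose cyberbullying probability exceeds the
    specified probability (the abstract's 'first sentence text set')."""
    return [i for i, p in enumerate(probs) if p > threshold]

def softmax(scores):
    """Normalize raw attention scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def user_bullying_score(sentence_probs, attention_scores):
    """Attention-weighted aggregate over a user's flagged sentences;
    a user whose score crosses a limit would be flagged."""
    weights = softmax(attention_scores)
    return sum(w * p for w, p in zip(weights, sentence_probs))
```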
  • Publication number: 20200327403
    Abstract: An all-optical neural network that utilizes light beams and optical components to implement layers of the neural network is disclosed herein. The all-optical neural network includes an input layer, zero or more hidden layers, and an output layer. Each layer of the neural network is configured to simulate linear and nonlinear operations of a conventional artificial neural network neuron on an optical signal. In an embodiment, the optical linear operation is performed by a spatial light modulator and an optical lens. The optical lens performs a Fourier transformation on the set of light beams and sums light beams with similar propagation orientations. The optical nonlinear operation is implemented utilizing a nonlinear optical medium having an electromagnetically induced transparency characteristic whose transmission of a probe beam of light is controlled by the intermediate output of a coupling beam of light from the optical linear operation.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 15, 2020
    Inventors: Shengwang DU, Junwei LIU, Ying ZUO, Bohan LI, Yujun ZHAO, Yue JIANG, Peng CHEN, You-Chiuan CHEN
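A numerical sketch of one all-optical "neuron" from the abstract above: elementwise SLM modulation, a lens-style Fourier transform whose DC component coherently sums the beams, and a saturable transmission standing in for the EIT nonlinearity. The saturation form and all names are assumptions for illustration, not the disclosed physics:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform, standing in for the lens."""
    n = len(signal)
    return [sum(signal[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(n)) for f in range(n)]

def optical_neuron(inputs, slm_weights):
    """One neuron: the SLM applies per-beam weights (elementwise
    modulation), the lens Fourier-transforms and sums beams with
    similar propagation orientation (here: the DC component), and a
    saturable transmission supplies the nonlinear activation."""
    modulated = [x * w for x, w in zip(inputs, slm_weights)]
    linear_out = abs(dft(modulated)[0])       # DC term = coherent sum
    return linear_out / (1.0 + linear_out)    # assumed saturable nonlinearity
```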
  • Publication number: 20200267391
    Abstract: Decoding a current block of a current frame includes selecting a first reference frame for forward inter prediction of the current frame; selecting a second reference frame for backward inter prediction of the current frame; generating an optical flow reference frame portion for inter prediction of the current block by performing an optical flow estimation using the first reference frame and the second reference frame, where the optical flow estimation produces a respective motion field for pixels of the current block; and performing a prediction process for the current block using the optical flow reference frame portion by: using a motion vector used to encode the current block to identify a reference block; adjusting boundaries of the reference block using a subpixel interpolation filter length; and identifying blocks encompassing pixels within the adjusted boundaries of the reference block.
    Type: Application
    Filed: May 5, 2020
    Publication date: August 20, 2020
    Inventors: Yaowu Xu, Bohan Li, Jingning Han
  • Patent number: 10743025
    Abstract: The present invention provides a method for performing transformation using a layered Givens transform (LGT), comprising the steps of: deriving at least one rotation layer and at least one permutation layer on the basis of a given transform matrix (H) and a given error parameter; acquiring an LGT coefficient on the basis of the rotation layer and the permutation layer; and quantizing and entropy-encoding the LGT coefficient, wherein the permutation layer comprises a permutation matrix obtained by permuting a row of an identity matrix.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: August 11, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Bohan Li, Arash Vosoughi, Onur G. Guleryuz
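A small sketch of the layered structure the abstract describes: rotation layers apply Givens rotations on disjoint index pairs, and permutation layers apply a row-permuted identity. The layer encoding and function names here are illustrative, and deriving the layers from a target matrix H is omitted:

```python
import math

def apply_givens_layer(vec, rotations):
    """Apply one rotation layer: Givens rotations on disjoint index
    pairs. `rotations` is a list of (i, j, theta) triples."""
    out = list(vec)
    for i, j, theta in rotations:
        c, s = math.cos(theta), math.sin(theta)
        out[i], out[j] = c * out[i] - s * out[j], s * out[i] + c * out[j]
    return out

def apply_permutation_layer(vec, perm):
    """Apply a permutation layer (a row-permuted identity matrix):
    out[k] = vec[perm[k]]."""
    return [vec[p] for p in perm]

def layered_givens_transform(vec, layers):
    """Alternate rotation and permutation layers; the resulting LGT
    coefficients would then be quantized and entropy-encoded."""
    for kind, layer in layers:
        if kind == "rot":
            vec = apply_givens_layer(vec, layer)
        else:
            vec = apply_permutation_layer(vec, layer)
    return vec
```

Both layer types are orthogonal, so the transform preserves the signal's energy, which is what makes the layered factorization a drop-in approximation of an orthogonal target transform H.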
  • Patent number: 10659788
    Abstract: An optical flow reference frame portion (e.g., a block or an entire frame) is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A forward reference frame and a backward reference frame are used in an optical flow estimation that produces a respective motion field for pixels of a current frame. The motion fields are used to warp some or all pixels of the reference frames to the pixels of the current frame. The warped reference frame pixels are blended to form the optical flow reference frame portion. The inter prediction may be performed as part of encoding or decoding portions of the current frame.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: May 19, 2020
    Assignee: GOOGLE LLC
    Inventors: Yaowu Xu, Bohan Li, Jingning Han
  • Publication number: 20200092576
    Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream, by generating a reference coframe spatiotemporally corresponding to the current frame, wherein the current frame is a frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, and encoding the current frame using the reference coframe. Video coding may include including the encoded frame in an output bitstream and outputting the output bitstream.
    Type: Application
    Filed: September 14, 2018
    Publication date: March 19, 2020
    Inventors: Bohan Li, Yaowu Xu, Jingning Han
  • Publication number: 20190349602
    Abstract: The present invention provides a method for performing transformation using a layered Givens transform (LGT), comprising the steps of: deriving at least one rotation layer and at least one permutation layer on the basis of a given transform matrix (H) and a given error parameter; acquiring an LGT coefficient on the basis of the rotation layer and the permutation layer; and quantizing and entropy-encoding the LGT coefficient, wherein the permutation layer comprises a permutation matrix obtained by permuting a row of an identity matrix.
    Type: Application
    Filed: September 1, 2017
    Publication date: November 14, 2019
    Inventors: Bohan LI, Arash VOSOUGHI, Onur G. GULERYUZ
  • Publication number: 20190158843
    Abstract: An optical flow reference frame portion (e.g., a block or an entire frame) is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A forward reference frame and a backward reference frame are used in an optical flow estimation that produces a respective motion field for pixels of a current frame. The motion fields are used to warp some or all pixels of the reference frames to the pixels of the current frame. The warped reference frame pixels are blended to form the optical flow reference frame portion. The inter prediction may be performed as part of encoding or decoding portions of the current frame.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 23, 2019
    Inventors: Yaowu Xu, Bohan Li, Jingning Han