Patents by Inventor Bohan LI
Bohan LI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11924467
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Grant
Filed: November 16, 2021
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
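The motion vector mapping described above (plane onto sphere, rotate, project back) can be sketched numerically for an equirectangular projection. This is a minimal illustration under that assumed projection; the function names and frame dimensions are hypothetical, not taken from the patent.

```python
import math

def plane_to_sphere(x, y, width, height):
    """Map an equirectangular pixel (x, y) to longitude/latitude on the unit sphere."""
    lon = (x / width) * 2.0 * math.pi - math.pi      # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (y / height) * math.pi     # latitude in [-pi/2, pi/2]
    return lon, lat

def sphere_to_plane(lon, lat, width, height):
    """Inverse mapping: longitude/latitude back to equirectangular pixel coordinates."""
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y

def map_motion_vector(x, y, d_lon, width, height):
    """Predict a pixel location by rotating it on the sphere (here a pure
    longitude rotation d_lon stands in for the rotation parameters) and
    projecting back; the mapped motion vector is the plane-coordinate delta."""
    lon, lat = plane_to_sphere(x, y, width, height)
    px, py = sphere_to_plane(lon + d_lon, lat, width, height)
    return px - x, py - y
```

For a pure longitude rotation the mapped vector is horizontal, but a general sphere rotation produces position-dependent vectors on the plane, which is why the mapping must be evaluated per pixel.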
-
Patent number: 11917128
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Grant
Filed: November 5, 2020
Date of Patent: February 27, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
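The motion-vector concatenation above amounts to following a trajectory across reference frames and rescaling it to the current frame's temporal position. A minimal sketch under the usual constant-velocity assumption (the helper names are hypothetical):

```python
def concatenate(mv_ab, mv_bc):
    """Chain a motion vector from frame A to B with one from B to C,
    yielding an A-to-C trajectory vector."""
    return (mv_ab[0] + mv_bc[0], mv_ab[1] + mv_bc[1])

def scale_mv(mv, src_dist, dst_dist):
    """Linearly rescale a motion vector measured over src_dist frame
    intervals so it spans dst_dist intervals (linear-motion assumption)."""
    dx, dy = mv
    s = dst_dist / src_dist
    return (dx * s, dy * s)

def project_to_current(mv_ab, mv_bc, traj_dist, cur_dist):
    """Project a concatenated trajectory onto the current frame to obtain
    one entry of the motion field estimate."""
    return scale_mv(concatenate(mv_ab, mv_bc), traj_dist, cur_dist)
```

In the patent's scheme, entries of the motion field that no trajectory lands on would then be interpolated from neighboring entries.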
-
Publication number: 20240038463
Abstract: The present disclosure describes a reinforced rubber dome, including a support portion, an elastic connection portion, and a pressing portion connected in sequence, the support portion being provided with an exhaust groove penetrating inside and outside, and the pressing portion having a columnar triggering portion at an axially inner side thereof. An outer side wall of the elastic connection portion is further provided with elastic reinforcing bars; an inner wall of each elastic reinforcing bar is fitted on the outer side wall of the elastic connection portion, and the two ends of each elastic reinforcing bar are respectively connected to the support portion and the pressing portion. The reinforced rubber dome provided by the disclosure increases the strength of the pressing and bending area by arranging any number of elastic reinforcing bars of any shape and size on the outer side wall of the elastic connection portion.
Type: Application
Filed: October 7, 2023
Publication date: February 1, 2024
Inventors: Wei Zou, Bohan Li, Peiyun Zhang
-
Patent number: 11876974
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 16, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Publication number: 20230308679
Abstract: Video coding using motion prediction coding with coframe motion vectors includes generating a reference coframe spatiotemporally concurrent with a current frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, generating an encoded frame by encoding the current frame using the reference coframe, including the encoded frame in an encoded bitstream, and outputting the encoded bitstream.
Type: Application
Filed: May 25, 2023
Publication date: September 28, 2023
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Patent number: 11665365
Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream, by generating a reference coframe spatiotemporally corresponding to the current frame, wherein the current frame is a frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, and encoding the current frame using the reference coframe. Video coding may further include placing the encoded frame in an output bitstream and outputting the output bitstream.
Type: Grant
Filed: September 14, 2018
Date of Patent: May 30, 2023
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Publication number: 20230156221
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Application
Filed: November 16, 2021
Publication date: May 18, 2023
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
-
Publication number: 20220366153
Abstract: Intelligent content is generated automatically using a system of computers that includes a user device and a cloud-based component for processing user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: May 12, 2021
Publication date: November 17, 2022
Inventors: Ji LI, Konstantin SELESKEROV, Huey-Ru TSAI, Muin Barkatali MOMIN, Ramya TRIDANDAPANI, Sindhu Vigasini JAMBUNATHAN, Amit SRIVASTAVA, Derek Martin JOHNSON, Gencheng WU, Sheng ZHAO, Xinfeng CHEN, Bohan LI
-
Publication number: 20220264109
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
Type: Application
Filed: May 6, 2022
Publication date: August 18, 2022
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 11350102
Abstract: Decoding a current block of a current frame includes selecting a first reference frame for forward inter prediction of the current frame; selecting a second reference frame for backward inter prediction of the current frame; generating an optical flow reference frame portion for inter prediction of the current block by performing an optical flow estimation using the first reference frame and the second reference frame, where the optical flow estimation produces a respective motion field for pixels of the current block; and performing a prediction process for the current block using the optical flow reference frame portion by: using a motion vector used to encode the current block to identify a reference block; adjusting boundaries of the reference block using a subpixel interpolation filter length; and identifying blocks encompassing pixels within the adjusted boundaries of the reference block.
Type: Grant
Filed: May 5, 2020
Date of Patent: May 31, 2022
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 11284107
Abstract: An optical flow reference frame is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A first (e.g., forward) reference frame and a second (e.g., backward) reference frame are used in an optical flow estimation that produces a respective motion field for pixels of the current frame. The motion fields are used to warp the reference frames to the current frame. The warped reference frames are blended to form the optical flow reference frame.
Type: Grant
Filed: August 22, 2017
Date of Patent: March 22, 2022
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
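The warp-and-blend step described above can be sketched on 1-D "frames" with per-pixel motion fields. Nearest-neighbor warping and an equal-weight blend are simplifying assumptions here, not the patent's exact interpolation or weighting.

```python
def warp_1d(frame, motion):
    """Warp a 1-D frame: output pixel i is fetched from frame[i + motion[i]],
    rounded to the nearest integer and clamped to the frame bounds."""
    n = len(frame)
    out = []
    for i, m in enumerate(motion):
        src = min(max(int(round(i + m)), 0), n - 1)
        out.append(frame[src])
    return out

def blend(a, b):
    """Equal-weight blend of two warped reference frames."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def optical_flow_reference(fwd, bwd, field_fwd, field_bwd):
    """Build an optical flow reference frame by warping the forward and
    backward references with their motion fields, then blending them."""
    return blend(warp_1d(fwd, field_fwd), warp_1d(bwd, field_bwd))
```

A production codec would use subpixel interpolation when warping and could weight the blend by each reference's temporal distance from the current frame.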
-
Publication number: 20210144364
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Application
Filed: November 5, 2020
Publication date: May 13, 2021
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Publication number: 20210117619
Abstract: The disclosure describes a cyberbullying detection method and system. The detection method includes: obtaining a to-be-detected data set, where the to-be-detected data set includes multiple sentence texts of multiple users; classifying the to-be-detected data set by using a classification model based on a bidirectional recurrent neural network, to obtain a probability that each sentence text belongs to cyberbullying; obtaining the sentence texts whose probability of belonging to cyberbullying is greater than a specified probability, to obtain a first sentence text set; obtaining an attention value of each sentence text in the first sentence text set and an attention value of each user; and detecting, according to the attention value of each sentence text in the first sentence text set and the attention value of each user, whether each sentence text belongs to cyberbullying. The disclosed approach achieves good text classification and identification performance, with high accuracy and a low loss rate.
Type: Application
Filed: October 16, 2020
Publication date: April 22, 2021
Inventors: Bohan Li, Anman Zhang, Shuo Wan, Wenhuan Wang, Xueliang Wang, Xue Li
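The two-stage filtering in the method above (keep sentences above a probability threshold, then decide using attention values) can be sketched with the recurrent model stubbed out; the thresholds and function name below are illustrative, not from the publication.

```python
def detect(sentences, probs, attn, threshold=0.5, attn_cut=0.2):
    """Two-stage cyberbullying filter: sentences whose classifier
    probability exceeds `threshold` form the first sentence text set,
    and those whose attention value also exceeds `attn_cut` are flagged.
    `probs` would come from a bidirectional-RNN classifier in practice."""
    first_set = [i for i, p in enumerate(probs) if p > threshold]
    return [sentences[i] for i in first_set if attn[i] > attn_cut]
```

The publication additionally aggregates per-user attention values, which this per-sentence sketch omits.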
-
Publication number: 20200327403
Abstract: An all-optical neural network that utilizes light beams and optical components to implement layers of the neural network is disclosed herein. The all-optical neural network includes an input layer, zero or more hidden layers, and an output layer. Each layer of the neural network is configured to simulate linear and nonlinear operations of a conventional artificial neural network neuron on an optical signal. In an embodiment, the optical linear operation is performed by a spatial light modulator and an optical lens. The optical lens performs a Fourier transformation on the set of light beams and sums light beams with similar propagation orientations. The optical nonlinear operation is implemented utilizing a nonlinear optical medium having an electromagnetically induced transparency characteristic whose transmission of a probe beam of light is controlled by the intermediate output of a coupling beam of light from the optical linear operation.
Type: Application
Filed: April 14, 2020
Publication date: October 15, 2020
Inventors: Shengwang DU, Junwei LIU, Ying ZUO, Bohan LI, Yujun ZHAO, Yue JIANG, Peng CHEN, You-Chiuan CHEN
-
Publication number: 20200267391
Abstract: Decoding a current block of a current frame includes selecting a first reference frame for forward inter prediction of the current frame; selecting a second reference frame for backward inter prediction of the current frame; generating an optical flow reference frame portion for inter prediction of the current block by performing an optical flow estimation using the first reference frame and the second reference frame, where the optical flow estimation produces a respective motion field for pixels of the current block; and performing a prediction process for the current block using the optical flow reference frame portion by: using a motion vector used to encode the current block to identify a reference block; adjusting boundaries of the reference block using a subpixel interpolation filter length; and identifying blocks encompassing pixels within the adjusted boundaries of the reference block.
Type: Application
Filed: May 5, 2020
Publication date: August 20, 2020
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 10743025
Abstract: The present invention provides a method for performing transformation using a layered Givens transform (LGT), comprising the steps of: deriving at least one rotation layer and at least one permutation layer on the basis of a given transform matrix (H) and a given error parameter; acquiring an LGT coefficient on the basis of the rotation layer and the permutation layer; and quantizing and entropy-encoding the LGT coefficient, wherein the permutation layer comprises a permutation matrix obtained by permuting a row of an identity matrix.
Type: Grant
Filed: September 1, 2017
Date of Patent: August 11, 2020
Assignee: LG ELECTRONICS INC.
Inventors: Bohan Li, Arash Vosoughi, Onur G. Guleryuz
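The rotation-and-permutation layering above can be illustrated on a small vector: a rotation layer applies Givens rotations to pairs of coordinates, and a permutation layer is a row-permuted identity matrix. The specific angles and permutation below are illustrative choices, not the layers the patent's method would derive from a target transform H.

```python
import math

def givens_rotate(vec, i, j, theta):
    """Apply a Givens rotation by angle theta to coordinates i and j of vec,
    leaving all other coordinates unchanged (norm-preserving)."""
    out = list(vec)
    c, s = math.cos(theta), math.sin(theta)
    out[i] = c * vec[i] - s * vec[j]
    out[j] = s * vec[i] + c * vec[j]
    return out

def permute(vec, perm):
    """Apply a permutation layer: output k takes input perm[k], i.e. the
    matrix obtained by permuting the rows of an identity matrix."""
    return [vec[p] for p in perm]

def layered_givens(vec, layers):
    """Apply alternating rotation/permutation layers; each layer is
    ('rot', i, j, theta) or ('perm', perm)."""
    for layer in layers:
        if layer[0] == 'rot':
            _, i, j, theta = layer
            vec = givens_rotate(vec, i, j, theta)
        else:
            vec = permute(vec, layer[1])
    return vec
```

Because every layer is orthogonal, the composite transform preserves the vector's norm, which makes the layered structure a cheap approximation of an orthogonal target transform H.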
-
Patent number: 10659788
Abstract: An optical flow reference frame portion (e.g., a block or an entire frame) is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A forward reference frame and a backward reference frame are used in an optical flow estimation that produces a respective motion field for pixels of a current frame. The motion fields are used to warp some or all pixels of the reference frames to the pixels of the current frame. The warped reference frame pixels are blended to form the optical flow reference frame portion. The inter prediction may be performed as part of encoding or decoding portions of the current frame.
Type: Grant
Filed: November 20, 2017
Date of Patent: May 19, 2020
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Publication number: 20200092576
Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream, by generating a reference coframe spatiotemporally corresponding to the current frame, wherein the current frame is a frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, and encoding the current frame using the reference coframe. Video coding may further include placing the encoded frame in an output bitstream and outputting the output bitstream.
Type: Application
Filed: September 14, 2018
Publication date: March 19, 2020
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Publication number: 20190349602
Abstract: The present invention provides a method for performing transformation using a layered Givens transform (LGT), comprising the steps of: deriving at least one rotation layer and at least one permutation layer on the basis of a given transform matrix (H) and a given error parameter; acquiring an LGT coefficient on the basis of the rotation layer and the permutation layer; and quantizing and entropy-encoding the LGT coefficient, wherein the permutation layer comprises a permutation matrix obtained by permuting a row of an identity matrix.
Type: Application
Filed: September 1, 2017
Publication date: November 14, 2019
Inventors: Bohan LI, Arash VOSOUGHI, Onur G. GULERYUZ
-
Publication number: 20190158843
Abstract: An optical flow reference frame portion (e.g., a block or an entire frame) is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A forward reference frame and a backward reference frame are used in an optical flow estimation that produces a respective motion field for pixels of a current frame. The motion fields are used to warp some or all pixels of the reference frames to the pixels of the current frame. The warped reference frame pixels are blended to form the optical flow reference frame portion. The inter prediction may be performed as part of encoding or decoding portions of the current frame.
Type: Application
Filed: November 20, 2017
Publication date: May 23, 2019
Inventors: Yaowu Xu, Bohan Li, Jingning Han