Patents by Inventor Bohan LI
Bohan LI has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250150574
Abstract: A motion vector for a current block of a current frame is decoded. The motion vector for the current block refers to a first reference block in a first reference frame. A first prediction block of two or more prediction blocks is identified in the first reference frame and using the first reference block. A first grid-aligned block is identified based on the first reference block. A second reference block is identified using a motion vector of the first grid-aligned block in a second reference frame. A second prediction block of the two or more prediction blocks is identified in the second reference frame and using the second reference block. The two or more prediction blocks are combined to obtain a prediction block for the current block.
Type: Application
Filed: March 7, 2022
Publication date: May 8, 2025
Inventors: Bohan Li, Yaowu Xu, Jingning Han
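The final combination step of this abstract can be sketched minimally. The abstract only says the prediction blocks "are combined"; the equal-weight rounded average and the helper name `combine_predictions` below are assumptions for illustration, not the patented method:

```python
def combine_predictions(pred1, pred2):
    """Combine two prediction blocks (2-D lists of pixel values) into a
    single prediction block by a rounded average. Equal weighting is an
    assumed combination rule; the abstract does not fix the weights."""
    return [[(a + b + 1) // 2 for a, b in zip(row1, row2)]
            for row1, row2 in zip(pred1, pred2)]

# Two 2x2 prediction blocks, one from each reference frame:
p1 = [[100, 102], [104, 106]]
p2 = [[110, 112], [108, 110]]
print(combine_predictions(p1, p2))  # [[105, 107], [106, 108]]
```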
-
Publication number: 20250142050
Abstract: Filtering an interpolated reference frame is described. The interpolated reference frame is generated by determining, from a motion field, a motion vector pointing towards a forward reference frame and a motion vector pointing towards a backward reference frame. Prediction blocks that are expanded relative to the size of the blocks of the interpolated reference frame are determined using the motion vectors and reference frames. The expanded prediction blocks form overlapping areas with adjacent blocks of the interpolated reference frame. The overlapping areas are filtered to mitigate discontinuities.
Type: Application
Filed: October 25, 2024
Publication date: May 1, 2025
Inventors: Jingning Han, Bohan Li, Yaowu Xu, In Suk Chong
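The filtering of an overlapping area can be sketched as a simple cross-fade along one row of the overlap. The linear weight ramp is an assumed filter shape; the abstract requires only that the overlap be filtered to mitigate discontinuities, and `blend_overlap` is a hypothetical helper name:

```python
def blend_overlap(own_pixels, neighbor_pixels):
    """Cross-fade one row of an overlapping area between a block's own
    prediction and the expanded prediction of an adjacent block, using
    a linear weight ramp (an assumption) across the overlap width."""
    n = len(own_pixels)
    blended = []
    for i, (a, b) in enumerate(zip(own_pixels, neighbor_pixels)):
        w = (i + 1) / (n + 1)  # weight grows toward the neighboring block
        blended.append(round((1 - w) * a + w * b))
    return blended

# A 3-pixel-wide overlap between two prediction rows:
print(blend_overlap([100, 100, 100], [60, 60, 60]))  # [90, 80, 70]
```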
-
Patent number: 12244818
Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
Type: Grant
Filed: December 18, 2023
Date of Patent: March 4, 2025
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
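The extended-block sizing described here can be sketched with simple bounds arithmetic. Using half the filter length as the per-side margin is an assumption (the abstract says only "a number of pixels related to a filter length"), and `extended_block_bounds` is a hypothetical helper name:

```python
def extended_block_bounds(x, y, width, height, filter_length):
    """Bounds of the extended reference block: the block is grown at each
    boundary by a margin tied to the interpolation filter length, so
    sub-pixel interpolation never reads outside the generated pixels.
    The margin of filter_length // 2 per side is an assumption."""
    margin = filter_length // 2
    return (x - margin, y - margin,
            width + 2 * margin, height + 2 * margin)

# A 16x16 reference block at (32, 32) with an 8-tap interpolation filter:
print(extended_block_bounds(32, 32, 16, 16, 8))  # (28, 28, 24, 24)
```

Only this extended region is projected from the forward and backward reference frames, which is what avoids generating the whole un-generated reference frame.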
-
Publication number: 20250071319
Abstract: Techniques are described for motion vector resolution based motion vector prediction for video coding. A motion vector precision level for coding a current block is determined, a motion vector reference list is generated using the motion vector precision level, an index into the motion vector reference list is determined, where the index identifies a motion vector candidate from the motion vector reference list, and a motion vector for inter prediction of the current block is coded using the motion vector candidate. The motion vector precision level can indicate a single resolution for generating the motion vector reference list or a first resolution for generating the motion vector reference list and a second resolution for coding motion vector residuals of the motion vector.
Type: Application
Filed: August 21, 2024
Publication date: February 27, 2025
Inventors: Yunqing Wang, Jingning Han, Bohan Li, Yaowu Xu
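Generating a reference list at a chosen precision level can be sketched as rounding each candidate and dropping duplicates. The 1/8-pel unit convention, the nearest-multiple rounding rule, and both function names are assumptions for illustration:

```python
def round_mv(mv, precision_bits):
    """Round a motion vector stored in 1/8-pel units to a coarser
    precision: 0 keeps 1/8-pel, 1 gives 1/4-pel, 3 gives full-pel.
    The unit convention and rounding rule are assumptions."""
    step = 1 << precision_bits
    half = step >> 1
    return tuple((c + half) // step * step for c in mv)

def build_mv_reference_list(candidates, precision_bits):
    """Generate the motion vector reference list at the chosen precision
    level, dropping candidates that become duplicates after rounding."""
    seen, ref_list = set(), []
    for mv in candidates:
        r = round_mv(mv, precision_bits)
        if r not in seen:
            seen.add(r)
            ref_list.append(r)
    return ref_list

# Full-pel coding collapses nearby candidates onto coarser entries:
print(build_mv_reference_list([(13, -5), (12, -4), (0, 0)], 3))
# [(16, -8), (16, 0), (0, 0)]
```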
-
Patent number: 12229662
Abstract: An all-optical neural network that utilizes light beams and optical components to implement layers of the neural network is disclosed herein. The all-optical neural network includes an input layer, zero or more hidden layers, and an output layer. Each layer of the neural network is configured to simulate linear and nonlinear operations of a conventional artificial neural network neuron on an optical signal. In an embodiment, the optical linear operation is performed by a spatial light modulator and an optical lens. The optical lens performs a Fourier transformation on the set of light beams and sums light beams with similar propagation orientations. The optical nonlinear operation is implemented utilizing a nonlinear optical medium having an electromagnetically induced transparency characteristic whose transmission of a probe beam of light is controlled by the intermediate output of a coupling beam of light from the optical linear operation.
Type: Grant
Filed: April 14, 2020
Date of Patent: February 18, 2025
Assignee: THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Shengwang Du, Junwei Liu, Ying Zuo, Bohan Li, Yujun Zhao, Yue Jiang, Peng Chen, You-Chiuan Chen
-
Patent number: 12206842
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Grant
Filed: January 26, 2024
Date of Patent: January 21, 2025
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
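The core of a motion field estimate is scaling a motion vector observed between two reference frames onto the current frame's display time. The constant-velocity (linear) motion model and the function name below are assumptions; the abstract describes concatenation and interpolation at a higher level:

```python
def project_motion_vector(mv, ref_time, target_time, current_time):
    """Scale a motion vector observed between two reference frames onto
    the current frame, assuming the object moves at constant velocity
    along its trajectory (a linear motion model is an assumption).

    mv points from the frame at ref_time to the frame at target_time."""
    span = ref_time - target_time
    if span == 0:
        return None  # degenerate: both frames share a display time
    scale = (ref_time - current_time) / span
    return (mv[0] * scale, mv[1] * scale)

# A vector spanning 4 time units, projected onto a frame 2 units away:
print(project_motion_vector((8, -4), 4, 0, 2))  # (4.0, -2.0)
```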
-
Publication number: 20240422309
Abstract: Methods, systems, and apparatuses are disclosed, including a computer-readable medium storing instructions used to encode or decode a video or a bitstream encodable or decodable using the disclosed steps. The steps include reconstructing a first reference frame and a second reference frame for a current frame to be encoded or decoded, projecting motion vectors of the first reference frame and the second reference frame onto pixels of a current reference frame resulting in a first pixel in the current reference frame being associated with a plurality of projected motion vectors, and selecting a first projected motion vector from the plurality of projected motion vectors as a selected motion vector associated with the first pixel to be used for determining a pixel value of the first pixel, the selection being based on the magnitudes of the respective projected motion vectors.
Type: Application
Filed: August 30, 2024
Publication date: December 19, 2024
Inventors: Lin Zheng, Yaowu Xu, Lester Lu, Jingning Han, Bohan Li
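The selection step can be sketched in one line. The abstract says only that the choice is based on the vectors' magnitudes; picking the smallest magnitude is one plausible reading, not necessarily the claimed rule:

```python
def select_projected_mv(projected_mvs):
    """Pick one of several motion vectors projected onto the same pixel.
    Choosing the smallest squared magnitude is an assumed tie-breaking
    rule; the abstract specifies only a magnitude-based selection."""
    return min(projected_mvs, key=lambda mv: mv[0] ** 2 + mv[1] ** 2)

# Three projections land on the same pixel; the shortest one wins:
print(select_projected_mv([(4, 3), (1, 2), (6, 0)]))  # (1, 2)
```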
-
Publication number: 20240380924
Abstract: Decoding a current block of a current frame includes decoding, from a compressed bitstream, one or more syntax elements indicating that a geometric transformation is to be applied; applying the geometric transformation to at least a portion of the current frame to obtain a transformed portion; and obtaining a prediction of the current block based on the transformed portion and an intra-prediction mode.
Type: Application
Filed: April 15, 2024
Publication date: November 14, 2024
Inventors: Bohan Li, Debargha Mukherjee, Yaowu Xu, Jingning Han
-
Publication number: 20240320451
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Application
Filed: June 6, 2024
Publication date: September 26, 2024
Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
-
Patent number: 12032922
Abstract: Automatic generation of intelligent content is created using a system of computers including a user device and a cloud-based component that processes the user information. The system performs a process that includes receiving an input document and parsing the input document to generate inputs for a natural language generation model using a text analysis model. The natural language generation model generates one or more candidate presentation scripts based on the inputs. A presentation script is selected from the candidate presentation scripts and displayed. A text-to-speech model may be used to generate a synthesized audio presentation of the presentation script. A final presentation may be generated that includes a visual display of the input document and the corresponding audio presentation in sync with the visual display.
Type: Grant
Filed: May 12, 2021
Date of Patent: July 9, 2024
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ji Li, Konstantin Seleskerov, Huey-Ru Tsai, Muin Barkatali Momin, Ramya Tridandapani, Sindhu Vigasini Jambunathan, Amit Srivastava, Derek Martin Johnson, Gencheng Wu, Sheng Zhao, Xinfeng Chen, Bohan Li
-
Publication number: 20240214607
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Application
Filed: March 4, 2024
Publication date: June 27, 2024
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
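The plane-to-sphere-and-back motion vector mapping can be sketched under simplifying assumptions: an equirectangular frame layout and a rotation reduced to yaw only. Both simplifications and all function names are illustrative; the patent covers spherical projections and rotation parameters generally:

```python
import math

def plane_to_sphere(u, v, width, height):
    """Map an equirectangular pixel to (longitude, latitude). The
    equirectangular layout is an assumption for this sketch."""
    lon = (u / width) * 2 * math.pi - math.pi
    lat = math.pi / 2 - (v / height) * math.pi
    return lon, lat

def sphere_to_plane(lon, lat, width, height):
    """Inverse mapping from (longitude, latitude) back to the frame."""
    u = (lon + math.pi) / (2 * math.pi) * width
    v = (math.pi / 2 - lat) / math.pi * height
    return u, v

def map_motion_vector(u, v, width, height, yaw):
    """Predict a pixel's displacement by rotating its spherical location
    (only a yaw rotation here, for brevity) and mapping the rotated
    point back onto the plane; the difference is the mapped vector."""
    lon, lat = plane_to_sphere(u, v, width, height)
    u2, v2 = sphere_to_plane(lon + yaw, lat, width, height)
    return u2 - u, v2 - v

# A 10-degree yaw on a 360x180 frame shifts pixels 10 columns right:
dx, dy = map_motion_vector(100, 90, 360, 180, math.radians(10))
print(round(dx, 6), round(dy, 6))  # 10.0 0.0
```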
-
Publication number: 20240195979
Abstract: A motion vector for a current block of a current frame is decoded from a compressed bitstream. A location of a reference block within an un-generated reference frame is identified. The reference block is generated using a forward reference frame and a backward reference frame without generating the un-generated reference frame. The reference block is generated by identifying an extended reference block by extending the reference block at each boundary of the reference block by a number of pixels related to a filter length of a filter used in sub-pixel interpolation; and generating pixel values of only the extended reference block by performing a projection using the forward reference frame and the backward reference frame without generating the whole of the un-generated reference frame. The current block is then decoded based on the reference block and the motion vector.
Type: Application
Filed: December 18, 2023
Publication date: June 13, 2024
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Publication number: 20240171733
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Application
Filed: January 26, 2024
Publication date: May 23, 2024
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Patent number: 11924467
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Grant
Filed: November 16, 2021
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao
-
Patent number: 11917128
Abstract: A motion field estimate determined using motion vector information of two or more reference frames of a current/encoded frame is used to derive a motion vector for inter-prediction of the current/encoded frame. Motion trajectory information, including concatenated motion vectors and locations of the current/encoded frame at which those concatenated motion vectors point, is determined by concatenating motion vectors of the reference frames. A motion field estimate is determined using the motion trajectory information and, in some cases, by interpolating unavailable motion vectors using neighbors. The motion field estimate is used to determine a co-located reference frame for the current/encoded frame, and an inter-prediction process is performed for the current/encoded frame using a motion vector derived using the co-located reference frame.
Type: Grant
Filed: November 5, 2020
Date of Patent: February 27, 2024
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Publication number: 20240038463
Abstract: The present disclosure discloses a reinforced rubber dome, including a support portion, an elastic connection portion, and a pressing portion connected in sequence, the support portion being provided with an exhaust groove penetrating inside and outside, and the pressing portion having a columnar triggering portion at an axially inner side thereof. An outer side wall of the elastic connection portion is further provided with elastic reinforcing bars; an inner wall of each elastic reinforcing bar is fitted on the outer side wall of the elastic connection portion, and two ends of each elastic reinforcing bar are respectively connected to the support portion and the pressing portion. The reinforced rubber dome provided by the disclosure increases the strength of a pressing and bending area by arranging any number of elastic reinforcing bars, of any shape and any size, on the outer side wall of the elastic connection portion.
Type: Application
Filed: October 7, 2023
Publication date: February 1, 2024
Inventors: Wei Zou, Bohan Li, Peiyun Zhang
-
Patent number: 11876974
Abstract: Motion prediction using optical flow is determined to be available for a current frame in response to determining that a reference frame buffer includes, with respect to the current frame, a forward reference frame and a backward reference frame. A flag indicating whether a current block is encoded using optical flow is decoded. Responsive to determining that the flag indicates that the current block is encoded using optical flow, a motion vector is decoded for the current block; a location of an optical flow reference block is identified within an optical flow reference frame based on the motion vector; subsequent to identifying the location of the optical flow reference block, the optical flow reference block is generated using the forward reference frame and the backward reference frame without generating the optical flow reference frame; and the current block is decoded based on the optical flow reference block.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 16, 2024
Assignee: GOOGLE LLC
Inventors: Yaowu Xu, Bohan Li, Jingning Han
-
Publication number: 20230308679
Abstract: Video coding using motion prediction coding with coframe motion vectors includes generating a reference coframe spatiotemporally concurrent with a current frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, generating an encoded frame by encoding the current frame using the reference coframe, including the encoded frame in an encoded bitstream, and outputting the encoded bitstream.
Type: Application
Filed: May 25, 2023
Publication date: September 28, 2023
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Patent number: 11665365
Abstract: Video coding may include generating, by a processor executing instructions stored on a non-transitory computer-readable medium, an encoded frame by encoding a current frame from an input bitstream, by generating a reference coframe spatiotemporally corresponding to the current frame, wherein the current frame is a frame from a sequence of input frames, wherein each frame from the sequence of input frames has a respective sequential location in the sequence of input frames, and wherein the current frame has a current sequential location in the sequence of input frames, and encoding the current frame using the reference coframe. Video coding may further include placing the encoded frame in an output bitstream and outputting the output bitstream.
Type: Grant
Filed: September 14, 2018
Date of Patent: May 30, 2023
Assignee: GOOGLE LLC
Inventors: Bohan Li, Yaowu Xu, Jingning Han
-
Publication number: 20230156221
Abstract: Mapping-aware coding tools for 360 degree videos adapt conventional video coding tools for 360 degree video data using parameters related to a spherical projection of the 360 degree video data. The mapping-aware coding tools perform motion vector mapping techniques, adaptive motion search pattern techniques, adaptive interpolation filter selection techniques, and adaptive block partitioning techniques. Motion vector mapping includes calculating a motion vector for a pixel of a current block by mapping the location of the pixel within a two-dimensional plane (e.g., video frame) onto a sphere and mapping a predicted location of the pixel on the sphere determined based on rotation parameters back onto the plane. Adaptive motion searching, adaptive interpolation filter selection, and adaptive block partitioning operate according to density distortion based on locations along the sphere.
Type: Application
Filed: November 16, 2021
Publication date: May 18, 2023
Inventors: Bohan Li, Ching-Han Chiang, Jingning Han, Yao Yao