Patents by Inventor Xue Bai

Xue Bai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135973
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, the boundaries are retimed into an adjacent speech gap to a location where voice or audio activity is a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries selected from the candidate boundaries, and interprets commands that are traditionally thought of as text-based operations (e.g., cut, copy, paste) as an instruction to perform a corresponding video editing operation using the selected video segment.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Xue BAI, Justin Jonathan SALAMON, Aseem Omprakash AGARWALA, Hijung SHIN, Haoran CAI, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA, Cristin Ailidh Fraser
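    The retiming step described in this entry (moving a raw word or sentence boundary into an adjacent speech gap, at the point of lowest voice or audio activity) can be illustrated with a short sketch. This is not Adobe's implementation; the activity curve, frame rate, and function signature below are assumptions made for illustration.

    ```python
    import numpy as np

    def retime_boundary(raw_boundary_s, gap_start_s, gap_end_s, activity, frame_rate_hz=100):
        """Snap a raw transcript boundary (in seconds) to the quietest point of
        an adjacent speech gap.

        activity: 1-D array of voice/audio activity scores for the whole track,
                  sampled at frame_rate_hz (higher = more speech-like energy).
        """
        lo = int(gap_start_s * frame_rate_hz)
        hi = int(gap_end_s * frame_rate_hz)
        if hi <= lo:                                 # degenerate gap: keep the raw boundary
            return raw_boundary_s
        quietest_frame = lo + int(np.argmin(activity[lo:hi]))
        return quietest_frame / frame_rate_hz

    # Toy usage: a word ends at 3.12 s and is followed by a speech gap from 3.10 s to 3.60 s.
    activity = np.abs(np.sin(np.linspace(0, 20, 1000)))   # stand-in 10 s activity curve at 100 Hz
    print(f"candidate boundary at {retime_boundary(3.12, 3.10, 3.60, activity):.2f} s")
    ```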
  • Publication number: 20240127855
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for selection of the best image of a particular speaker's face in a video, and visualization in a diarized transcript. In an example embodiment, candidate images of a face of a detected speaker are extracted from frames of a video identified by a detected face track for the face, and a representative image of the detected speaker's face is selected from the candidate images based on image quality, facial emotion (e.g., using an emotion classifier that generates a happiness score), a size factor (e.g., favoring larger images), and/or penalizing images that appear towards the beginning or end of a face track. As such, each segment of the transcript is presented with the representative image of the speaker who spoke that segment and/or input is accepted changing the representative image associated with each speaker.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Xue BAI, Aseem Omprakash AGARWALA, Joel Richard BRANDT
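    The selection criteria listed in this entry lend themselves to a simple scoring sketch: combine image quality, an emotion (happiness) score, and a size factor, and penalize frames near the ends of the face track. The weights, field names, and the linear combination below are assumptions, not the patented formula.

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FaceCandidate:
        frame_index: int      # position of the frame within the face track
        quality: float        # 0..1 image quality (e.g., sharpness/exposure)
        happiness: float      # 0..1 score from an emotion classifier
        area: float           # face bounding-box area in pixels

    def pick_representative(candidates: List[FaceCandidate], track_len: int,
                            w_quality=0.5, w_happy=0.3, w_size=0.2, edge_penalty=0.5):
        """Return the candidate image with the highest combined score."""
        max_area = max(c.area for c in candidates) or 1.0

        def score(c: FaceCandidate) -> float:
            s = (w_quality * c.quality
                 + w_happy * c.happiness
                 + w_size * (c.area / max_area))        # favor larger faces
            # Penalize images near the start or end of the track, which are
            # often blurry or caught mid-transition.
            if c.frame_index < 0.1 * track_len or c.frame_index > 0.9 * track_len:
                s -= edge_penalty
            return s

        return max(candidates, key=score)
    ```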
  • Publication number: 20240127820
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for music-aware speaker diarization. In an example embodiment, one or more audio classifiers detect speech and music independently of each other, which facilitates detecting regions in an audio track that contain music but do not contain speech. These music-only regions are compared to the transcript, and any transcription and speakers that overlap in time with the music-only regions are removed from the transcript. In some embodiments, rather than having the transcript display the text from this detected music, a visual representation of the audio waveform is included in the corresponding regions of the transcript.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Justin Jonathan SALAMON, Fabian David CABA HEILBRON, Xue BAI, Aseem Omprakash AGARWALA, Hijung SHIN, Lubomira Assenova DONTCHEVA
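    The interval logic this entry implies can be sketched under simple assumptions: each detector emits (start, end) regions in seconds, and transcript segments are dropped when they overlap a region that contains music but no detected speech. The data layout is illustrative, not Adobe's.

    ```python
    def overlaps(a, b):
        """True if the (start, end) intervals a and b overlap in time."""
        return a[0] < b[1] and b[0] < a[1]

    def music_only_regions(music_regions, speech_regions):
        """Music regions that do not overlap any detected speech."""
        return [m for m in music_regions
                if not any(overlaps(m, s) for s in speech_regions)]

    def strip_music_from_transcript(transcript, music_regions, speech_regions):
        """Drop transcript segments that fall inside music-only regions.

        transcript: list of dicts like {"start": s, "end": e, "speaker": ..., "text": ...}
        """
        music_only = music_only_regions(music_regions, speech_regions)
        return [seg for seg in transcript
                if not any(overlaps((seg["start"], seg["end"]), m) for m in music_only)]
    ```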
  • Publication number: 20240127857
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for face-aware speaker diarization. In an example embodiment, an audio-only speaker diarization technique is applied to generate an audio-only speaker diarization of a video, an audio-visual speaker diarization technique is applied to generate a face-aware speaker diarization of the video, and the audio-only speaker diarization is refined using the face-aware speaker diarization to generate a hybrid speaker diarization that links detected faces to detected voices. In some embodiments, to accommodate videos with small faces that appear pixelated, a cropped image of any given face is extracted from each frame of the video, and the size of the cropped image is used to select a corresponding active speaker detection model to predict an active speaker score for the face in the cropped image.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Fabian David CABA HEILBRON, Xue BAI, Aseem Omprakash AGARWALA, Haoran CAI, Lubomira Assenova DONTCHEVA
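    The crop-size-based model selection mentioned in this entry reduces to a small branch. The threshold, model names, and scoring interface below are illustrative assumptions; the patent does not publish these details.

    ```python
    SMALL_FACE_MAX_PIXELS = 64   # assumed threshold: crops this small are treated as pixelated

    def pick_asd_model(crop_width, crop_height, small_face_model, regular_model):
        """Choose an active-speaker-detection model based on the face crop size."""
        if min(crop_width, crop_height) <= SMALL_FACE_MAX_PIXELS:
            return small_face_model
        return regular_model

    def active_speaker_score(crop, audio_window, small_face_model, regular_model):
        """Score how likely the face in `crop` is the active speaker.

        crop is assumed to be an image array shaped (height, width, channels);
        the models are assumed to expose a predict(crop, audio_window) method.
        """
        model = pick_asd_model(crop.shape[1], crop.shape[0],
                               small_face_model, regular_model)
        return model.predict(crop, audio_window)
    ```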
  • Publication number: 20240096863
    Abstract: Embodiments of the present disclosure disclose a method for manufacturing a light-emitting substrate and a light-emitting substrate. The method for manufacturing a light-emitting substrate includes: forming a plurality of light emitting diode (LED) chips on a substrate, wherein a spacer region is disposed between adjacent LED chips; forming a black photoresist layer on the substrate to cover the plurality of LED chips and the plurality of spacer regions; performing first exposure on the black photoresist layer to reduce the black photoresist layer on the LED chips; and performing second exposure on the black photoresist layer to cure the black photoresist layer.
    Type: Application
    Filed: October 27, 2021
    Publication date: March 21, 2024
    Applicant: Shenzhen China Star Optoelectronics Semiconductor Display Technology Co., Ltd.
    Inventor: Xue BAI
  • Patent number: 11922695
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: March 5, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
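    As a rough sketch of the pipeline in this entry, detected speech and scene boundaries can be merged into clip atoms and then grouped into coarser levels. The merge tolerance and the greedy pairwise clustering below are assumptions for illustration, not the patented algorithm.

    ```python
    def clip_atoms(speech_bounds, scene_bounds, duration, tol=0.25):
        """Merge detected boundaries (seconds) into clip-atom boundaries.

        Boundary candidates closer than `tol` seconds are collapsed into one.
        """
        points = sorted(set([0.0, duration] + speech_bounds + scene_bounds))
        merged = [points[0]]
        for p in points[1:]:
            if p - merged[-1] >= tol:
                merged.append(p)
        merged[-1] = duration
        return list(zip(merged[:-1], merged[1:]))     # (start, end) clip atoms

    def coarser_level(segments, group_size=2):
        """Build one coarser level by merging consecutive segments."""
        return [(segments[i][0], segments[min(i + group_size, len(segments)) - 1][1])
                for i in range(0, len(segments), group_size)]

    atoms = clip_atoms([1.2, 5.0, 9.8], [5.1, 14.0], duration=20.0)
    hierarchy = [atoms, coarser_level(atoms), coarser_level(coarser_level(atoms))]
    for depth, level in enumerate(hierarchy):
        print(f"level {depth}: {level}")    # each level spans the video with disjoint segments
    ```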
  • Patent number: 11899917
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: February 13, 2024
    Assignee: ADOBE INC.
    Inventors: Seth Walker, Joy O Kim, Aseem Agarwala, Joel Richard Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
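    The snapping behavior this entry describes can be illustrated with a minimal sketch: a drag range on the timeline is expanded to the boundaries of whichever segments it touches at the active hierarchy level. The data layout (a list of (start, end) tuples per level) is an assumption.

    ```python
    def snap_selection(drag_start, drag_end, level_segments):
        """Snap a raw drag range (seconds) to the boundaries of the segments it touches.

        level_segments: disjoint (start, end) tuples covering the video at one level.
        """
        touched = [(s, e) for s, e in level_segments
                   if s < drag_end and drag_start < e]      # overlap test
        if not touched:
            return None
        return touched[0][0], touched[-1][1]

    coarse = [(0, 4), (4, 9), (9, 15), (15, 20)]                           # coarser level
    atoms = [(0, 2), (2, 4), (4, 6), (6, 9), (9, 12), (12, 15), (15, 20)]  # clip atoms
    print(snap_selection(3.2, 10.1, coarse))   # (0, 15): snapped to coarse boundaries
    print(snap_selection(3.2, 10.1, atoms))    # (2, 12): same drag at the finer level
    ```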
  • Patent number: 11894081
    Abstract: A method for programming a target memory cell of a memory array of a non-volatile memory system, the method comprises determining a total number of erase/programming (EP) cycles that were applied previously to the memory cell and, (1) if the determined total number of cycles does not exceed a threshold value, applying an asymmetric programming scheme, and, (2) if the determined total number of cycles exceeds the threshold value, applying a symmetric programming scheme. Further, a magnitude of a boosting voltage bias (VPASS) that is to be applied to an unselected word line may be determined according to the determined total number of erase/programming (EP) cycles.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: February 6, 2024
    Assignee: SANDISK TECHNOLOGIES LLC
    Inventors: Yu-Chung Lien, Xue Bai Pitner, Ken Oowada
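    The decision logic in this entry's abstract reduces to a simple branch plus a lookup; the sketch below uses invented threshold and VPASS values, since the patent does not disclose concrete numbers.

    ```python
    EP_CYCLE_THRESHOLD = 10_000            # illustrative value, not from the patent

    def choose_programming_scheme(ep_cycles):
        """Lightly cycled cells get the asymmetric scheme; heavily cycled cells the symmetric one."""
        return "asymmetric" if ep_cycles <= EP_CYCLE_THRESHOLD else "symmetric"

    def choose_vpass(ep_cycles):
        """Scale the boosting voltage bias (VPASS) for unselected word lines with wear.

        Piecewise table of (max_cycles, vpass_volts); the values are placeholders.
        """
        table = [(1_000, 8.0), (10_000, 8.5), (50_000, 9.0)]
        for max_cycles, vpass in table:
            if ep_cycles <= max_cycles:
                return vpass
        return 9.5

    for cycles in (500, 20_000, 80_000):
        print(cycles, choose_programming_scheme(cycles), choose_vpass(cycles))
    ```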
  • Patent number: 11893794
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
  • Patent number: 11887371
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
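    The shortest-path formulation in this entry can be sketched as a small dynamic program: each candidate thumbnail location is a node, edges connect earlier candidates to later ones, and edges tighter than the minimum separation are penalized. The cost terms and penalty weight below are assumptions.

    ```python
    def pick_thumbnail_locations(candidates, min_sep, target_sep, close_penalty=1e3):
        """Choose thumbnail locations along a timeline via a shortest-path dynamic program.

        candidates: sorted candidate times (seconds); the first and last are always kept.
        min_sep:    separation below which consecutive thumbnails would overlap on screen.
        target_sep: preferred spacing; deviations cost (gap - target_sep) squared.
        """
        n = len(candidates)
        cost = [float("inf")] * n          # best path cost ending at candidate i
        back = [None] * n
        cost[0] = 0.0
        for i in range(1, n):
            for j in range(i):
                gap = candidates[i] - candidates[j]
                edge = (gap - target_sep) ** 2
                if gap < min_sep:
                    edge += close_penalty  # soft constraint: too close to draw both thumbnails
                if cost[j] + edge < cost[i]:
                    cost[i] = cost[j] + edge
                    back[i] = j
        path, i = [], n - 1                # walk back to recover the chosen locations
        while i is not None:
            path.append(candidates[i])
            i = back[i]
        return path[::-1]

    print(pick_thumbnail_locations([0, 2, 3, 7, 8, 12, 15], min_sep=3, target_sep=4))
    # -> [0, 3, 7, 12, 15]: roughly evenly spaced, never closer than the minimum
    ```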
  • Patent number: 11887629
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
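    The tile composition this entry describes maps naturally onto a small data structure; the field names below are assumptions chosen to mirror the abstract, not Adobe's schema.

    ```python
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class FacetedTimeline:
        category: str                        # e.g. "visual scenes", "audio classes", "artifacts"
        ranges: List[Tuple[float, float]]    # (start, end) seconds where the feature is present

    @dataclass
    class InteractiveTile:
        segment: Tuple[float, float]         # (start, end) of the represented video segment
        thumbnail_time: float                # time of the frame shown as the thumbnail
        transcript_snippet: str              # transcript from the beginning of the segment
        face_ids: List[str]                  # detected faces appearing in the segment
        facets: List[FacetedTimeline] = field(default_factory=list)

        def on_click(self, player):
            """Example interaction: navigate the player to this tile's segment."""
            player.seek(self.segment[0])     # assumed player interface
    ```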
  • Patent number: 11880408
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Joy Oakyung Kim, Morgan Nicole Evans, Najika Skyler Halsema Yoo, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
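    The search flow in this entry can be sketched under assumptions about the data layout: each metadata segment carries its own time range, and a match is projected onto whichever video segments of the active hierarchy level it overlaps.

    ```python
    def search_metadata(metadata_segments, query):
        """Return metadata segments whose text mentions the query (case-insensitive).

        metadata_segments: dicts like {"start": s, "end": e, "kind": "keyword", "text": ...}
        """
        q = query.lower()
        return [m for m in metadata_segments if q in m["text"].lower()]

    def matching_video_segments(matches, level_segments):
        """Project metadata matches onto the video segments of one hierarchy level."""
        return [(start, end) for start, end in level_segments
                if any(m["start"] < end and start < m["end"] for m in matches)]

    metadata = [
        {"start": 3.0, "end": 5.0, "kind": "keyword", "text": "whiteboard demo"},
        {"start": 11.0, "end": 13.0, "kind": "action tag", "text": "drawing on whiteboard"},
    ]
    coarse = [(0, 10), (10, 20)]
    fine = [(0, 4), (4, 10), (10, 14), (14, 20)]
    matches = search_metadata(metadata, "whiteboard")
    print(matching_video_segments(matches, coarse))   # [(0, 10), (10, 20)]
    print(matching_video_segments(matches, fine))     # [(0, 4), (4, 10), (10, 14)]
    ```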
  • Patent number: 11875568
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
  • Publication number: 20230417952
    Abstract: A display panel and a display device are provided. The display panel includes a panel main body and an anti-reflective layer disposed on a light-emitting surface of the panel main body. The anti-reflective layer includes an anti-reflective functional layer and a haze adhesive layer disposed between the anti-reflective functional layer and the panel main body, and the haze adhesive layer is an organic adhesive layer doped with first scattering particles.
    Type: Application
    Filed: June 24, 2022
    Publication date: December 28, 2023
    Inventors: Xue BAI, Miao ZHOU
  • Publication number: 20230390743
    Abstract: A catalyst for producing dibasic amine by hydrogenation of dibasic nitrile contains the following components or a reaction product thereof: a) an active component, wherein the active component comprises Ni and/or an oxide thereof; b) an auxiliary, wherein the auxiliary comprises one or more of Mg, Cu, Co, Zn, Zr, Mo and/or oxides thereof; c) a support, wherein the relative content of ?-NiO in the catalyst is less than 2.0 a.u. A process for producing dibasic amine by hydrogenation of dibasic nitrile is also provided.
    Type: Application
    Filed: October 27, 2021
    Publication date: December 7, 2023
    Inventors: Yunbao TU, Hongyuan ZONG, Zhongneng LIU, Xiaoqing XU, Xue BAI, Xu LIU, Wei FU, Yanhong WANG
  • Publication number: 20230395253
    Abstract: A cloud-edge collaborative processing system includes an edge computing system, an ICU diagnosis and treatment device, a service terminal device, and a cloud platform. The edge computing system collects multi-source heterogeneous medical data output from the ICU diagnosis and treatment devices, preprocesses it, stores the preprocessed data in an edge database, and connects to the cloud platform for data transmission and business interaction. The cloud platform connects to a plurality of edge computing systems to compute and process the resulting large volumes of data. Medical-staff handheld terminals and ward data terminals issue service instructions to the cloud platform to obtain the required third-party business services.
    Type: Application
    Filed: August 15, 2023
    Publication date: December 7, 2023
    Applicant: Shanghai SVM Medical Technology Co., Ltd.
    Inventors: Yun LONG, Xiaobo HUANG, Longxiang SU, Chun PAN, Yingchuan LI, Jicheng ZHANG, Yundai CHEN, Weiming LIU, You SHANG, Hongli HE, Qixing WANG, Zhenguo ZENG, Xiantao LI, Yunping LAN, Long XU, Baoshi HAN, Xue BAI, Xianlong LIU, Bin ZHU, Zujun TANG, Haoyu YANG, Jinjing ZHANG
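    A very rough sketch of the edge-side data path this entry outlines (collect device readings, preprocess, store locally, forward to the cloud). Every name, format, and transport below is an illustrative assumption; the abstract does not specify them.

    ```python
    import json
    import sqlite3
    import statistics

    def preprocess(readings):
        """Toy preprocessing: drop missing values and summarize each signal."""
        summary = {}
        for name, values in readings.items():
            clean = [v for v in values if v is not None]
            if clean:
                summary[name] = {"mean": statistics.mean(clean),
                                 "min": min(clean), "max": max(clean)}
        return summary

    def store_at_edge(db_path, device_id, summary):
        """Persist the preprocessed record in a local (edge) database."""
        with sqlite3.connect(db_path) as db:
            db.execute("CREATE TABLE IF NOT EXISTS records(device_id TEXT, payload TEXT)")
            db.execute("INSERT INTO records VALUES (?, ?)", (device_id, json.dumps(summary)))

    def forward_to_cloud(send, device_id, summary):
        """Hand the record to a transport callable (HTTP, MQTT, ...) injected by the caller."""
        send({"device_id": device_id, "summary": summary})

    readings = {"heart_rate": [72, 75, None, 78], "spo2": [97, 96, 98]}
    summary = preprocess(readings)
    store_at_edge("edge.db", "icu-device-01", summary)
    forward_to_cloud(print, "icu-device-01", summary)   # `print` stands in for a real transport
    ```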
  • Patent number: 11822602
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation by performing a metadata search. Generally, various types of metadata can be extracted from a video, such as a transcript of audio, keywords from the transcript, content or action tags visually extracted from video frames, and log event tags extracted from an associated temporal log. The extracted metadata is segmented into metadata segments and associated with corresponding video segments defined by a hierarchical video segmentation. As such, a metadata search can be performed to identify matching metadata segments and corresponding matching video segments defined by a particular level of the hierarchical segmentation. Matching metadata segments are emphasized in a composite list of the extracted metadata, and matching video segments are emphasized on the video timeline. Navigating to a different level of the hierarchy transforms the search results into corresponding coarser or finer segments defined by the level.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Joy Oakyung Kim, Morgan Nicole Evans, Najika Skyler Halsema Yoo, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
  • Patent number: 11780981
    Abstract: The present disclosure relates to foam beads of an elastomeric composition comprising a propylene-based elastomer. The foam bead has a density of less than 0.5 g/cm3. The foam beads can be made from pellets of elastomeric compositions by expanding with a supercritical fluid blowing agent. The foamed bead has reduced lightness while maintaining elasticity.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: October 10, 2023
    Assignee: ExxonMobil Chemical Patents Inc.
    Inventors: Liang Li, Yadong He, Chunling Xin, Xue Bai
  • Publication number: 20230307071
    Abstract: The memory device includes a plurality of memory cells, which include a first set of memory cells and a second set of memory cells. A controller is in communication with the memory cells. The controller is configured to, in a first programming pass and then a second programming pass, program the memory cells of the first and second sets to respective final threshold voltages associated with a plurality of programmed data states. The controller is further configured to, in the first programming pass, verify the first set of memory cells at a first set of checkpoint data states and verify the second set of memory cells at a second set of checkpoint data states that is different than the first set of checkpoint data states.
    Type: Application
    Filed: March 22, 2022
    Publication date: September 28, 2023
    Applicant: SanDisk Technologies LLC
    Inventors: Xue Bai Pitner, Yu-Chung Lien, Ravi Kumar, Jiahui Yuan, Bo Lei, Zhenni Wan
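    A simplified sketch of the verify-checkpoint idea in this entry: both sets of cells are programmed toward the same final data states, but during the first pass each set is verified only at its own checkpoint states. The state labels, the split between the sets, and the second-pass behavior shown here are invented for illustration.

    ```python
    PROGRAMMED_STATES = ["S1", "S2", "S3", "S4", "S5", "S6", "S7"]

    # Assumed split: each cell set gets a different, smaller set of checkpoint
    # data states to verify during the first programming pass.
    CHECKPOINTS = {
        "set_1": {"S2", "S4", "S6"},
        "set_2": {"S3", "S5", "S7"},
    }

    def states_to_verify(cell_set, programming_pass):
        """First pass: only this set's checkpoints. Second pass: all states (an assumption)."""
        if programming_pass == 2:
            return list(PROGRAMMED_STATES)
        return [s for s in PROGRAMMED_STATES if s in CHECKPOINTS[cell_set]]

    for cell_set in ("set_1", "set_2"):
        print(cell_set, "first-pass verify:", states_to_verify(cell_set, 1))
    ```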
  • Publication number: 20230282295
    Abstract: A method for programming a target memory cell of a memory array of a non-volatile memory system, the method comprises determining a total number of erase/programming (EP) cycles that were applied previously to the memory cell and, (1) if the determined total number of cycles does not exceed a threshold value, applying an asymmetric programming scheme, and, (2) if the determined total number of cycles exceeds the threshold value, applying a symmetric programming scheme. Further, a magnitude of a boosting voltage bias (VPASS) that is to be applied to an unselected word line may be determined according to the determined total number of erase/programming (EP) cycles.
    Type: Application
    Filed: March 2, 2022
    Publication date: September 7, 2023
    Applicant: SanDisk Technologies LLC
    Inventors: Yu-Chung Lien, Xue Bai Pitner, Ken Oowada