Patents by Inventor Xiaolei Huang

Xiaolei Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134503
    Abstract: A control method and apparatus for displaying multimedia content, an electronic device, and a medium. The method includes: displaying a first-type multimedia content on a first content display layer of a first-type multimedia content display interface, the first-type multimedia content display interface including: a first user interaction layer and the first content display layer, the first user interaction layer being superimposed and displayed on the first content display layer; and receiving a first swiping operation inputted by a user on the first user interaction layer, and exiting the first-type multimedia content display interface. That is, a swiping operation triggers exiting the first-type multimedia content display interface. Because a swiping operation is significantly different from a click operation, the user is provided with a multimedia content display control method that better matches the user's operation habits, improving the user experience.
    Type: Application
    Filed: February 25, 2022
    Publication date: April 25, 2024
    Inventors: Keke HUANG, Xue YAO, Xiaolei SHI, Mengqi WU, Weiqin LIAN, Junhao ZHANG, Zhiquan ZHANG, Bo ZHOU, Zhiyong LUO, Ji LI
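
The entry above describes a layered display interface in which a swipe on the interaction layer, rather than a click, exits the content view. The minimal Python sketch below illustrates that control flow only; the class, method names, and swipe threshold are hypothetical and are not taken from the patent.

```python
# Minimal sketch (hypothetical names) of the layered control flow described above: an
# interaction layer sits on top of the content display layer, and a swipe, unlike a
# tap, exits the first-type multimedia content display interface.

class MultimediaDisplayInterface:
    """First-type multimedia content display interface with two stacked layers."""

    SWIPE_THRESHOLD_PX = 80  # assumed minimum travel distance to count as a swipe

    def __init__(self, content):
        self.content = content          # shown on the content display layer
        self.is_open = True             # whether the interface is currently displayed

    def on_gesture(self, start_xy, end_xy):
        """Handle a gesture received by the user interaction layer."""
        dx = end_xy[0] - start_xy[0]
        dy = end_xy[1] - start_xy[1]
        if (dx * dx + dy * dy) ** 0.5 >= self.SWIPE_THRESHOLD_PX:
            self.exit_interface()       # swipe: leave the display interface
        else:
            self.handle_click()         # click/tap: normal in-interface interaction

    def exit_interface(self):
        self.is_open = False
        print("Exited first-type multimedia content display interface")

    def handle_click(self):
        print(f"Click handled inside interface showing: {self.content!r}")


ui = MultimediaDisplayInterface("short video #1")
ui.on_gesture((200, 600), (200, 580))   # small movement -> treated as a click
ui.on_gesture((200, 600), (200, 200))   # long vertical swipe -> exits the interface
```
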
  • Publication number: 20230363679
    Abstract: A system includes a mobile device for capturing raw video of a subject, a preprocessing system communicatively coupled to the mobile device for splitting the raw video into an image stream and an audio stream, an image processing system communicatively coupled to the preprocessing system for processing the image stream into a spatiotemporal facial frame sequence proposal, an audio processing system for processing the audio stream into a preprocessed audio component, one or more machine learning devices that analyze the facial frame sequence proposal and the preprocessed audio component according to a trained model to determine whether the subject is exhibiting signs of a neurological condition, and a user device for receiving data corresponding to a confirmed indication of neurological condition from the one or more machine learning devices and providing the confirmed indication of neurological condition to the subject and/or a clinician via a user interface.
    Type: Application
    Filed: September 17, 2021
    Publication date: November 16, 2023
    Applicants: THE PENN STATE RESEARCH FOUNDATION, THE METHODIST HOSPITAL
    Inventors: James Z. Wang, Mingli Yu, Tongan Cai, Xiaolei Huang, Kelvin Wong, John Volpi, Stephen T.C. Wong
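
The system above splits raw video into an image stream and an audio stream, proposes facial frame sequences, preprocesses the audio, and feeds both to a trained multimodal model. The schematic Python sketch below mirrors that data flow with placeholder functions; all names, the fixed score, and the 0.5 threshold are hypothetical, and no real detection model is included.

```python
# Schematic sketch (not the authors' implementation) of the pipeline outlined above:
# raw video is split into image and audio streams, each stream is preprocessed, and a
# trained model fuses the two to flag possible signs of a neurological condition.

from dataclasses import dataclass
from typing import List

@dataclass
class Assessment:
    probability: float           # model score for the screened condition
    flagged: bool                # True if the score crosses the decision threshold

def split_streams(raw_video: bytes):
    """Stand-in for the preprocessing system: separate image and audio streams."""
    image_stream = [raw_video]   # placeholder: would be decoded frames
    audio_stream = raw_video     # placeholder: would be the extracted waveform
    return image_stream, audio_stream

def propose_facial_frames(image_stream) -> List:
    """Stand-in for face detection/tracking producing a spatiotemporal frame proposal."""
    return image_stream

def preprocess_audio(audio_stream):
    """Stand-in for audio cleanup (e.g., trimming, normalization)."""
    return audio_stream

def run_trained_model(facial_frames, audio_features) -> Assessment:
    """Stand-in for the multimodal model; ignores its inputs and returns a fixed score."""
    score = 0.12
    return Assessment(probability=score, flagged=score > 0.5)

image_stream, audio_stream = split_streams(b"raw-video-bytes")
result = run_trained_model(propose_facial_frames(image_stream),
                           preprocess_audio(audio_stream))
print(result)
```
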
  • Patent number: 11016997
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating query results based on domain-specific dynamic word embeddings. For example, the disclosed systems can generate dynamic vector representations of words that include domain-specific embedded information. In addition, the disclosed systems can compare the dynamic vector representations with vector representations of query terms received as part of a search query. The disclosed systems can further identify one or more digital content items to provide as part of a query result that include words corresponding to the query terms based on the comparison of the vector representations. In some embodiments, the disclosed systems can also train a word embedding model to generate accurate vector representations of unique words.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: May 25, 2021
    Assignee: ADOBE INC.
    Inventors: Xiaolei Huang, Franck Dernoncourt, Walter Chang
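
The patent above retrieves digital content items by comparing vector representations of query terms with word vectors carrying domain-specific information. The toy Python sketch below shows the comparison-and-ranking step using cosine similarity; the hand-made embeddings and document lists are illustrative stand-ins, not the disclosed dynamic embedding model.

```python
# Illustrative sketch of the retrieval step: query terms and document words are mapped
# to vectors, and documents whose word vectors are closest to the query vectors are
# returned first. The toy embeddings below stand in for trained domain-specific ones.

import numpy as np

embeddings = {
    "layer":   np.array([0.9, 0.1, 0.0]),
    "mask":    np.array([0.8, 0.2, 0.1]),
    "brush":   np.array([0.1, 0.9, 0.0]),
    "invoice": np.array([0.0, 0.1, 0.9]),
}

documents = {
    "doc_compositing": ["layer", "mask"],
    "doc_painting":    ["brush"],
    "doc_billing":     ["invoice"],
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score(query_terms, doc_words):
    """Best cosine similarity between any query-term vector and any document-word vector."""
    return max(cosine(embeddings[q], embeddings[w])
               for q in query_terms for w in doc_words)

query = ["mask"]
ranked = sorted(documents, key=lambda d: score(query, documents[d]), reverse=True)
print(ranked)   # documents containing words closest to the query terms come first
```
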
  • Patent number: 10019656
    Abstract: A computer diagnostic system and related method are disclosed for automatically classifying tissue types in an original tissue image captured by an imaging device based on texture analysis. In one embodiment, the system receives and divides the tissue image into multiple smaller tissue block images. A combination of local binary pattern (LBP), average LBP (ALBP), and block-based LBP (BLBP) feature extractions are performed on each tissue block. The extractions generate a set of LBP, ALBP, and BLBP features for each block which are used to classify its tissue type. The classification results are visually displayed in a digitally enhanced map of the original tissue image. In one embodiment, a tissue type of interest is displayed in the original tissue image. In another or the same embodiment, the map displays each of the different tissue types present in the original tissue image.
    Type: Grant
    Filed: April 13, 2016
    Date of Patent: July 10, 2018
    Inventors: Xiaolei Huang, Sunhua Wan, Chao Zhou
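
The entry above classifies tissue blocks from texture features. The sketch below follows the same block-wise structure but substitutes scikit-image's standard local binary pattern for the patented LBP/ALBP/BLBP combination, and uses a nearest-centroid rule with random centroids purely for illustration.

```python
# Block-wise texture sketch: split an image into blocks, extract an LBP histogram per
# block, and assign each block to the nearest tissue-class centroid. Standard LBP is a
# stand-in for the patent's LBP/ALBP/BLBP features; the centroids are synthetic.

import numpy as np
from skimage.feature import local_binary_pattern

def block_features(image, block_size=32, n_points=8, radius=1):
    """Yield (row, col, LBP histogram) for each block of the grayscale image."""
    lbp = local_binary_pattern(image, n_points, radius, method="uniform")
    n_bins = n_points + 2
    h, w = image.shape
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            block = lbp[r:r + block_size, c:c + block_size]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            yield r, c, hist

def classify(hist, centroids):
    """Assign the block to the tissue class whose feature centroid is nearest."""
    return min(centroids, key=lambda name: np.linalg.norm(hist - centroids[name]))

rng = np.random.default_rng(0)
image = (rng.random((128, 128)) * 255).astype(np.uint8)   # stand-in for a tissue image
centroids = {"stroma": rng.random(10), "adipose": rng.random(10)}
label_map = {(r, c): classify(h, centroids) for r, c, h in block_features(image)}
print(list(label_map.items())[:3])
```
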
  • Patent number: 9519964
    Abstract: A system and methods for generating 3D images from 2D bioluminescent images and visualizing tumor locations are provided. A plurality of 2D bioluminescent images of a subject are acquired using any suitable bioluminescent imaging system. The 2D images are registered to align each image and to compensate for differences between adjacent images. After registration, corresponding features are identified between consecutive sets of 2D images. For each corresponding feature identified in each set of 2D images, an orthographic projection model is applied, such that rays are projected through each point in the feature. The intersection points of the rays are plotted in a 3D image space. All of the 2D images are processed in the same manner, such that a resulting 3D image of a tumor is generated.
    Type: Grant
    Filed: July 10, 2012
    Date of Patent: December 13, 2016
    Assignee: Rutgers, The State University of New Jersey
    Inventors: Dimitris Metaxas, Debabrata Banerjee, Xiaolei Huang
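
The reconstruction idea above back-projects a feature seen in multiple registered 2D views as orthographic rays and locates the feature where the rays intersect. The Python sketch below demonstrates that principle on a synthetic point using a voxel accumulator; the grid size, sampling, and rotation-about-z setup are assumptions for illustration, not the patented procedure.

```python
# Illustrative sketch: a feature point seen in each registered 2D view is back-projected
# as an orthographic ray, rays from all views are rasterized into a voxel grid, and the
# voxel where many rays intersect marks the 3D feature (e.g., tumor) location.

import numpy as np

GRID = 64                               # voxel grid is GRID^3, centered at the origin
true_point = np.array([5.0, -3.0, 2.0]) # hidden 3D feature we pretend to image

accumulator = np.zeros((GRID, GRID, GRID), dtype=int)
ts = np.linspace(-GRID / 2, GRID / 2, 4 * GRID)   # sample positions along each ray

for angle in np.linspace(0, 2 * np.pi, 36, endpoint=False):
    # Orthographic viewing direction for this rotation angle about the z axis.
    direction = np.array([np.cos(angle), np.sin(angle), 0.0])
    # The detected 2D feature corresponds to the ray through the true point.
    ray = true_point[None, :] + ts[:, None] * direction[None, :]
    idx = np.round(ray + GRID / 2).astype(int)
    keep = np.all((idx >= 0) & (idx < GRID), axis=1)
    accumulator[idx[keep, 0], idx[keep, 1], idx[keep, 2]] += 1

estimate = np.unravel_index(np.argmax(accumulator), accumulator.shape)
print("recovered voxel:", np.array(estimate) - GRID / 2)   # close to true_point
```
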
  • Publication number: 20160232425
    Abstract: A computer diagnostic system and related method are disclosed for automatically classifying tissue types in an original tissue image captured by an imaging device based on texture analysis. In one embodiment, the system receives and divides the tissue image into multiple smaller tissue block images. A combination of local binary pattern (LBP), average LBP (ALBP), and block-based LBP (BLBP) feature extractions are performed on each tissue block. The extractions generate a set of LBP, ALBP, and BLBP features for each block which are used to classify its tissue type. The classification results are visually displayed in a digitally enhanced map of the original tissue image. In one embodiment, a tissue type of interest is displayed in the original tissue image. In another or the same embodiment, the map displays each of the different tissue types present in the original tissue image.
    Type: Application
    Filed: April 13, 2016
    Publication date: August 11, 2016
    Inventors: Xiaolei Huang, Sunhua Wan, Chao Zhou
  • Publication number: 20130070992
    Abstract: A system and methods for generating 3D images from 2D bioluminescent images and visualizing tumor locations are provided. A plurality of 2D bioluminescent images of a subject are acquired using any suitable bioluminescent imaging system. The 2D images are registered to align each image and to compensate for differences between adjacent images. After registration, corresponding features are identified between consecutive sets of 2D images. For each corresponding feature identified in each set of 2D images, an orthographic projection model is applied, such that rays are projected through each point in the feature. The intersection points of the rays are plotted in a 3D image space. All of the 2D images are processed in the same manner, such that a resulting 3D image of a tumor is generated.
    Type: Application
    Filed: July 10, 2012
    Publication date: March 21, 2013
    Inventors: Dimitris Metaxas, Debabrata Banerjee, Xiaolei Huang
  • Patent number: 8218836
    Abstract: A system and methods for generating 3D images (24) from 2D bioluminescent images (22) and visualizing tumor locations are provided. A plurality of 2D bioluminescent images of a subject are acquired during a complete revolution of an imaging system about the subject, using any suitable bioluminescent imaging system. After imaging, the 2D images are registered (20) according to the rotation axis to align each image and to compensate for differences between adjacent images. After registration (20), corresponding features are identified between consecutive sets of 2D images (22). For each corresponding feature identified in each set of 2D images, an orthographic projection model (24) is applied, such that rays are projected through each point in the feature. The intersection points of the rays are plotted in a 3D image space, such that a resulting 3D image of a tumor is generated. The 3D image can be registered with a reference image of the subject, so that the shape and location of the tumor can be precisely visualized with respect to the subject.
    Type: Grant
    Filed: August 31, 2006
    Date of Patent: July 10, 2012
    Assignee: Rutgers, The State University of New Jersey
    Inventors: Dimitris Metaxas, Debabrata Banerjee, Xiaolei Huang
  • Patent number: 7903857
    Abstract: Disclosed is robust click-point linking, defined as estimating a single point-wise correspondence between data domains given a user-specified point in one domain or as an interactive localized registration of a monomodal data pair. To link visually dissimilar local regions, Geometric Configuration Context (GCC) is introduced. GCC represents the spatial likelihood of the point corresponding to the click-point in the other domain. A set of scale-invariant saliency features is pre-computed for both data sets. GCC is modeled by a Gaussian mixture whose component mean and width are determined as a function of the neighboring saliency features and their correspondences. This allows correspondence of dissimilar parts using only geometrical relations without comparing the local appearances. GCC models are derived for three transformation classes: pure translation, scaling and translation, and similarity transformation.
    Type: Grant
    Filed: February 12, 2007
    Date of Patent: March 8, 2011
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Xiaolei Huang, Arun Krishnan, Kazunori Okada, Xiang Zhou
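
Click-point linking, as described above, combines the neighboring saliency-feature correspondences into a Geometric Configuration Context that expresses where the clicked point should fall in the other domain. The toy Python sketch below shows the pure-translation case with a simple Gaussian mixture; the feature positions, weighting, and widths are made-up illustrations.

```python
# Toy sketch of click-point linking (pure-translation case): each saliency-feature
# correspondence (a_i -> b_i) near the click point votes for b_i + (click - a_i) in
# the other domain, and the votes form a Gaussian mixture whose component widths grow
# with distance from the click point. All numbers here are invented for illustration.

import numpy as np

click = np.array([40.0, 25.0])                    # user-specified point in domain A

# Pre-computed saliency feature correspondences: positions in domain A and domain B.
features_a = np.array([[35.0, 20.0], [50.0, 30.0], [42.0, 60.0]])
features_b = np.array([[135.0, 22.0], [150.0, 32.0], [142.0, 62.0]])

# Each correspondence predicts where the click point should land in domain B.
predictions = features_b + (click - features_a)

# Mixture weights and widths depend on how close each feature is to the click point.
dists = np.linalg.norm(features_a - click, axis=1)
weights = np.exp(-dists / dists.mean())
weights /= weights.sum()
sigmas = 1.0 + 0.2 * dists                         # farther features give vaguer votes

def gcc_likelihood(point):
    """Spatial likelihood in domain B that `point` corresponds to the click point."""
    diff = np.linalg.norm(predictions - point, axis=1)
    return float(np.sum(weights * np.exp(-0.5 * (diff / sigmas) ** 2) / sigmas ** 2))

# Take the highest-weight component mean as a simple point estimate of the link.
estimate = predictions[np.argmax(weights)]
print("estimated corresponding point:", estimate, "likelihood:", gcc_likelihood(estimate))
```
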
  • Patent number: 7876938
    Abstract: A method for segmenting digitized images includes providing a training set comprising a plurality of digitized whole-body images, providing labels on anatomical landmarks in each image of said training set, aligning each said training set image, generating positive and negative training examples for each landmark by cropping the aligned training volumes into one or more cropping windows of different spatial scales, and using said positive and negative examples to train a detector for each landmark at one or more spatial scales ranging from a coarse resolution to a fine resolution, wherein the spatial relationship between the cropping windows of a coarse resolution detector and a fine resolution detector is recorded.
    Type: Grant
    Filed: October 3, 2006
    Date of Patent: January 25, 2011
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Xiaolei Huang, Xiang Zhou, Anna Jerebko, Arun Krishnan, Haiying Guan, Toshiro Kubota, Vaclav Potesil
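
The training procedure above crops positive windows around each labeled landmark and negative windows elsewhere, at several spatial scales, to train coarse-to-fine detectors. The Python sketch below generates such example windows from a synthetic image; the window sizes and sampling counts are arbitrary choices, not the patented parameters.

```python
# Schematic sketch of the training-data step: from an aligned image with a labeled
# landmark, crop positive windows around the landmark and negative windows elsewhere,
# at more than one spatial scale (coarse to fine).

import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))             # stand-in for one aligned training image
landmark = (120, 90)                       # labeled landmark (row, col)

def crop(img, center, size):
    r, c = center
    half = size // 2
    return img[max(r - half, 0):r + half, max(c - half, 0):c + half]

def training_examples(img, center, scales=(64, 32), negatives_per_scale=5):
    """Yield (scale, label, window): positives at the landmark, negatives elsewhere."""
    for size in scales:                    # coarse-to-fine window sizes
        yield size, 1, crop(img, center, size)
        for _ in range(negatives_per_scale):
            r = rng.integers(size, img.shape[0] - size)
            c = rng.integers(size, img.shape[1] - size)
            if abs(r - center[0]) > size or abs(c - center[1]) > size:
                yield size, 0, crop(img, (r, c), size)   # keep only windows off the landmark

examples = list(training_examples(image, landmark))
print([(s, lbl, w.shape) for s, lbl, w in examples[:4]])
```
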
  • Publication number: 20090148013
    Abstract: A system and methods for generating 3D images (24) from 2D bioluminescent images (22) and visualizing tumor locations are provided. A plurality of 2D bioluminescent images of a subject are acquired during a complete revolution of an imaging system about the subject, using any suitable bioluminescent imaging system. After imaging, the 2D images are registered (20) according to the rotation axis to align each image and to compensate for differences between adjacent images. After registration (20), corresponding features are identified between consecutive sets of 2D images (22). For each corresponding feature identified in each set of 2D images, an orthographic projection model (24) is applied, such that rays are projected through each point in the feature. The intersection points of the rays are plotted in a 3D image space, such that a resulting 3D image of a tumor is generated. The 3D image can be registered with a reference image of the subject, so that the shape and location of the tumor can be precisely visualized with respect to the subject.
    Type: Application
    Filed: August 31, 2006
    Publication date: June 11, 2009
    Inventors: Dimitris Metaxas, Debabrata Banerjee, Xiaolei Huang
  • Patent number: 7409108
    Abstract: A method of aligning a pair of images with a first image and a second image, wherein said images comprise a plurality of intensities corresponding to a domain of points in a D-dimensional space includes identifying feature points on both images using the same criteria, computing a feature vector for each feature point, measuring a feature dissimilarity for each pair of feature vectors, wherein a first feature vector of each pair is associated with a first feature point on the first image, and a second feature vector of each pair is associated with a second feature point on the second image. A correspondence mapping for each pair of feature points is determined using the feature dissimilarity associated with each feature point pair, and an image transformation is defined to align the second image with the first image using one or more pairs of feature points that are least dissimilar.
    Type: Grant
    Filed: September 21, 2004
    Date of Patent: August 5, 2008
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Chenyang Xu, Xiaolei Huang, Frank Sauer, Christophe Chefd'hotel, Jens Gühring, Sebastian Vogt, Yiyong Sun
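
The alignment method above pairs feature points by descriptor dissimilarity and fits a transformation from the least dissimilar pairs. The compact Python sketch below reproduces that recipe on synthetic points with a translation-only transform; the descriptors and the fitting step (a simple mean displacement) are illustrative simplifications.

```python
# Compact sketch of the alignment recipe: compute a feature vector per feature point
# in each image, measure pairwise dissimilarity, keep the least dissimilar pairs as
# correspondences, and fit a transform (here a translation) from those pairs.

import numpy as np

rng = np.random.default_rng(2)

# Feature points and descriptors in the first image (positions, feature vectors).
pts1 = rng.uniform(0, 100, size=(6, 2))
desc1 = rng.random((6, 8))

# Second image: same points shifted by a translation, descriptors mildly perturbed.
true_shift = np.array([12.0, -7.0])
pts2 = pts1 + true_shift
desc2 = desc1 + 0.01 * rng.random((6, 8))

# Dissimilarity of every (point in image 1, point in image 2) descriptor pair.
dissim = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)

# Correspondence mapping: each image-1 point maps to its least dissimilar image-2 point.
match = dissim.argmin(axis=1)

# Define the image transformation from the matched pairs (mean displacement).
estimated_shift = (pts2[match] - pts1).mean(axis=0)
print("estimated translation:", estimated_shift)   # should be close to true_shift
```
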
  • Patent number: 7362920
    Abstract: A method of aligning a pair of images includes providing a pair of images with a first image and a second image, wherein the images comprise a plurality of intensities corresponding to a domain of points in a D-dimensional space. Salient feature regions are identified in both the first image and the second image, a correspondence between each pair of salient feature regions is hypothesized, wherein a first region of each pair is on the first image and a second region of each pair is on the second image, the likelihood of the hypothesized correspondence of each pair of feature regions is measured, and a joint correspondence is determined from a set of pairs of feature regions with the greatest likelihood of correspondence.
    Type: Grant
    Filed: September 21, 2004
    Date of Patent: April 22, 2008
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Chenyang Xu, Xiaolei Huang, Yiyong Sun, Frank Sauer
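
The region-based variant above hypothesizes correspondences between salient feature regions, scores each hypothesis with a likelihood, and keeps the joint correspondence with the greatest likelihood. The toy Python sketch below scores hypotheses by intensity-histogram similarity and selects pairs greedily; both choices are stand-ins for the disclosed likelihood measure and joint-selection scheme.

```python
# Toy sketch of region correspondence: hypothesize every pairing of a salient region
# from image 1 with one from image 2, score each hypothesis with a likelihood (here:
# similarity of intensity histograms), and keep the highest-likelihood joint pairing.

import numpy as np

rng = np.random.default_rng(3)

def region_histogram(region, bins=16):
    hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0), density=True)
    return hist

# Salient feature regions (small patches) identified in each image; image 2 contains
# noisy, permuted counterparts of the image-1 regions.
regions1 = [rng.random((20, 20)) for _ in range(3)]
regions2 = [regions1[i] + 0.02 * rng.random((20, 20)) for i in (2, 0, 1)]

# Likelihood of each hypothesized pair: higher when histograms are more alike.
likelihood = np.array([[np.exp(-np.sum(np.abs(region_histogram(a) - region_histogram(b))))
                        for b in regions2] for a in regions1])

# Greedy joint correspondence: repeatedly take the remaining pair with greatest likelihood.
pairs = []
mask = likelihood.copy()
for _ in range(len(regions1)):
    i, j = np.unravel_index(np.argmax(mask), mask.shape)
    pairs.append((i, j))
    mask[i, :] = -np.inf
    mask[:, j] = -np.inf
print("joint correspondence (image-1 region -> image-2 region):", pairs)
```
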
  • Publication number: 20070242901
    Abstract: A framework is disclosed for robust click-point linking, defined as estimating a single point-wise correspondence between a pair of data domains given a user-specified point in one domain. It can also be interpreted as robust and efficient interactive localized registration of a monomodal data pair. To link visually dissimilar local regions, the concept of Geometric Configuration Context (GCC) is introduced. GCC represents the spatial likelihood of the point corresponding to the click-point in the other domain. A set of scale-invariant saliency features is pre-computed for both data sets, and GCC is modeled by a Gaussian mixture whose component mean and width are determined as a function of the neighboring saliency features and their correspondences. This allows correspondence of dissimilar parts using only geometrical relations without comparing the local appearances. GCC models are derived for three transformation classes: 1) pure translation, 2) scaling and translation, and 3) similarity transformation.
    Type: Application
    Filed: February 12, 2007
    Publication date: October 18, 2007
    Inventors: Xiaolei Huang, Arun Krishnan, Kazunori Okada, Xiang Zhou
  • Publication number: 20070081712
    Abstract: A method for segmenting digitized images includes providing a training set comprising a plurality of digitized whole-body images, providing labels on anatomical landmarks in each image of said training set, aligning each said training set image, generating positive and negative training examples for each landmark by cropping the aligned training volumes into one or more cropping windows of different spatial scales, and using said positive and negative examples to train a detector for each landmark at one or more spatial scales ranging from a coarse resolution to a fine resolution, wherein the spatial relationship between the cropping windows of a coarse resolution detector and a fine resolution detector is recorded.
    Type: Application
    Filed: October 3, 2006
    Publication date: April 12, 2007
    Inventors: Xiaolei Huang, Xiang Zhou, Anna Jerebko, Arun Krishnan, Haiying Guan, Toshiro Kubota, Vaclav Potesil
  • Publication number: 20050249434
    Abstract: A method and system for non-rigidly registering a fixed to a moving image utilizing a B-Spline based free form deformation (FFD) model is disclosed. The methodology utilizes sparse feature correspondences to estimate an elastic deformation field in a closed form. In a multi-resolution manner, the method is able to recover small to large non-rigid deformations. The resulting deformation field is globally smooth and guarantees one-to-one mapping between the images being registered.
    Type: Application
    Filed: April 5, 2005
    Publication date: November 10, 2005
    Inventors: Chenyang Xu, Xiaolei Huang, Yiyong Sun
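
The registration method above represents the deformation with a B-Spline based free-form deformation (FFD) model estimated from sparse feature correspondences. The minimal Python sketch below only evaluates such an FFD from hand-set control-point displacements, to show how the model yields a smooth field; the closed-form estimation from correspondences is not reproduced.

```python
# Minimal sketch of the free-form deformation (FFD) model: a sparse grid of
# control-point displacements, interpolated with cubic B-spline basis functions,
# yields a smooth dense deformation field. Control-point values are set by hand.

import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis weights for a fractional coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3 / 6,
                     (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
                     (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
                     u ** 3 / 6])

# Control-point grid of 2D displacements with spacing h pixels (illustrative values).
h = 20.0
control = np.zeros((8, 8, 2))
control[3, 3] = [4.0, -2.0]          # one control point pushed to create a local warp

def deform(x, y):
    """Displacement of image point (x, y) under the B-spline FFD."""
    i, u = int(x / h), (x / h) % 1.0
    j, v = int(y / h), (y / h) % 1.0
    bu, bv = bspline_basis(u), bspline_basis(v)
    d = np.zeros(2)
    for a in range(4):
        for b in range(4):
            d += bu[a] * bv[b] * control[i + a - 1, j + b - 1]
    return d

print(deform(60.0, 60.0))            # near the displaced control point: non-zero shift
print(deform(100.0, 100.0))          # far away: zero displacement
```
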
  • Publication number: 20050094898
    Abstract: A method of aligning a pair of images with (101) a first image and a second image, wherein said images comprise a plurality of intensities corresponding to a domain of points in a D-dimensional space includes identifying (102) feature points on both images using the same criteria, computing (103) a feature vector for each feature point, measuring a feature dissimilarity (104) for each pair of feature vectors, wherein a first feature vector of each pair is associated with a first feature point on the first image, and a second feature vector of each pair is associated with a second feature point on the second image. A correspondence mapping (105) for each pair of feature points is determined using the feature dissimilarity associated with each feature point pair, and an image transformation (106) is defined to align (108) the second image with the first image using one or more pairs of feature points that are least dissimilar.
    Type: Application
    Filed: September 21, 2004
    Publication date: May 5, 2005
    Inventors: Chenyang Xu, Xiaolei Huang, Frank Sauer, Christophe Chefd'hotel, Jens Gühring, Sebastian Vogt, Yiyong Sun
  • Publication number: 20050078881
    Abstract: A method of aligning a pair of images includes providing a pair of images with a first image and a second image, wherein the images comprise a plurality of intensities corresponding to a domain of points in a D-dimensional space. Salient feature regions are identified in both the first image and the second image, a correspondence between each pair of salient feature regions is hypothesized, wherein a first region of each pair is on the first image and a second region of each pair is on the second image, the likelihood of the hypothesized correspondence of each pair of feature regions is measured, and a joint correspondence is determined from a set of pairs of feature regions with the greatest likelihood of correspondence.
    Type: Application
    Filed: September 21, 2004
    Publication date: April 14, 2005
    Inventors: Chenyang Xu, Xiaolei Huang, Yiyong Sun, Frank Sauer
  • Patent number: 6627778
    Abstract: The present invention provides an improved selective hydrogenation process for removing C10-C16 diolefins in the product from dehydrogenation of C10-C16 paraffins to mono-olefins, which process includes bringing the mixture stream of paraffins and olefins containing C10-C16 mono-olefins and C10-C16 diolefins into contact with a specific hydrogenation catalyst in a plurality of hydrogenation reactors connected in series under the reaction conditions for hydrogenation. Hydrogen is injected into each reactor respectively. To convert the diolefins in the mixture stream of paraffins and olefins into mono-olefins, γ-alumina having a specific surface area of 50-300 m²/g and a pore volume of 0.2-2.0 cm³/g is used as the supporter of the hydrogenation catalyst, palladium is supported on the supporter as the main catalyst element and an element selected from silver, gold, tin, lead or potassium is supported on the supporter as the promoter.
    Type: Grant
    Filed: April 19, 2001
    Date of Patent: September 30, 2003
    Assignees: China Petrochemical Corporation, Sinopec, Jinling Petrochemical Corporation
    Inventors: Yi Xu, Peicheng Wu, Yu Wang, Dong Liu, Zhengguo Ling, Xiaolei Huang
  • Publication number: 20020004621
    Abstract: The present invention provides an improved selective hydrogenation process for removing C10-C16 diolefins in the product from dehydrogenation of C10-C16 paraffins to mono-olefins, which process includes bringing the mixture stream of paraffins and olefins containing C10-C16 mono-olefins and C10-C16 diolefins into contact with a specific hydrogenation catalyst in a plurality of hydrogenation reactors connected in series under the reaction conditions for hydrogenation. Hydrogen is injected into each reactor respectively. To convert the diolefins in the mixture stream of paraffins and olefins into mono-olefins, γ-alumina having a specific surface area of 50-300 m²/g and a pore volume of 0.2-2.0 cm³/g is used as the supporter of the hydrogenation catalyst, palladium is supported on the supporter as the main catalyst element and an element selected from silver, gold, tin, lead or potassium is supported on the supporter as the promoter.
    Type: Application
    Filed: April 19, 2001
    Publication date: January 10, 2002
    Inventors: Yi Xu, Peicheng Wu, Yu Wang, Dong Liu, Zhengguo Ling, Xiaolei Huang