Patents by Inventor Huizhong Chen
Huizhong Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11970246
Abstract: A ship cabin loading capacity measurement method and apparatus, comprising: acquiring point cloud measurement data of a ship cabin; optimizing the point cloud measurement data according to a predetermined point cloud data processing rule to generate optimized ship cabin point cloud data; and calculating the ship cabin point cloud data with a predetermined loading capacity calculation rule to obtain ship cabin loading capacity data. In the method of the present invention, the point cloud measurement data can be acquired by a lidar and processed with the predetermined point cloud data processing rule and calculation rule. Because these rules can be deployed in a computer device in advance, the loading capacity of a ship cabin can be obtained quickly and precisely once the point cloud measurement data has been acquired.
Type: Grant
Filed: February 5, 2021
Date of Patent: April 30, 2024
Assignee: Zhoushan Institute of Calibration and Testing for Quality and Technology Supervision
Inventors: Huadong Hao, Cunjun Li, Xianlei Chen, Haolei Shi, Ze'nan Wu, Junxue Chen, Zhengqian Shen, Yingying Wang, Huizhong Xu
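The capacity computation described in the abstract can be illustrated with a simplified sketch: rasterize the scanned inner surface onto an XY grid and integrate the depth below a reference deck height. This is a stand-in for the patented processing and calculation rules, not the disclosed method; the grid approach, function name, and outlier filter are all illustrative assumptions.

```python
# Hedged sketch (not the patented computation rule): estimate a cabin's
# loading capacity from a lidar point cloud by rasterizing the scanned
# inner surface onto an XY grid and integrating depth below a reference
# deck height. The simple z-filter stands in for the "optimization" step.

def estimate_capacity(points, cell=0.5, deck_z=0.0):
    """points: iterable of (x, y, z) lidar returns from the cabin floor
    and walls, with z <= deck_z inside the hold; cell: grid cell size in
    meters. Returns an estimated volume in cubic meters."""
    depth = {}  # (i, j) grid cell -> deepest z seen in that cell
    for x, y, z in points:
        if z > deck_z:          # crude outlier rejection above deck level
            continue
        key = (int(x // cell), int(y // cell))
        depth[key] = min(depth.get(key, deck_z), z)
    # Sum (deck height - floor height) * cell area over occupied cells.
    return sum((deck_z - z) * cell * cell for z in depth.values())

# A flat 1 m-deep floor sampled over a 2 m x 2 m patch: about 4 m^3.
pts = [(x * 0.5, y * 0.5, -1.0) for x in range(4) for y in range(4)]
print(round(estimate_capacity(pts), 2))
```

Because both the grid rule and the integration rule are plain functions, they could be deployed on a computer in advance, matching the abstract's point that capacity falls out immediately once the scan is acquired.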
-
Patent number: 11960296
Abstract: A method executable by an autonomous mobile device includes moving in a work environment, obtaining environmental data acquired by a sensing device, and determining whether the sensing device is in a suspected ineffective state based on the environmental data. The method also includes, based on a determination that the sensing device is in the suspected ineffective state, rotating at a same location for a first predetermined spin angle. The method also includes obtaining an estimated rotation angle based on one or more motion parameters acquired by a dead reckoning sensor, comparing the estimated rotation angle with the first predetermined spin angle, and, based on a determination that a difference between the estimated rotation angle and the first predetermined spin angle is greater than a first predetermined threshold value, executing escape instructions to move backward for a first predetermined distance and move along a curve or a folded line.
Type: Grant
Filed: August 16, 2021
Date of Patent: April 16, 2024
Assignee: QFEELTECH (BEIJING) CO., LTD.
Inventors: Shuailing Li, Wulin Tian, Huizhong An, Xin Wu, Yiming Zhang, Zhen Chen
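The check described in the abstract above amounts to comparing a commanded spin against the rotation the dead reckoning sensor reports, and escaping when they disagree too much. A minimal sketch, with function names and the threshold chosen purely for illustration:

```python
# Illustrative sketch of the suspected-slip check; names and thresholds
# are assumptions, not values from the patent claims.

def is_stuck(commanded_spin_deg, estimated_rotation_deg, threshold_deg=15.0):
    """Compare the commanded spin angle against the rotation estimated by
    the dead reckoning sensor; a large mismatch suggests the wheels
    slipped (e.g. the robot is caught on an obstacle)."""
    return abs(estimated_rotation_deg - commanded_spin_deg) > threshold_deg

def escape_plan(backward_distance_m=0.3):
    """Escape instructions: back up a fixed distance, then follow a
    curved (or folded-line) path away from the obstruction."""
    return [("move_backward", backward_distance_m), ("follow_curve", None)]

# Dead reckoning reported only 40 degrees of a commanded 90-degree spin.
print(is_stuck(90.0, 40.0))   # True
print(is_stuck(90.0, 88.0))   # False
```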
-
Publication number: 20240068856
Abstract: A device for measuring volume of spherical tanks, includes a base, four supporting rods, a driving mechanism, a rotating disc and a counterweight mechanism. The base is provided with a first through hole and a circular fixing base with a second through hole. The supporting rods are circumferentially provided on the fixing base, with a sliding rod slidably provided at the lower side. An outer end of the sliding rod is provided with a limiting plate. The driving mechanism can drive the sliding rods to move simultaneously in radial directions. The rotating disc with a third through hole is arranged on the fixing base, and is rotatably provided with a reel around which a steel tape is wound. A free end of the steel tape is provided with a balancing disc whose opposite sides are provided with laser rangefinders. The counterweight mechanism can keep the balancing disc stable.
Type: Application
Filed: November 8, 2023
Publication date: February 29, 2024
Inventors: Huadong HAO, Junxue CHEN, Xianlei CHEN, Haolei SHI, Zenan WU, Cunjun LI, Huizhong XU, Yeyong WANG, Zhengqian SHEN, Liang LI, Yan ZHANG
-
Publication number: 20230274527
Abstract: Systems and methods of the present disclosure are directed to a computer-implemented method for training a machine-learned multi-class object classification model with partially labeled training data. The method can include obtaining image data depicting objects and ground truth data comprising a subset of object class annotations respectively associated with a subset of object classes of a plurality of object classes. The method can include processing the image data with the machine-learned multi-class object classification model to obtain object classification data. The method can include evaluating a loss function that evaluates a multi-class classification loss and adjusting one or more parameters of the multi-class object classification model based on the loss function.
Type: Application
Filed: October 6, 2020
Publication date: August 31, 2023
Inventors: Huizhong Chen, Zhichao Lu, Jonathan Zwi Ben-Meshulam
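The key idea in the abstract above is that the loss is only informed by the subset of classes that actually carry annotations. One common way to realize that (offered here as a hedged sketch, not the disclosed loss) is a per-class binary loss masked to the annotated subset:

```python
import math

# Hedged sketch of training with partial labels: treat multi-class
# classification as per-class binary decisions and evaluate the loss only
# over classes annotated for the image, skipping the rest. Names are
# illustrative, not from the disclosed model.

def partial_bce(probs, labels):
    """probs: {class: predicted probability}; labels: {class: 0 or 1} for
    the annotated subset only. Unannotated classes contribute no loss."""
    total, n = 0.0, 0
    for cls, y in labels.items():                 # annotated classes only
        p = min(max(probs[cls], 1e-7), 1 - 1e-7)  # clamp before log
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        n += 1
    return total / n if n else 0.0

probs = {"cat": 0.9, "dog": 0.2, "car": 0.5}
labels = {"cat": 1, "dog": 0}   # "car" is unannotated, so it is skipped
print(round(partial_bce(probs, labels), 4))
```

Gradients from such a masked loss never push on unannotated classes, which is what lets partially labeled datasets be pooled for one multi-class model.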
-
Publication number: 20230118460
Abstract: A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items correspond to the second set of media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly LIBA, Nikhil KARNAD, Nori KANAZAWA, Yael Pritch KNAAN, Huizhong CHEN, Longqi CAI
-
Publication number: 20230118361
Abstract: A media application receives user input that indicates one or more objects to be erased from a media item. The media application translates the user input to a bounding box. The media application provides a crop of the media item based on the bounding box to a segmentation machine-learning model. The segmentation machine-learning model outputs a segmentation mask for one or more segmented objects in the crop of the media item and a corresponding segmentation score that indicates a quality of the segmentation mask.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly LIBA, Navin SARMA, Yael Pritch KNAAN, Alexander SCHIFFHAUER, Longqi CAI, David JACOBS, Huizhong CHEN, Siyang LI, Bryan FELDMAN
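The "translates the user input to a bounding box" step above can be sketched simply: take the extent of the user's gesture points, pad it so the box comfortably covers the object, and clamp to the image bounds. The padding ratio and all names here are assumptions for illustration, not details from the application:

```python
# Minimal sketch of turning free-form user input (e.g. stroke points over
# the object to erase) into a padded bounding box used to crop the media
# item. Padding ratio and names are illustrative assumptions.

def stroke_to_bbox(points, pad=0.1, width=None, height=None):
    """points: [(x, y), ...] from the user's gesture. Returns a box
    (x0, y0, x1, y1) expanded by `pad` of the box size on each side and
    clamped to the image bounds when width/height are given."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    px, py = (x1 - x0) * pad, (y1 - y0) * pad
    x0, y0, x1, y1 = x0 - px, y0 - py, x1 + px, y1 + py
    if width is not None:
        x0, x1 = max(0, x0), min(width, x1)
    if height is not None:
        y0, y1 = max(0, y0), min(height, y1)
    return (x0, y0, x1, y1)

print(stroke_to_bbox([(10, 20), (30, 60)], pad=0.1))
```

The crop defined by this box is what would then be handed to the segmentation model, keeping inference cheap relative to segmenting the full frame.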
-
Publication number: 20220254137
Abstract: A computing system for detecting objects in an image can perform operations including generating an image pyramid that includes a first level corresponding with the image at a first resolution and a second level corresponding with the image at a second resolution. The operations can include tiling the first level and the second level by dividing the first level into a first plurality of tiles and the second level into a second plurality of tiles; inputting the first plurality of tiles and the second plurality of tiles into a machine-learned object detection model; receiving, as an output of the machine-learned object detection model, object detection data that includes bounding boxes respectively defined with respect to individual ones of the first plurality of tiles and the second plurality of tiles; and generating image object detection output by mapping the object detection data onto an image space of the image.
Type: Application
Filed: August 5, 2019
Publication date: August 11, 2022
Inventors: Jilin Tu, Jiang Wang, Huizhong Chen, Xiangxin Zhu, Shengyang Dai
-
Patent number: 10929945
Abstract: The present disclosure provides image capture devices and associated methods that feature intelligent use of hardware-generated statistics. An example image capture device can include an imaging hardware pipeline that generates frames of imagery. The imaging hardware pipeline can generate one or more hardware-generated statistics based at least in part on, for example, the raw image data captured by the image sensor or intermediate image data within the pipeline. The image capture device can analyze the hardware-generated statistics to determine one or more metrics for the raw image data or the image. The image capture device can determine a downstream operation of the image capture device relative to the image based at least in part on the metrics determined from the hardware-generated statistics.
Type: Grant
Filed: July 28, 2017
Date of Patent: February 23, 2021
Assignee: Google LLC
Inventors: Suk Hwan Lim, Huizhong Chen, David Chen, Hsin-I Liu
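The pattern in the abstract above is to gate expensive downstream work on statistics the imaging hardware already produces for free. As a hedged illustration only (the statistic, thresholds, and action names are assumptions, not the patent's), a mean-luma value from the pipeline might pick the frame's fate:

```python
# Illustrative sketch of statistic-driven gating: use a cheap
# hardware-generated statistic (here, a mean-luma value an ISP typically
# computes anyway) to choose a downstream action for a frame without
# running full analysis. Thresholds are made-up examples.

def downstream_action(mean_luma, low=16, high=235):
    """mean_luma: average luminance (0-255) reported by the pipeline.
    Badly exposed frames are dropped early; the rest proceed."""
    if mean_luma < low:
        return "discard"        # severely underexposed
    if mean_luma > high:
        return "discard"        # blown out
    return "process"            # worth running downstream analysis

print(downstream_action(8))     # prints "discard"
print(downstream_action(120))   # prints "process"
```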
-
Publication number: 20190035047
Abstract: The present disclosure provides image capture devices and associated methods that feature intelligent use of hardware-generated statistics. An example image capture device can include an imaging hardware pipeline that generates frames of imagery. The imaging hardware pipeline can generate one or more hardware-generated statistics based at least in part on, for example, the raw image data captured by the image sensor or intermediate image data within the pipeline. The image capture device can analyze the hardware-generated statistics to determine one or more metrics for the raw image data or the image. The image capture device can determine a downstream operation of the image capture device relative to the image based at least in part on the metrics determined from the hardware-generated statistics.
Type: Application
Filed: July 28, 2017
Publication date: January 31, 2019
Inventors: Suk Hwan Lim, Huizhong Chen, David Chen, Hsin-I Liu
-
Patent number: 9542934
Abstract: A computer-implemented method performed in connection with a computerized system incorporating a processing unit and a memory, the computer-implemented method involving: using the processing unit to generate a multi-modal language model for co-occurrence of spoken words and displayed text in a plurality of videos; selecting at least a portion of a first video; extracting a plurality of spoken words from the selected portion of the first video; extracting a first displayed text from the selected portion of the first video; and using the processing unit and the generated multi-modal language model to rank the extracted plurality of spoken words based on probability of occurrence conditioned on the extracted first displayed text.
Type: Grant
Filed: February 27, 2014
Date of Patent: January 10, 2017
Assignee: FUJI XEROX CO., LTD.
Inventors: Matthew L. Cooper, Dhiraj Joshi, Huizhong Chen
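The ranking step in the abstract above conditions each spoken word's score on the text shown on screen. A hedged sketch under a simple count-based model (the corpus, counts, and function names are entirely made up for illustration):

```python
from collections import Counter

# Illustrative sketch of the ranking step: given co-occurrence counts of
# (displayed word, spoken word) pairs gathered from a video corpus, rank
# the spoken words extracted from a clip by how strongly they co-occur
# with the clip's displayed text. A stand-in for the multi-modal language
# model, not the patented model itself.

def rank_spoken_words(spoken, displayed, cooc):
    """cooc: Counter mapping (displayed_word, spoken_word) -> count.
    Scores each spoken word by summed co-occurrence with the displayed
    words (proportional to a conditional probability under this simple
    count model) and returns the words sorted best-first."""
    def score(word):
        return sum(cooc[(d, word)] for d in displayed)
    return sorted(spoken, key=score, reverse=True)

cooc = Counter({("neural", "network"): 9, ("neural", "banana"): 1,
                ("slide", "network"): 3})
print(rank_spoken_words(["banana", "network"], ["neural", "slide"], cooc))
```

In practice such a ranking can, for example, prefer speech-recognition hypotheses that agree with slide text visible in a lecture video.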
-
Patent number: 9253511
Abstract: Systems and methods are described that can provide users with personalized video content feeds. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data can be utilized in the generation of personalized playlists. In the context of news programming, the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience. In several embodiments, processes for linking video segments to additional sources of data can be implemented as part of a video search engine service.
Type: Grant
Filed: July 7, 2014
Date of Patent: February 2, 2016
Assignee: The Board of Trustees of the Leland Stanford Junior University
Inventors: David Mo Chen, Huizhong Chen, Maryam Daneshi, Andre Filgueiras de Araujo, Bernd Girod, Shanghsuan Tsai, Peter Vajda, Matthew Chuck-Jun Yu
-
Publication number: 20160014482
Abstract: Next-generation media consumption is likely to be more personalized, device agnostic, and pooled from many different sources. Systems and methods in accordance with embodiments of the invention can provide users with personalized video content feeds providing the video content that matters most to them. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. In many embodiments, video clips from video segments can be ordered and concatenated based on importance in order to generate news briefs.
Type: Application
Filed: July 13, 2015
Publication date: January 14, 2016
Inventors: David Mo Chen, Huizhong Chen, Maryam Daneshi, Andre Filgueiras de Araujo, Bernd Girod, Shanghsuan Tsai, Peter Vajda, Matthew Chuck-Jun Yu
-
Publication number: 20150293928
Abstract: Systems and methods are described that can provide users with personalized video content feeds. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data can be utilized in the generation of personalized playlists. In the context of news programming, the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience. In several embodiments, processes for linking video segments to additional sources of data can be implemented as part of a video search engine service.
Type: Application
Filed: July 7, 2014
Publication date: October 15, 2015
Inventors: David Mo Chen, Huizhong Chen, Maryam Daneshi, Andre Filgueiras de Araujo, Bernd Girod, Shanghsuan Tsai, Peter Vajda, Matthew Chuck-Jun Yu
-
Publication number: 20150296228
Abstract: Systems and methods are described that can provide users with personalized video content feeds. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data can be utilized in the generation of personalized playlists. In the context of news programming, the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience. In several embodiments, processes for linking video segments to additional sources of data can be implemented as part of a video search engine service.
Type: Application
Filed: July 7, 2014
Publication date: October 15, 2015
Inventors: David Mo Chen, Huizhong Chen, Maryam Daneshi, Andre Filgueiras de Araujo, Bernd Girod, Shanghsuan Tsai, Peter Vajda, Matthew Chuck-Jun Yu
-
Publication number: 20150293995
Abstract: Systems and methods are described that can provide users with personalized video content feeds. In several embodiments, a multi-modal segmentation process is utilized that relies upon cues derived from video, audio and/or text data present in a video data stream. In a number of embodiments, video streams from a variety of sources are segmented. Links are identified between video segments and between video segments and online articles containing additional information relevant to the video segments. The additional information obtained by linking a video segment to an additional source of data can be utilized in the generation of personalized playlists. In the context of news programming, the dynamic mixing and aggregation of news videos from multiple sources can greatly enrich the news watching experience. In several embodiments, processes for linking video segments to additional sources of data can be implemented as part of a video search engine service.
Type: Application
Filed: July 7, 2014
Publication date: October 15, 2015
Inventors: David Mo Chen, Huizhong Chen, Maryam Daneshi, Andre Filgueiras de Araujo, Bernd Girod, Shanghsuan Tsai, Peter Vajda, Matthew Chuck-Jun Yu
-
Publication number: 20150243276
Abstract: A computer-implemented method performed in connection with a computerized system incorporating a processing unit and a memory, the computer-implemented method involving: using the processing unit to generate a multi-modal language model for co-occurrence of spoken words and displayed text in a plurality of videos; selecting at least a portion of a first video; extracting a plurality of spoken words from the selected portion of the first video; extracting a first displayed text from the selected portion of the first video; and using the processing unit and the generated multi-modal language model to rank the extracted plurality of spoken words based on probability of occurrence conditioned on the extracted first displayed text.
Type: Application
Filed: February 27, 2014
Publication date: August 27, 2015
Applicant: FUJI XEROX CO., LTD.
Inventors: Matthew L. Cooper, Dhiraj Joshi, Huizhong Chen
-
Patent number: 6268198
Abstract: The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
Type: Grant
Filed: October 12, 2000
Date of Patent: July 31, 2001
Assignee: University of Georgia Research Foundation Inc.
Inventors: Xin-Liang Li, Lars G. Ljungdahl, Huizhong Chen
-
Patent number: 6190189
Abstract: The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
Type: Grant
Filed: April 5, 1999
Date of Patent: February 20, 2001
Assignee: University of Georgia Research Foundation, Inc.
Inventors: Xin-Liang Li, Lars G. Ljungdahl, Huizhong Chen
-
Patent number: 6184018
Abstract: Provided is a novel β-glucosidase from Orpinomyces sp. PC2, nucleotide sequences encoding the mature protein and the precursor protein, and methods for recombinant production of this β-glucosidase.
Type: Grant
Filed: May 6, 1999
Date of Patent: February 6, 2001
Assignee: University of Georgia Research Foundation, Inc.
Inventors: Xin-Liang Li, Lars G. Ljungdahl, Huizhong Chen, Eduardo A. Ximenes
-
Patent number: 6114158
Abstract: A cDNA (1,520 bp), designated celF, consisting of an open reading frame (ORF) encoding a polypeptide (CelF) of 432 amino acids was isolated from a cDNA library of the anaerobic rumen fungus Orpinomyces PC-2 constructed in Escherichia coli. Analysis of the deduced amino acid sequence showed that, starting from the N-terminus, CelF consists of a signal peptide and a cellulose binding domain (CBD) followed by an extremely Asn-rich linker region which separates the CBD from the catalytic domain; the latter is located at the C-terminus. The catalytic domain of CelF is highly homologous to CelA and CelC of Orpinomyces PC-2, to CelA of Neocallimastix patriciarum, and also to cellobiohydrolase IIs (CBHIIs) from aerobic fungi. However, like CelA of Neocallimastix patriciarum, CelF does not have the noncatalytic repeated peptide domain (NCRPD) found in CelA and CelC from the same organism.
Type: Grant
Filed: July 17, 1998
Date of Patent: September 5, 2000
Assignee: University of Georgia Research Foundation, Inc.
Inventors: Xin-Liang Li, Huizhong Chen, Lars G. Ljungdahl