Patents Examined by Feng Niu
  • Patent number: 11967175
    Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting faces comprised in each video frame in a video sequence, and extracting corresponding facial ROIs, so as to obtain facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of location information of facial feature points of the facial pictures; inputting the aligned facial pictures into a residual neural network, and extracting spatial features of facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit, and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer, and classifying and recognizing the facial expressions.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: April 23, 2024
    Assignee: CENTRAL CHINA NORMAL UNIVERSITY
    Inventors: Sannyuya Liu, Zongkai Yang, Xiaoliang Zhu, Zhicheng Dai, Liang Zhao
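    Illustrative sketch (not the patented implementation) of the pipeline US 11967175 describes: per-frame spatial features from a residual CNN over aligned face crops, an attention step that fuses them, a GRU for temporal features, and a fully connected classifier. The backbone choice, layer sizes, and the attention form are assumptions for illustration.
    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class ExpressionRecognizer(nn.Module):
        def __init__(self, num_classes: int = 7, hidden: int = 256):
            super().__init__()
            backbone = resnet18(weights=None)
            self.spatial = nn.Sequential(*list(backbone.children())[:-1])  # residual CNN, outputs (B*T, 512, 1, 1)
            self.attn = nn.Sequential(nn.Linear(512, 512), nn.Tanh(),
                                      nn.Linear(512, 1))                   # per-frame attention score
            self.gru = nn.GRU(512, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)

        def forward(self, clips: torch.Tensor) -> torch.Tensor:
            # clips: (B, T, 3, H, W) aligned face crops from the video frames
            b, t = clips.shape[:2]
            feats = self.spatial(clips.flatten(0, 1)).flatten(1)   # spatial features per frame
            feats = feats.view(b, t, -1)
            weights = torch.softmax(self.attn(feats), dim=1)       # attention weights over time
            fused = feats * weights                                # attention-fused features
            temporal, _ = self.gru(fused)                          # temporal features
            return self.fc(temporal[:, -1])                        # expression class logits

    logits = ExpressionRecognizer()(torch.randn(2, 16, 3, 112, 112))
    print(logits.shape)  # torch.Size([2, 7])
    ```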
  • Patent number: 11962875
    Abstract: A plastic item, such as a beverage bottle, conveys two distinct digital watermarks, encoded using two distinct signaling protocols. A first, printed label watermark conveys a retailing payload, including a Global Trade Item Number (GTIN) used by a point-of-sale scanner in a retail store to identify and price the item when presented for checkout. A second, plastic texture watermark conveys a recycling payload, including data identifying the composition of the plastic. The use of two different signaling protocols assures that a point-of-sale scanner will not spend its limited time and computational resources working to decode the recycling watermark, which lacks the data needed for retail checkout. In some embodiments, a recycling apparatus makes advantageous use of both types of watermarks to identify the plastic composition of the item (e.g., relating GTIN to plastic type using an associated database), thereby increasing the fraction of items that are correctly identified for sorting and recycling.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: April 16, 2024
    Assignee: Digimarc Corporation
    Inventors: Ravi K. Sharma, Tomas Filler, Vojtech Holub, Osama M. Alattar, Hugh L. Brunk, John D. Lord
  • Patent number: 11960576
    Abstract: Videos captured in low light conditions can be processed in order to identify an activity being performed in the video. The processing may use both the video and audio streams for identifying the activity in the low light video. The video portion is processed to generate a darkness-aware feature which may be used to modulate the features generated from the audio and video features. The audio features may be used to generate a video attention feature and the video features may be used to generate an audio attention feature. The audio and video attention features may also be used in modulating the audio and video features. The modulated audio and video features may be used to predict an activity occurring in the video.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: April 16, 2024
    Assignee: Inception Institute of Artificial Intelligence Ltd
    Inventors: Yunhua Zhang, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
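    A minimal sketch, under assumed tensor shapes, of the cross-modal modulation idea in US 11960576: a darkness-aware gate scales each modality's features, and each modality produces an attention vector for the other before the fused features are classified. The dimensions and gating form are illustrative choices, not the patented design.
    ```python
    import torch
    import torch.nn as nn

    class LowLightActivityNet(nn.Module):
        def __init__(self, d: int = 256, num_classes: int = 10):
            super().__init__()
            self.darkness_gate = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())
            self.video_attn_from_audio = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())
            self.audio_attn_from_video = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())
            self.classifier = nn.Linear(2 * d, num_classes)

        def forward(self, video_feat, audio_feat, darkness_feat):
            gate = self.darkness_gate(darkness_feat)           # darkness-aware modulation
            v = video_feat * gate
            a = audio_feat * gate
            v = v * self.video_attn_from_audio(a)              # audio guides video attention
            a = a * self.audio_attn_from_video(v)              # video guides audio attention
            return self.classifier(torch.cat([v, a], dim=-1))  # activity logits

    net = LowLightActivityNet()
    out = net(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 256))
    print(out.shape)  # torch.Size([4, 10])
    ```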
  • Patent number: 11962876
    Abstract: A plastic item, such as a beverage bottle, conveys two distinct digital watermarks, encoded using two distinct signaling protocols. A first, printed label watermark conveys a retailing payload, including a Global Trade Item Number (GTIN) used by a point-of-sale scanner in a retail store to identify and price the item when presented for checkout. A second, plastic texture watermark conveys a recycling payload, including data identifying the composition of the plastic. The use of two different signaling protocols assures that a point-of-sale scanner will not spend its limited time and computational resources working to decode the recycling watermark, which lacks the data needed for retail checkout. In some embodiments, a recycling apparatus makes advantageous use of both types of watermarks to identify the plastic composition of the item (e.g., relating GTIN to plastic type using an associated database), thereby increasing the fraction of items that are correctly identified for sorting and recycling.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: April 16, 2024
    Assignee: Digimarc Corporation
    Inventors: Ravi K. Sharma, Tomas Filler, Vojtech Holub, Osama M. Alattar, Hugh L. Brunk, John D. Lord, Matthew M. Weaver, William Y. Conwell
  • Patent number: 11941901
    Abstract: In some examples, a smartphone or tablet includes a device able to generate a digital identifier of a copy that includes at least one print image. The device includes at least one optoelectronic detection device that detects the at least one print image and creates a representation that includes a multiplicity of pixels. The device further includes a unit that evaluates brightness intensities of the pixels, that segments the representation into multiple fields that each are composed of pixels of the representation, and that provides each of these fields with a piece of position information. The unit also ascertains, for adjoining pixels in each of these fields, a difference in their respective brightness intensities, and displays the differences as a frequency distribution. A display device further displays the digital identifier of the respective copy of the relevant printed product in the form of a graphical and/or alphanumerical display.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: March 26, 2024
    Assignee: KOENIG & BAUER AG
    Inventors: Eugen Gillich, Jan Leif Hoffmann, Jan-Friedrich Ehlenbröker, Uwe Mönks
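    Rough sketch, with an assumed field size and bit depth, of the identifier construction described in US 11941901: segment a grayscale capture into positioned fields, take brightness differences between adjoining pixels in each field, and summarize them as a frequency distribution (histogram) per field.
    ```python
    import numpy as np

    def field_histograms(gray: np.ndarray, field: int = 64, bins: int = 32):
        """Return {(row, col): histogram} of adjoining-pixel brightness differences."""
        h, w = gray.shape
        identifier = {}
        for r in range(0, h - field + 1, field):
            for c in range(0, w - field + 1, field):
                patch = gray[r:r + field, c:c + field].astype(np.int16)
                # differences between vertically and horizontally adjoining pixels
                diffs = np.concatenate([np.diff(patch, axis=0).ravel(),
                                        np.diff(patch, axis=1).ravel()])
                hist, _ = np.histogram(diffs, bins=bins, range=(-255, 255))
                identifier[(r, c)] = hist  # field position plus its frequency distribution
        return identifier

    gray = (np.random.rand(512, 512) * 255).astype(np.uint8)  # stand-in for a detected print image
    print(len(field_histograms(gray)))  # number of fields contributing to the identifier
    ```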
  • Patent number: 11931116
    Abstract: A method for guiding resection of local tissue from a patient includes generating at least one image of the patient, automatically determining a plurality of surgical guidance cues indicating three-dimensional spatial properties associated with the local tissue, and generating a visualization of the surgical guidance cues relative to the surface. A system for generating surgical guidance cues for resection of a local tissue from a patient includes a location module for processing at least one image of the patient to determine three-dimensional spatial properties of the local tissue, and a surgical cue generator for generating the surgical guidance cues based upon the three-dimensional spatial properties. A patient-specific locator form for guiding resection of local tissue from a patient includes a locator form surface matching a surface of the patient, and a plurality of features indicating a plurality of surgical guidance cues, respectively.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: March 19, 2024
    Assignee: THE TRUSTEES OF DARTMOUTH COLLEGE
    Inventors: Venkataramanan Krishnaswamy, Richard J. Barth, Jr., Keith D. Paulsen
  • Patent number: 11915433
    Abstract: An object tracking system according to the present disclosure includes: object position detection means for detecting a position of an object by using a sensor; object tracking parameter storage means for storing a parameter related to an erroneous detection or a non-detection caused by a detection characteristic of the sensor; object tracking means for performing tracking based on the position obtained by the object position detection means and the parameter stored in the object tracking parameter storage means; object tracking result evaluation means for calculating an evaluation index based on a result obtained by the object tracking means; and object tracking parameter updating means for determining the parameter based on the evaluation index calculated by the object tracking result evaluation means and updating the parameter stored in the object tracking parameter storage means.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: February 27, 2024
    Assignee: NEC CORPORATION
    Inventor: Yusuke Konishi
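    Schematic sketch of the feedback loop in US 11915433: track with the currently stored sensor-characteristic parameters, compute an evaluation index from the tracking result, and update the stored parameters from that index. The tracker, evaluation index, and update rule here are placeholders, not the patented formulas.
    ```python
    from dataclasses import dataclass

    @dataclass
    class TrackingParams:
        miss_rate: float = 0.1          # expected non-detection rate of the sensor
        false_alarm_rate: float = 0.05  # expected erroneous-detection rate

    def run_tracker(detections, params):
        # placeholder: associate detected positions into tracks using the parameters
        return [{"track_id": i, "positions": d} for i, d in enumerate(detections)]

    def evaluation_index(tracks):
        # placeholder score, e.g. fraction of tracks with consistent motion
        return 1.0 / (1.0 + len(tracks))

    def update_params(params, score, lr=0.1):
        # nudge the stored noise parameters toward values that improve the index
        params.miss_rate = max(0.0, params.miss_rate - lr * (0.5 - score))
        params.false_alarm_rate = max(0.0, params.false_alarm_rate - lr * (0.5 - score))
        return params

    params = TrackingParams()
    for frame_detections in [[(0, 0)], [(1, 1)], [(2, 2)]]:   # stand-in sensor output per frame
        tracks = run_tracker([frame_detections], params)
        params = update_params(params, evaluation_index(tracks))
    print(params)
    ```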
  • Patent number: 11893764
    Abstract: An angled optical pattern is decoded. To decode an optical pattern imaged at an angle, an area of interest of an image is received. A start line and an end line of the optical pattern are estimated. Corners of the optical pattern are localized. A homography is calculated based on the corners. And a scanline of the optical pattern is rectified based on the homography.
    Type: Grant
    Filed: January 23, 2023
    Date of Patent: February 6, 2024
    Assignee: Scandit AG
    Inventors: Amadeus Oertel, Yeara Kozlov, Simon Wenner
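    Minimal OpenCV sketch of the rectification step in US 11893764: given four localized corners of an angled optical pattern, compute a homography to an axis-aligned rectangle and warp the region so a straight scanline can be decoded. The corner values and output size are illustrative assumptions.
    ```python
    import cv2
    import numpy as np

    image = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in camera frame

    corners = np.float32([[120, 90], [430, 140], [420, 260], [110, 210]])  # localized pattern corners
    width, height = 400, 120                                               # rectified pattern size
    target = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

    H, _ = cv2.findHomography(corners, target)           # homography from the corners
    rectified = cv2.warpPerspective(image, H, (width, height))
    scanline = rectified[height // 2, :]                 # rectified scanline handed to the decoder
    print(scanline.shape)  # (400,)
    ```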
  • Patent number: 11893084
    Abstract: Disclosed herein is an object detection system, including apparatuses and methods for object detection. An implementation may include receiving a first image frame from an ROI detection model that generated a first ROI boundary around a first object detected in the first image frame and subsequently receiving a second image frame. The implementation further includes predicting, using an ROI tracking model, that the first ROI boundary will be present in the second image frame and then detecting whether the first ROI boundary is in fact present in the second image frame. The implementation includes determining that the second image frame should be added to a training dataset for the ROI detection model when detecting that the ROI detection model did not generate the first ROI boundary in the second image frame as predicted and re-training the ROI detection model using the training dataset.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: February 6, 2024
    Assignee: JOHNSON CONTROLS TYCO IP HOLDINGS LLP
    Inventors: Santle Camilus Kulandai Samy, Rajkiran Kumar Gottumukkal, Yohai Falik, Rajiv Ramanasankaran, Prantik Sen, Deepak Chembakassery Rajendran
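    Sketch of the self-supervision loop in US 11893084, with the detector and tracker assumed to be callables that return lists of (x1, y1, x2, y2) boxes: when the tracking model predicts that an ROI should still be present in the next frame but the detection model misses it, that frame is queued for re-training the detector.
    ```python
    def harvest_training_frames(frames, detector, tracker):
        training_set = []
        prev_rois = detector(frames[0])                  # ROI boundaries in the first frame
        for frame in frames[1:]:
            predicted = tracker(prev_rois, frame)        # ROIs the tracker expects to persist
            detected = detector(frame)                   # ROIs the detector actually finds
            missed = [roi for roi in predicted
                      if not any(iou(roi, d) > 0.5 for d in detected)]
            if missed:
                training_set.append(frame)               # detector disagreed with tracker: keep frame
            prev_rois = detected or predicted
        return training_set                              # later used to re-train the detection model

    def iou(a, b):
        # a, b: (x1, y1, x2, y2) axis-aligned boxes
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union else 0.0
    ```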
  • Patent number: 11887302
    Abstract: A method for detecting cytopathic effect (CPE) in a well sample includes generating a well image depicting a well containing cells and a medium (and possibly viruses), and pre-processing the well image at least by partitioning the well image into sub-images each corresponding to a different portion of the well. The method also includes, for each of some or all of the sub-images, determining, by analyzing the sub-image using a convolutional neural network, a respective score indicative of a likelihood that any cells in the portion of the well corresponding to the sub-image exhibit CPE. The method further includes determining a CPE status of the cells contained in the well based on the respective scores for the sub-images, and generating output data indicating the CPE status.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: January 30, 2024
    Assignee: AMGEN INC.
    Inventors: Yu Yuan, Tony Y. Wang, Jordan P. Simons
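    Hedged sketch of the tiling-and-scoring flow in US 11887302: partition a well image into sub-images, score each with a small CNN, and call the well CPE-positive if enough tiles exceed a threshold. The network, tile size, and aggregation rule are illustrative assumptions.
    ```python
    import torch
    import torch.nn as nn

    class TileScorer(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(8, 1)

        def forward(self, tiles):                      # (N, 1, h, w) sub-images
            return torch.sigmoid(self.head(self.features(tiles).flatten(1)))  # per-tile CPE likelihood

    def well_cpe_status(well_image: torch.Tensor, tile: int = 64,
                        tile_thresh: float = 0.5, positive_fraction: float = 0.2) -> bool:
        tiles = well_image.unfold(0, tile, tile).unfold(1, tile, tile)   # partition into sub-images
        tiles = tiles.contiguous().view(-1, 1, tile, tile)
        scores = TileScorer()(tiles).squeeze(1)                          # one score per sub-image
        return bool((scores > tile_thresh).float().mean() > positive_fraction)

    print(well_cpe_status(torch.rand(512, 512)))
    ```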
  • Patent number: 11887405
    Abstract: A system, method, and computer-readable medium for associating a person's gestures with specific features of objects is disclosed. Using one or more image capture devices, a person's gestures and the location of that person in an environment are determined. Using determined distances between the person and objects in the environment and scales associated with features of those objects, the list of specific features in the person's field-of-view may be determined. Further, a facial expression of the person may be scored and that score associated with one or more specific features.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: January 30, 2024
    Assignee: Capital One Services, LLC
    Inventors: Steven Dang, Chih-Hsiang Chow, Elizabeth Furlan
  • Patent number: 11887070
    Abstract: Techniques for providing improved optical character recognition (OCR) for receipts are discussed herein. Some embodiments may provide for a system including one or more servers configured to perform receipt image cleanup, logo identification, and text extraction. The image cleanup may include transforming image data of the receipt by using image parameter values that optimize the logo identification, and performing logo identification using a comparison of the image data with training logos associated with merchants. When a merchant is identified, a second image cleanup may be performed by using image parameter values optimized for text extraction. A receipt structure may be used to categorize the extracted text. Improved OCR accuracy is also achieved by applying format rules of the receipt structure to the extracted text.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: January 30, 2024
    Assignee: GROUPON, INC.
    Inventors: Stephen Clark Mitchell, Pavel Melnichuk
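    Illustrative two-pass flow for US 11887070, with placeholder parameter values and a stubbed logo matcher: clean the receipt image with settings tuned for logo identification, pick a merchant, then re-clean with settings tuned for text extraction before OCR and rule-based field parsing. These are not the actual parameters or models.
    ```python
    import cv2
    import numpy as np

    LOGO_PARAMS = {"blur": 5, "thresh": 200}    # assumed values favoring logo shapes
    TEXT_PARAMS = {"blur": 3, "thresh": 160}    # assumed values favoring small text

    def cleanup(image, params):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, params["blur"])
        _, binary = cv2.threshold(gray, params["thresh"], 255, cv2.THRESH_BINARY)
        return binary

    def identify_merchant(logo_image):
        # placeholder: compare against stored training logos and return the best match
        return "unknown-merchant"

    def process_receipt(image):
        merchant = identify_merchant(cleanup(image, LOGO_PARAMS))   # pass 1: logo identification
        text_image = cleanup(image, TEXT_PARAMS)                    # pass 2: text extraction
        # OCR and receipt-structure format rules would be applied to text_image here
        return merchant, text_image

    merchant, _ = process_receipt(np.full((400, 300, 3), 255, dtype=np.uint8))
    print(merchant)
    ```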
  • Patent number: 11842517
    Abstract: Described is a solution for an unlabeled target domain dataset challenge, using a domain adaptation technique to train a neural network with an iterative 3D model fitting algorithm that generates refined target domain labels. The neural network supports the convergence of the 3D model fitting algorithm and the 3D model fitting algorithm provides refined labels that are used for training of the neural network. During real-time inference, only the trained neural network is required. A convolutional neural network (CNN) is trained using labeled synthetic frames (source domain) with unlabeled real depth frames (target domain). The CNN initializes an offline iterative 3D model fitting algorithm capable of accurately labeling the hand pose in real depth frames. The labeled real depth frames are used to continue training the CNN thereby improving accuracy beyond that achievable by using only unlabeled real depth frames for domain adaptation.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: December 12, 2023
    Assignee: ULTRAHAPTICS IP LTD
    Inventor: Samuel John Llewellyn Lyons
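    High-level sketch of the training loop in US 11842517, with every component stubbed (the cnn object with fit/predict methods and the fit_3d_model callable are assumptions): a CNN pre-trained on labeled synthetic depth frames initializes an offline 3D model-fitting step on real frames, and the refined labels are fed back to keep training the CNN. Only the trained CNN is needed at inference time.
    ```python
    def train_with_domain_adaptation(cnn, synthetic_frames, synthetic_labels,
                                     real_frames, fit_3d_model, epochs=3):
        cnn.fit(synthetic_frames, synthetic_labels)                 # source-domain pre-training
        for _ in range(epochs):
            initial_poses = cnn.predict(real_frames)                # CNN initializes the fitter
            refined_labels = fit_3d_model(real_frames, initial_poses)  # offline iterative 3D fitting
            cnn.fit(real_frames, refined_labels)                    # continue training on refined labels
        return cnn                                                  # real-time inference uses the CNN alone
    ```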
  • Patent number: 11830271
    Abstract: There is a need for a more effective and efficient document processing solution. Accordingly, various embodiments of the present invention introduce various document processing optimization solutions. In one example, a method includes identifying a plurality of input pages each associated with a related input document of a plurality of input documents; for each input page of the plurality of input pages, generating a segmented page; processing each segmented page using a trained encoder model to generate a fixed-dimensional representation of the input page; determining, based at least in part on each fixed-dimensional representation, a plurality of document clusters; determining a plurality of processing groups, where each processing group is associated with one or more related document clusters of the plurality of document clusters; and performing the document processing optimization based at least in part on the plurality of processing groups.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: November 28, 2023
    Assignee: Optum Services (Ireland) Limited
    Inventor: Raja Mukherji
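    Condensed sketch of the grouping flow in US 11830271, with a stand-in encoder: each segmented page is mapped to a fixed-dimensional vector, the vectors are clustered, and clusters are collected into processing groups. The encoder, cluster count, and the "related clusters" rule are placeholders.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def encode_page(segmented_page) -> np.ndarray:
        # placeholder for the trained encoder model producing a fixed-dimensional representation
        rng = np.random.default_rng(abs(hash(segmented_page)) % (2**32))
        return rng.random(64)

    pages = [f"doc{d}-page{p}" for d in range(5) for p in range(4)]   # stand-in segmented pages
    embeddings = np.stack([encode_page(p) for p in pages])

    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

    # collect related clusters into processing groups (here, trivially one group per cluster)
    processing_groups = {c: [p for p, label in zip(pages, clusters) if label == c]
                         for c in sorted(set(clusters))}
    print({c: len(v) for c, v in processing_groups.items()})
    ```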
  • Patent number: 11812014
    Abstract: Virtual boundary processing in adaptive loop filtering (ALF) requires that padded values be substituted for unavailable pixel rows outside the virtual boundaries. Methods and apparatus are provided for virtual boundary processing in ALF that allow the use of more actual pixel values for padding than in the prior art.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: November 7, 2023
    Inventor: Madhukar Budagavi
  • Patent number: 11763578
    Abstract: A head-mounted display, a display control method, and a program that help a user understand proximity between the user and an object around the user are provided. A display block (36) is arranged in front of the eyes of the user wearing a HMD (12). In accordance with proximity between the user and an object around the user, the HMD (12) controls the display block (36) so that the user can visually recognize the area in front of the display block (36).
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: September 19, 2023
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventors: Yuichiro Nakamura, Yasushi Okumura
  • Patent number: 11756228
    Abstract: This disclosure presents systems and methods to facilitate interaction by one or more participants with content presented across multiple distinct physical locations. A current distinct physical location of a participant may be determined. In response to determining the current distinct physical location in which the participant is located, operation of one or more content devices physically present in the current distinct physical location may be effectuated.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: September 12, 2023
    Assignee: Disney Enterprises, Inc.
    Inventor: Elliott Baumbach
  • Patent number: 11756233
    Abstract: A method of processing an image divided into a plurality of pixel blocks which are processed according to a processing sequence is provided, which comprises, for a current pixel block: determining an application area consisting of a set of pixels in blocks preceding the current block in the processing sequence, for each pixel of the application area, computing a gradient representing a directional change of an intensity at the pixel, and selecting, based on at least one of the computed gradients, an intra prediction video coding mode among a plurality of intra prediction video coding modes usable for encoding and/or decoding the current block.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: September 12, 2023
    Assignee: ATEME
    Inventors: Elie Mora, Anthony Nasrallah
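    Simplified sketch of the gradient-guided selection in US 11756233: compute intensity gradients over an application area taken from already processed neighboring blocks, and use the dominant, magnitude-weighted gradient orientation to pick a coarse direction that would steer the choice of intra prediction mode. The orientation buckets are an illustrative assumption, not the codec's actual mode table.
    ```python
    import numpy as np

    def dominant_gradient_bucket(application_area: np.ndarray) -> str:
        gy, gx = np.gradient(application_area.astype(np.float64))   # directional change of intensity
        magnitude = np.hypot(gx, gy)
        angle = np.rad2deg(np.arctan2(gy, gx)) % 180                # gradient orientation in [0, 180)
        dominant = np.average(angle, weights=magnitude + 1e-9)      # magnitude-weighted orientation
        buckets = {0: "near-horizontal gradient", 1: "diagonal gradient",
                   2: "near-vertical gradient", 3: "diagonal gradient"}
        return buckets[int(dominant // 45)]                         # coarse cue for mode selection

    area = np.tile(np.arange(16), (16, 1)).astype(np.uint8)         # strong horizontal intensity ramp
    print(dominant_gradient_bucket(area))
    ```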
  • Patent number: 11743551
    Abstract: A video caption generating method is provided to a computer device. The method includes encoding a target video by using an encoder of a video caption generating model, to obtain a target visual feature of the target video, decoding the target visual feature by using a basic decoder of the video caption generating model, to obtain a first selection probability corresponding to a candidate word, decoding the target visual feature by using an auxiliary decoder of the video caption generating model, to obtain a second selection probability corresponding to the candidate word, a memory structure of the auxiliary decoder including reference visual context information corresponding to the candidate word, determining a decoded word in the candidate word according to the first selection probability and the second selection probability, and generating a video caption according to the decoded word.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: August 29, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Wenjie Pei, Jiyuan Zhang, Lei Ke, Yuwing Tai, Xiaoyong Shen, Jiaya Jia, Xiangrong Wang
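    Toy sketch of the probability-combination step in US 11743551: at each decoding step, the basic decoder and the memory-backed auxiliary decoder each assign a selection probability to every candidate word, and the decoded word is chosen from their combination. The decoders, vocabulary, and mixing weight below are placeholders.
    ```python
    import torch

    def decode_step(visual_feature, basic_decoder, auxiliary_decoder, vocab, alpha=0.7):
        p_basic = torch.softmax(basic_decoder(visual_feature), dim=-1)     # first selection probability
        p_aux = torch.softmax(auxiliary_decoder(visual_feature), dim=-1)   # second selection probability
        combined = alpha * p_basic + (1 - alpha) * p_aux
        return vocab[int(combined.argmax())]                               # decoded word for this step

    vocab = ["a", "dog", "runs", "on", "grass"]
    basic = torch.nn.Linear(128, len(vocab))       # stand-in basic decoder head
    auxiliary = torch.nn.Linear(128, len(vocab))   # stand-in auxiliary decoder head
    print(decode_step(torch.randn(128), basic, auxiliary, vocab))
    ```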
  • Patent number: 11741688
    Abstract: An image processing method is provided to be performed by an electronic device. The method includes: obtaining a target image including a text object, and determining a region proposal in the target image corresponding to the text object; obtaining region proposal feature information of the region proposal, and generating an initial mask according to the region proposal feature information; and restoring the initial mask to a target binary mask, determining a mask connection region in the target binary mask, and determining a text image region associated with the text object in the target image according to the mask connection region.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: August 29, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Pei Xu, Shan Huang
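    Short OpenCV sketch of the mask post-processing described in US 11741688: restore an initial low-resolution mask to a binary mask at image size, find its connected regions, and take each region's bounding box as a text image region. Thresholds and sizes are assumptions.
    ```python
    import cv2
    import numpy as np

    initial_mask = (np.random.rand(28, 28) > 0.7).astype(np.float32)      # stand-in mask from the model
    target_h, target_w = 480, 640

    restored = cv2.resize(initial_mask, (target_w, target_h), interpolation=cv2.INTER_LINEAR)
    binary = (restored > 0.5).astype(np.uint8)                            # target binary mask

    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    text_regions = [tuple(stats[i, :4]) for i in range(1, num)            # (x, y, w, h) per mask region
                    if stats[i, cv2.CC_STAT_AREA] > 50]
    print(len(text_regions), "text regions")
    ```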