Patents Examined by Stephen M. Brinich
  • Patent number: 11528525
    Abstract: This disclosure is directed to a system and method that automatically detects repeated content within multiple media items. Content providers often include content, such as an introduction, near the beginning of a media item. In some circumstances, such as in the case of a series of television episodes, the content providers use the same content in each episode of the series. By dividing the media items into portions and analyzing the portions, the systems and methods described can automatically detect the repeated content. Using the detection of the repeated content, a user interface can then allow a user to bypass the repeated content during playback.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: December 13, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Hooman Mahyar, Ryan Barlow Dall, Moussa El Chater
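    A minimal sketch of the portion-matching idea in the abstract above, assuming each media item has already been reduced to a list of per-second fingerprint byte strings; the fingerprint inputs, portion length, and hashing scheme are illustrative assumptions, not the patented implementation.
    ```python
    import hashlib

    def portion_hashes(fingerprints, portion_len=5):
        """Group per-second fingerprints into fixed-length portions and hash each portion."""
        return [
            hashlib.sha1(b"".join(fingerprints[i:i + portion_len])).hexdigest()
            for i in range(0, len(fingerprints) - portion_len + 1, portion_len)
        ]

    def repeated_intro_range(fp_episode_a, fp_episode_b, portion_len=5):
        """Return a (start_sec, end_sec) range of portions repeated near the start of both episodes."""
        hashes_a = portion_hashes(fp_episode_a, portion_len)
        hashes_b = set(portion_hashes(fp_episode_b, portion_len))
        start = end = None
        for idx, h in enumerate(hashes_a):
            if h in hashes_b:
                if start is None:
                    start = idx
                end = idx
            elif start is not None:
                break  # the contiguous repeated block has ended
        if start is None:
            return None
        return (start * portion_len, (end + 1) * portion_len)

    # A playback UI could offer a "skip" control covering the detected range:
    # rng = repeated_intro_range(fp_ep1, fp_ep2); if rng: show_skip_button(*rng)
    ```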
  • Patent number: 11527236
    Abstract: Systems and methods of script identification in audio data are provided. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made if the script text occurred in the audio data.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: December 13, 2022
    Assignee: Verint Systems Ltd.
    Inventors: Jeffrey Michael Iannone, Ron Wein, Omer Ziv
  • Patent number: 11521095
    Abstract: Disclosed are methods, apparatuses and systems for CNN network adaption and object online tracking. The CNN network adaption method comprises: transforming a first feature map into a plurality of sub-feature maps, wherein the first feature map is generated by the pre-trained CNN according to a frame of the target video; convolving each of the sub-feature maps with one of a plurality of adaptive convolution kernels, respectively, to output a plurality of second feature maps with improved adaptability; and training, frame by frame, the adaptive convolution kernels.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: December 6, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Xiaogang Wang, Lijun Wang, Wanli Ouyang, Huchuan Lu
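    A rough sketch of the feature-map transformation described in the abstract above, assuming the first feature map is a (channels, H, W) array that is split channel-wise into sub-feature maps, each convolved with its own small adaptive kernel; the splitting rule, kernel shapes, and per-channel summation are assumptions for illustration only.
    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def split_into_sub_feature_maps(feature_map, num_subs):
        """Split a (C, H, W) first feature map channel-wise into num_subs sub-feature maps."""
        return np.array_split(feature_map, num_subs, axis=0)

    def adapt_sub_feature_maps(sub_maps, adaptive_kernels):
        """Convolve each sub-feature map with its own adaptive 2D kernel (one kernel per sub-map)."""
        second_maps = []
        for sub_map, kernel in zip(sub_maps, adaptive_kernels):
            # Sum the per-channel convolutions to produce one adapted (second) feature map.
            adapted = sum(convolve2d(channel, kernel, mode="same") for channel in sub_map)
            second_maps.append(adapted)
        return second_maps

    # Toy usage with random data; in the patent the kernels would be trained frame by frame.
    feature_map = np.random.rand(8, 32, 32)             # first feature map from a pre-trained CNN
    kernels = [np.random.rand(3, 3) for _ in range(4)]  # adaptive convolution kernels
    second_feature_maps = adapt_sub_feature_maps(split_into_sub_feature_maps(feature_map, 4), kernels)
    ```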
  • Patent number: 11508079
    Abstract: Input images are partitioned into non-overlapping segments perpendicular to a disparity dimension of the input images. Each segment includes a contiguous region of pixels spanning from a first edge to a second edge of the image, with the two edges parallel to the disparity dimension. In some aspects, contiguous input image segments are assigned in a "round robin" manner to a set of sub-images. Each pair of input images generates a corresponding pair of sub-image sets. Semi-global matching (SGM) processes are then performed on pairs of corresponding sub-images generated from each input image. The SGM processes may be run in parallel, reducing an elapsed time to generate respective disparity sub-maps. The disparity sub-maps are then combined to provide a single disparity map of equivalent size to the original two input images.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: November 22, 2022
    Assignee: Intel Corporation
    Inventors: Wei-Yu Tsai, Amit Aneja, Maciej Adam Kaminski, Dhawal Srivastava, Jayaram Puttaswamy, Mithali Shivkumar
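    A compact illustration of the "round robin" partitioning and recombination described in the abstract above, simplified so that each segment is a single pixel row perpendicular to a horizontal disparity dimension; the `run_sgm` callable stands in for any semi-global matching routine and is an assumption.
    ```python
    import numpy as np

    def split_round_robin(image, num_subs):
        """Assign row segments of the image to sub-images in round-robin order."""
        return [image[i::num_subs] for i in range(num_subs)]

    def merge_disparity_submaps(submaps, full_shape, num_subs):
        """Interleave per-sub-image disparity maps back into a full-size disparity map."""
        full = np.empty(full_shape, dtype=submaps[0].dtype)
        for i, sub in enumerate(submaps):
            full[i::num_subs] = sub
        return full

    def parallel_disparity(left, right, run_sgm, num_subs=4):
        """Run SGM on corresponding sub-image pairs (parallelizable) and merge the results."""
        left_subs = split_round_robin(left, num_subs)
        right_subs = split_round_robin(right, num_subs)
        submaps = [run_sgm(l, r) for l, r in zip(left_subs, right_subs)]  # candidates for parallel execution
        return merge_disparity_submaps(submaps, left.shape[:2], num_subs)
    ```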
  • Patent number: 11461931
    Abstract: Provided are systems and methods to perform colour extraction from swatch images and to define new images using extracted colours. Source images may be classified using a deep learning net (e.g. a CNN) to indicate colour representation strength and drive colour extraction. A clustering classifier is trained to use feature vectors extracted by the net. Pixel clustering is used when extracting the colour, with the cluster count varying according to the classification; alternatively, heuristics (with or without classification) may drive the extraction. Resultant clusters are evaluated against a set of (ordered) expected colours to determine a match. Instances of standardized swatch images may be defined from a template swatch image and respective extracted colours using image processing. The extracted colour may be presented in an augmented reality GUI, such as a virtual try-on application, and applied to a user image such as a selfie using image processing.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: October 4, 2022
    Assignee: L'Oreal
    Inventors: Eric Elmoznino, Parham Aarabi, Yuze Zhang
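    A minimal sketch of the pixel-clustering step described in the abstract above, assuming an RGB swatch image held as a numpy array; the cluster count, distance metric, and the `expected_colours` list are illustrative assumptions rather than the patented pipeline.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def extract_dominant_colour(swatch_rgb, n_clusters=3):
        """Cluster swatch pixels and return the centre of the largest cluster as the extracted colour."""
        pixels = swatch_rgb.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        largest = np.argmax(np.bincount(km.labels_))
        return km.cluster_centers_[largest]

    def match_expected_colour(extracted, expected_colours):
        """Evaluate the extracted colour against an ordered set of expected colours; return the closest index."""
        distances = [np.linalg.norm(extracted - np.asarray(c)) for c in expected_colours]
        return int(np.argmin(distances))

    # Usage (illustrative): idx = match_expected_colour(extract_dominant_colour(img), [(200, 30, 60), (120, 80, 40)])
    ```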
  • Patent number: 11449703
    Abstract: An electronic apparatus includes a controller circuit configured to: detect first input information, the first input information being information input by a user through an input device; create a first image based on the first input information and display the first image on a display device; obtain the first input information and the first image; detect, from a storage device, second input information that is the same as the first input information; determine a similarity degree between the first image created based on the first input information and a second image associated with the second input information detected from the storage device; and, where the similarity degree between the first image and the second image is smaller than a threshold, output a result indicating that the similarity degree is smaller than the threshold.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: September 20, 2022
    Assignee: KYOCERA DOCUMENT SOLUTIONS INC.
    Inventors: Tatsuya Hanayama, Daijiro Kitamoto, Kentaro Okamoto
  • Patent number: 11430430
    Abstract: Systems and methods of script identification in audio data are provided. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made if the script text occurred in the audio data.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: August 30, 2022
    Assignee: Verint Systems Inc.
    Inventors: Jeffrey Michael Iannone, Ron Wein, Omer Ziv
  • Patent number: 11430241
    Abstract: In an entry field extraction device, a learning unit obtains a learning model by learning, from images of a plurality of documents, features corresponding to types of the documents. A feature field extraction unit extracts, from an image of a document sample, a feature field, being a field indicating a feature corresponding to a type of the document sample, using the learning model. An entry field extraction unit extracts an entry field, being a field of an entry column, from the field that remains in the image of the document sample after the feature field extracted by the feature field extraction unit has been excluded.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: August 30, 2022
    Assignee: Mitsubishi Electric Corporation
    Inventors: Mitsuhiro Matsumoto, Eri Kataoka, Yosuke Watanabe, Shunsuke Yamamoto, Mikihito Kanno, Takamichi Koide
  • Patent number: 11398057
    Abstract: The present disclosure relates to an imaging system and a detection method. The detection method includes the following steps. Receiving, by a processing unit of the imaging system, multiple recognition label data sets transmitted from multiple terminal devices. Determining a matching degree value between the recognition label data sets and image data, and obtaining multiple weight values from a storage unit corresponding to the terminal devices. Setting the weight values and the corresponding matching degree values as multiple label points, and classifying the label points into multiple cluster groups by a clustering algorithm. Calculating a centroid of the largest cluster group. The centroid of the largest cluster group corresponds to a clustering weight value and a clustering matching value. When the clustering weight value or the clustering matching value meets an adjustment condition, adjusting a neural network unit according to the largest cluster group.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: July 26, 2022
    Assignee: INSTITUTE FOR INFORMATION INDUSTRY
    Inventor: Jun-Dong Chang
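    A simplified sketch of the clustering step in the abstract above: weight values and matching degree values form 2-D label points, a clustering algorithm (k-means here, as an assumption) groups them, and the centroid of the largest cluster yields the clustering weight and clustering matching values.
    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def largest_cluster_centroid(weights, match_degrees, n_clusters=3):
        """Cluster (weight, matching degree) label points and return the centroid of the largest cluster."""
        points = np.column_stack([weights, match_degrees])
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
        largest = np.argmax(np.bincount(km.labels_))
        clustering_weight, clustering_match = km.cluster_centers_[largest]
        return clustering_weight, clustering_match

    # If the returned values meet an adjustment condition, the neural network unit
    # would be adjusted (e.g. retrained) according to the members of the largest cluster.
    ```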
  • Patent number: 11379665
    Abstract: Systems and methods for generation and use of document analysis architectures are disclosed. A model builder component may be utilized to receive user input data for labeling a set of documents as in class or out of class. That user input data may be utilized to train one or more classification models, which may then be utilized to predict classification of other documents. Trained models may be incorporated into a model taxonomy for searching and use by other users for document analysis purposes.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: July 5, 2022
    Assignee: AON RISK SERVICES, INC. OF MARYLAND
    Inventors: William Michael Edmund, John E. Bradley, III
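    A short sketch of the model-builder workflow in the abstract above, assuming plain-text documents and a binary in-class / out-of-class label; the TF-IDF plus logistic regression pipeline is an assumed stand-in for whatever classification models the system actually trains.
    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def build_classification_model(labeled_docs):
        """Train a binary in-class / out-of-class model from user-labeled documents (text, bool) pairs."""
        texts = [text for text, _ in labeled_docs]
        labels = [1 if in_class else 0 for _, in_class in labeled_docs]
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(texts, labels)
        return model

    def predict_in_class(model, documents):
        """Predict classification of other documents with the trained model."""
        return model.predict(documents)

    # The trained model would then be registered under a node of the model taxonomy so
    # other users can search for and reuse it for document analysis.
    ```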
  • Patent number: 11379967
    Abstract: Methods and systems for improved detection and classification of defects of interest (DOI) are realized based on values of one or more automatically generated attributes derived from images of a candidate defect. Automatically generated attributes are determined by iteratively training, reducing, and retraining a deep learning model. The deep learning model relates optical images of candidate defects to a known classification of those defects. After model reduction, attributes of the reduced model are identified which strongly relate the optical images of candidate defects to the known classification of the defects. The reduced model is subsequently employed to generate values of the identified attributes associated with images of candidate defects having unknown classification. In another aspect, a statistical classifier is employed to classify defects based on automatically generated attributes and attributes identified manually.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: July 5, 2022
    Assignee: KLA Corporation
    Inventors: Jacob George, Saravanan Paramasivam, Martin Plihal, Niveditha Lakshmi Narasimhan, Sairam Ravu, Prasanti Uppaluri
  • Patent number: 11373636
    Abstract: The present invention extends to methods, systems, and computer program products for expanding semantic classes via user feedback. Aspects of the invention learn how a set of labels can be expanded from user-generated tags. Text labels applied by human reviewers to digital content can be inspected and compared to one another. When a threshold of human-generated text tags contain similar terminology, the set of labels can be expanded to define a representation of the similar terminology. Similar terminology can include terms that originate from the same base term, are synonyms, are more specific terms related to a general term category, etc. Similar terminology can be consolidated into a defining term that is used to generate a new (more granular) label or a new top level label. Accordingly, new semantic classes can be discovered from user-generated feedback. New semantic classes can provide a more granular representation of content item classification.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: June 28, 2022
    Assignee: Discord Inc.
    Inventors: Michele Banko, Alok Puranik, Taylor Rhyne
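    A toy sketch of the label-expansion idea described in the abstract above: human-generated text tags are normalized to a base term, and when a base term crosses a count threshold and is not already covered, it is promoted to a candidate new (more granular) label. The normalization rule and threshold are illustrative assumptions in place of real stemming and synonym resolution.
    ```python
    from collections import Counter

    def normalize_tag(tag):
        """Crude normalization standing in for stemming / synonym resolution."""
        return tag.strip().lower().rstrip("s")

    def propose_new_labels(user_tags, existing_labels, threshold=50):
        """Promote frequently used, similar user tags into new candidate semantic classes."""
        counts = Counter(normalize_tag(t) for t in user_tags)
        existing = {normalize_tag(l) for l in existing_labels}
        return [term for term, n in counts.items() if n >= threshold and term not in existing]

    # e.g. propose_new_labels(["memes", "meme", "Meme"], ["spam", "harassment"], threshold=3) -> ["meme"]
    ```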
  • Patent number: 11373055
    Abstract: The present disclosure provides a bidirectional attention-based image-text cross-modal retrieval method, applicable to cross-modal retrieval between natural images and electronic text. The present disclosure extracts initial image and text features by using a neural network, and builds a bidirectional attention module to reconstruct the initial image and text features extracted by the neural network, the reconstructed features containing richer semantic information. By using the bidirectional attention module, the present disclosure improves the conventional feature extraction process, obtaining higher-order features with richer image and text semantics, thereby realizing image-text cross-modal retrieval.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: June 28, 2022
    Assignee: XIDIAN UNIVERSITY
    Inventors: Jing Liu, Yujia Shi
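    A minimal numpy sketch of a bidirectional attention module of the kind described above: image region features attend over text token features and vice versa, and each side is reconstructed as an attention-weighted combination of the other. The feature dimensions and the residual fusion rule are assumptions, not the patented architecture.
    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def bidirectional_attention(image_feats, text_feats):
        """Reconstruct image and text features by attending across the other modality.

        image_feats: (num_regions, d) initial image region features
        text_feats:  (num_tokens,  d) initial text token features
        """
        scores = image_feats @ text_feats.T                   # (regions, tokens) similarity
        img_from_txt = softmax(scores, axis=1) @ text_feats   # image features reconstructed from text
        txt_from_img = softmax(scores.T, axis=1) @ image_feats  # text features reconstructed from image
        # Fuse initial and reconstructed features (simple residual sum, as an assumption).
        return image_feats + img_from_txt, text_feats + txt_from_img

    # rec_img, rec_txt = bidirectional_attention(np.random.rand(36, 256), np.random.rand(12, 256))
    ```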
  • Patent number: 11373424
    Abstract: Systems and methods for generation and use of document analysis architectures are disclosed. A model builder component may be utilized to receive user input data for labeling a set of documents as in class or out of class. That user input data may be utilized to train one or more classification models, which may then be utilized to predict classification of other documents. Trained models may be incorporated into a model taxonomy for searching and use by other users for document analysis purposes.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: June 28, 2022
    Assignee: AON RISK SERVICES, INC. OF MARYLAND
    Inventors: Samuel Cameron Fleming, David Craig Andrews, Jared Dirk Sol, Scott Buzan, Timothy Seegan, Christopher Ali Mirabzadeh
  • Patent number: 11367305
    Abstract: A facial recognition process operating on a device may include one or more processes that determine if a camera and/or components associated with the camera are obstructed by an object (e.g., a user's hand or fingers). Obstruction of the device may be assessed using flood infrared illumination images when a user's face is not able to be detected by a face detection process operating on the device. Obstruction of the device may also be assessed using a pattern detection process that operates after the user's face is detected by the face detection process. When obstruction of the device is detected, the device may provide a notification to the user that the device (e.g., the camera and/or an illuminator) is obstructed and that the obstruction should be removed for the facial recognition process to operate correctly.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: June 21, 2022
    Assignee: Apple Inc.
    Inventors: Touraj Tajbakhsh, Jonathan Pokrass, Feng Tang
  • Patent number: 11367308
    Abstract: There are provided a comparison device and a comparison method including determining whether or not a comparison target person is the subject of a registered face image by comparing an imaged face image to the registered face image, determining whether or not a blocking object is present in the imaged face image, determining whether or not removal of the blocking object is required by calculating a partial similarity score between the imaged face image and the registered face image in a partial area corresponding to the blocking object, and prompting removal of the blocking object based on the partial similarity score.
    Type: Grant
    Filed: May 29, 2017
    Date of Patent: June 21, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yosuke Nozue, Kazuki Maeno, Hiroaki Yoshio
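    A simplified sketch of the partial-similarity idea described above: the comparison is restricted to the partial area corresponding to the detected blocking object, and a low score triggers a prompt to remove it. The grayscale-image inputs, mask format, cosine similarity, and threshold are all assumptions for illustration.
    ```python
    import numpy as np

    def partial_similarity(imaged_face, registered_face, blocking_mask):
        """Cosine similarity between the two aligned face images, restricted to the blocked area."""
        a = imaged_face[blocking_mask].astype(float).ravel()
        b = registered_face[blocking_mask].astype(float).ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def should_request_removal(imaged_face, registered_face, blocking_mask, threshold=0.8):
        """Prompt removal of the blocking object when the partial similarity is too low to decide."""
        return partial_similarity(imaged_face, registered_face, blocking_mask) < threshold
    ```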
  • Patent number: 11341957
    Abstract: A method for detecting a keyword, applied to a terminal, includes: extracting a speech eigenvector of a speech signal; obtaining, according to the speech eigenvector, a posterior probability of each target character being a key character in any keyword in an acquisition time period of the speech signal; obtaining confidences of at least two target character combinations according to the posterior probability of each target character; and determining that the speech signal includes the keyword upon determining that all the confidences of the at least two target character combinations meet a preset condition. The target character is a character in the speech signal whose pronunciation matches a pronunciation of the key character. Each target character combination includes at least one target character, and a confidence of a target character combination represents a probability of the target character combination being the keyword or a part of the keyword.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: May 24, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yi Gao, Meng Yu, Dan Su, Jie Chen, Min Luo
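    A small sketch of the confidence computation described in the abstract above, assuming per-frame posterior probabilities are already available for each target character over the acquisition time period; the combination rule (geometric mean of per-character maxima) and threshold are illustrative assumptions.
    ```python
    import numpy as np

    def character_confidence(posteriors_over_time):
        """Best posterior of a target character matching its key character within the time window."""
        return float(np.max(posteriors_over_time))

    def combination_confidence(char_posterior_sequences):
        """Confidence of a target character combination: geometric mean of its characters' confidences."""
        per_char = [character_confidence(p) for p in char_posterior_sequences]
        return float(np.exp(np.mean(np.log(np.clip(per_char, 1e-9, 1.0)))))

    def keyword_detected(combinations, threshold=0.5):
        """The keyword is declared present only if every target character combination clears the threshold."""
        return all(combination_confidence(c) >= threshold for c in combinations)
    ```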
  • Patent number: 11341743
    Abstract: A computer implemented method and apparatus for a marine vessel data system, the method comprising: receiving data from at least one sensor configured to measure vibration and operationally arranged to the marine vessel to provide time-domain reference sensor data; maintaining the time-domain reference sensor data within a data storage system; generating a Fast Fourier Transform (FFT) on the time-domain reference sensor data to provide a plurality of reference spectra files in frequency-domain, wherein each reference spectra file comprises spectra data defined by amplitude information and frequency information, and each spectra file is associated with condition information determined based on collection of the time-domain reference sensor data; normalizing each reference spectra file by converting the frequency information to order information using the condition information to provide normalized reference spectra files; and training a convolutional autoencoder type of neural network using the normalized reference spectra files.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: May 24, 2022
    Assignee: Wärtsilä Finland Oy
    Inventor: Athanasios Siganos
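    A condensed sketch of the preprocessing pipeline in the abstract above: FFT the time-domain vibration signal, then convert the frequency axis to an order axis using condition information (here assumed to be shaft speed in RPM). The resampling onto a fixed order grid for autoencoder training is an assumption.
    ```python
    import numpy as np

    def reference_spectrum(time_signal, sample_rate_hz):
        """FFT a time-domain vibration signal into amplitude and frequency information."""
        amplitudes = np.abs(np.fft.rfft(time_signal)) / len(time_signal)
        frequencies = np.fft.rfftfreq(len(time_signal), d=1.0 / sample_rate_hz)
        return frequencies, amplitudes

    def normalize_to_orders(frequencies, amplitudes, shaft_rpm, order_grid):
        """Convert frequency to order (multiples of shaft rotation frequency) and resample to a fixed grid."""
        shaft_hz = shaft_rpm / 60.0
        orders = frequencies / shaft_hz
        return np.interp(order_grid, orders, amplitudes)

    # Each normalized spectrum becomes one training example for a convolutional
    # autoencoder that models the healthy-condition baseline.
    order_grid = np.linspace(0.0, 50.0, 2048)
    # normalized = normalize_to_orders(*reference_spectrum(signal, 10_000), shaft_rpm=600, order_grid=order_grid)
    ```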
  • Patent number: 11341632
    Abstract: A method is provided for obtaining at least one feature of interest, especially a biomarker, from an input image acquired by a medical imaging device. The at least one feature of interest is the output of a respective node of a machine learning network, in particular a deep learning network. The machine learning network processes at least part of the input image as input data. The machine learning network used is trained by machine learning using at least one constraint on the output of at least one inner node of the machine learning network during the machine learning.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: May 24, 2022
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Alexander Muehlberg, Rainer Kaergel, Alexander Katzmann, Michael Suehling
  • Patent number: 11341759
    Abstract: A device may receive a target document. The device may segment the target document into multiple segments. The device may determine, for each segment of the multiple segments, a set of color parameters for a corresponding set of pixels included in that segment. The device may determine, for each segment of the multiple segments, an average color parameter for that segment based on the set of color parameters for the corresponding set of pixels included in that segment. The device may generate a target color profile for the target document based on determining the average color parameter for each segment. The device may compare the target color profile and a model color profile associated with classifying the target document. The device may classify the target document based on comparing the target color profile and the model color profile.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: May 24, 2022
    Assignee: Capital One Services, LLC
    Inventor: Thomas Sickert
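    A minimal sketch of the colour-profile classification in the abstract above, assuming the target document is an RGB array split into a grid of non-overlapping segments; the grid size, distance metric, and threshold are illustrative assumptions.
    ```python
    import numpy as np

    def color_profile(document_rgb, grid=(4, 4)):
        """Average colour per non-overlapping segment, flattened into the document's colour profile."""
        rows, cols = grid
        h, w, _ = document_rgb.shape
        profile = []
        for r in range(rows):
            for c in range(cols):
                segment = document_rgb[r * h // rows:(r + 1) * h // rows,
                                       c * w // cols:(c + 1) * w // cols]
                profile.append(segment.reshape(-1, 3).mean(axis=0))  # average colour parameter
        return np.concatenate(profile)

    def classify(document_rgb, model_profile, threshold=30.0):
        """Compare the target colour profile to the model colour profile for the document class."""
        distance = np.linalg.norm(color_profile(document_rgb) - model_profile)
        return distance <= threshold

    # model_profile would be built from known examples of the document class being detected.
    ```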