Trainable Classifiers Or Pattern Recognizers (e.g., Adaline, Perceptron) Patents (Class 382/159)
  • Patent number: 11049302
    Abstract: Methods, systems, and media are provided for redacting images using augmented reality. An image, such as a photograph or video, may be captured by a camera. A redaction marker, such as a QR code, may be included in the field of view of the camera. Redaction instructions may be interpreted from the redaction marker. The redaction instructions indicate a portion of a real-world environment that is to be redacted, such as an area that is not allowed to be imaged. Based on the redaction instructions, image data corresponding to the image may be redacted by deleting or encrypting a portion of the data associated with the portion of the real-world environment to be redacted. An image may be rendered using the redacted image data. The redacted image may be displayed or stored.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: June 29, 2021
    Assignee: RealWear, Inc.
    Inventor: Andrew Michael Lowery
  • Patent number: 11042598
    Abstract: Methods and systems for capturing, collecting, analyzing, and auditing electronic documents. In an embodiment, there is provided the ability to present an audit function or “click thru” capability with respect to image files, unstructured text, unstructured HTML, and PDF documents.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: June 22, 2021
    Assignee: Refinitiv US Organization LLC
    Inventors: Alvin Ohlenbusch, Parvez Naqvi, Raymond Maxwell, Bou-Kau Yang, Alan Kelly
  • Patent number: 11037021
    Abstract: Embodiments of the present disclosure describe a clustering scheme and system for partitioning a collection of objects, such as documents or images, using graph edges, identification of reliable cluster groups, and replacement of reliable cluster groups with prototypes to reconstruct a graph. The process is iterative and continues until the set of edges is reduced to a predetermined value.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: June 15, 2021
    Assignee: KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventor: Ibrahim Mansour Alabdulmohsin
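    As a rough illustration of the partitioning loop described in patent 11037021 (similarity-graph edges, reliable groups collapsed into prototypes, iteration until few edges remain), the sketch below uses a Euclidean distance threshold for edges, connected components as the "reliable" groups, and mean vectors as prototypes; these concrete choices are assumptions, not details from the patent.
```python
import numpy as np

def cluster_by_prototypes(X, dist_thresh=0.5, max_edges=10, max_iters=20):
    """Iteratively partition objects (rows of X): build a graph whose edges
    connect nearby objects, collapse tightly connected ("reliable") groups
    into mean prototypes, rebuild the graph, and stop once the number of
    edges drops to a small predetermined value."""
    nodes = [(X[i], [i]) for i in range(len(X))]   # (prototype, member indices)
    for _ in range(max_iters):
        V = np.array([v for v, _ in nodes])
        d = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
        edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
                 if d[i, j] <= dist_thresh]
        if len(edges) <= max_edges:
            break
        # Connected components of the thresholded graph = candidate groups.
        parent = list(range(len(nodes)))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i, j in edges:
            parent[find(i)] = find(j)
        groups = {}
        for i in range(len(nodes)):
            groups.setdefault(find(i), []).append(i)
        # Replace each group with a single mean prototype carrying its members.
        nodes = [(np.mean([nodes[i][0] for i in g], axis=0),
                  sum((nodes[i][1] for i in g), []))
                 for g in groups.values()]
    return [members for _, members in nodes]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.05, size=(20, 8)) for c in (0.0, 1.0, 2.0)])
    print([sorted(m) for m in cluster_by_prototypes(X)])
```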
  • Patent number: 11030471
    Abstract: This application provides a text detection method, including: obtaining, by a computer device, an image; inputting the image into a neural network, and outputting a target feature matrix; inputting the target feature matrix into a fully connected layer, the fully connected layer mapping each element of the target feature matrix to a predicted subregion corresponding to the image according to a preset anchor; and obtaining text feature information of the predicted subregions, connecting the predicted subregions into corresponding predicted text lines according to the text feature information by using a text clustering algorithm, and determining a text area corresponding to the image.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: June 8, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Ming Liu
  • Patent number: 11030492
    Abstract: A previously trained classification model associated with the machine learning system is configured to process an input to generate i) a first prediction that represents a characteristic associated with the input, and ii) a representation of accuracy associated with the prediction. A retraining subsystem is configured to receive the input, the first prediction, and the representation of accuracy. The retraining subsystem processes the input to generate a second prediction representing a characteristic. A sufficiency of certainty of the first prediction is determined based on at least the input, the first prediction, the representation of accuracy, and the second prediction. Based at least on the determined sufficiency, the retraining subsystem causes the machine learning system to be automatically retrained, to be retrained using the input with active learning, or to not be retrained.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: June 8, 2021
    Assignee: CLARIFAI, INC.
    Inventors: Matthew Zeiler, Jesse Rappaport, Samuel Dodge, Michael Gormish
  • Patent number: 11030728
    Abstract: An electronic device may be provided with a display. A content generator such as a camera may capture images in high dynamic range mode or standard dynamic range mode. The images may have associated image metadata such as face detection information, camera settings, color and luminance histograms, and image classification information. Control circuitry in the electronic device may determine tone mapping parameters for the captured images based on the image metadata. The tone mapping parameters for a given image may be stored with the image in the metadata file. When it is desired to display the image, the control circuitry may apply a tone mapping process to the image according to the stored tone mapping parameters. The algorithm that is used to determine tone mapping parameters based on image metadata may be based on user preference data gathered from a population of users.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: June 8, 2021
    Assignee: Apple Inc.
    Inventors: Ramin Samadani, Gregory B. Abbas, Teun R. Baar, Nicolas P. Bonnier, Victor M. de Araujo Oliveira, Zuo Xia
  • Patent number: 11030724
    Abstract: Provided is an image restoration apparatus and method. The image restoration apparatus may store an image restoration model including a convolutional layer corresponding to kernels having various dilation gaps, and may restore an output image from a target image obtained by rearranging a compound eye vision (CEV) image by the image restoration model.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: June 8, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yang Ho Cho, Deokyoung Kang, Dong Kyung Nam
  • Patent number: 11024341
    Abstract: A clip of shots is uploaded to a conformance platform. The conformance platform evaluates the clip type and initiates shot boundary evaluation and detection. The identified shot boundaries are then seeded for OCR evaluation, and the burned-in metadata is extracted into categories using a custom OCR module based on the location of the burn-ins within the frame. The extracted metadata is then error-corrected based on OCR evaluation of the neighboring frame and arbitrary frames at pre-computed timecode offsets from the frame boundary. The error-corrected metadata and categories are then packaged into a metadata package and returned to a conform editor. The application then presents the metadata package as an edit decision list with associated pictures and confidence level to the user. The user can further validate and override the edit decision list if necessary and then use it directly to conform to the online content.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: June 1, 2021
    Assignee: COMPANY 3 / METHOD INC.
    Inventors: Marvin Boonmee, Weyron Henriques
  • Patent number: 11023824
    Abstract: Methods, apparatus, and machine-readable mediums are described for selecting a training set from a larger data set. Samples are divided into a training set and a validation set. Each set meets one or more conditions. For each class to be modeled, multiple training sets are created. Models are trained on each of the multiple training sets. A size of samples for each class is determined based upon the trained models. A training data set that includes a number of samples based upon the determined size of samples is created.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventor: Luis Sergio Kida
  • Patent number: 11017266
    Abstract: Image annotation includes: accessing an image and a plurality of annotation data sets for the image, wherein the plurality of annotation data sets are made by a plurality of contributors, and the image has a plurality of original image channels; aggregating the plurality of annotation data sets to obtain an aggregated annotation data set for the image; and outputting the aggregated annotation data set. Aggregating the plurality of annotation data sets to obtain an aggregated annotation data set for the image includes: generating an additional image channel based at least in part on weighted averages of confidence measures of the plurality of contributors; and applying an object detection model to at least a part of the plurality of original image channels and at least a part of the additional image channel to generate the aggregated annotation data set.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: May 25, 2021
    Assignee: Figure Eight Technologies, Inc.
    Inventors: Humayun Irshad, Seyyedeh Qazale Mirsharif, Kiran Vajapey, Monchu Chen, Caiqun Xiao, Robert Munro
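    A minimal sketch of the extra-channel aggregation idea in patent 11017266: an additional image channel is built from a confidence-weighted average of contributor annotation masks and appended to the original channels. The thresholding stand-in for the patent's object detection model, and all variable names, are assumptions.
```python
import numpy as np

def aggregate_annotations(image, contributor_masks, contributor_conf, thresh=0.5):
    """Build an extra image channel from the confidence-weighted average of
    contributor annotation masks, then derive an aggregated annotation.

    image:             H x W x C original channels
    contributor_masks: list of H x W {0,1} masks, one per contributor
    contributor_conf:  list of per-contributor confidence weights
    """
    masks = np.stack(contributor_masks).astype(float)           # K x H x W
    w = np.asarray(contributor_conf, dtype=float)
    w = w / w.sum()
    extra_channel = np.tensordot(w, masks, axes=1)               # H x W in [0, 1]
    # Stub for "apply an object detection model to original + extra channels":
    # here we simply threshold the extra channel to get the aggregated mask.
    augmented = np.concatenate([image, extra_channel[..., None]], axis=-1)
    aggregated_mask = (extra_channel >= thresh).astype(np.uint8)
    return augmented, aggregated_mask

if __name__ == "__main__":
    H, W = 6, 8
    img = np.zeros((H, W, 3))
    m1 = np.zeros((H, W)); m1[1:4, 2:6] = 1      # contributor 1 box
    m2 = np.zeros((H, W)); m2[2:5, 3:7] = 1      # contributor 2 box (shifted)
    _, agg = aggregate_annotations(img, [m1, m2], contributor_conf=[0.9, 0.4])
    print(agg)
```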
  • Patent number: 11017129
    Abstract: Aspects provide for design template selectors, wherein processors are configured to determine a design pattern from a user input comprising a spatial arrangement of different discrete constituent design components, and determine that the design pattern input spatial arrangement of constituent components matches a portion of a selected one of a knowledge base plurality of completed design patterns that each comprise different fixed spatial arrangements of discrete constituent components within a threshold amount of confidence. Thus, aspects present the selected one of the knowledge base design patterns to the user as a suggested template for use in completing the design.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: May 25, 2021
    Assignee: International Business Machines Corporation
    Inventors: Luis Carlos Cruz Huertas, Rick A. Hamilton, II, Ninad Sathaye, Edgar A. Zamora Duran
  • Patent number: 11010664
    Abstract: Systems, methods, devices, and other techniques are disclosed for using an augmented neural network system to generate a sequence of outputs from a sequence of inputs. An augmented neural network system can include a controller neural network, a hierarchical external memory, and a memory access subsystem. The controller neural network receives a neural network input at each of a series of time steps and processes the neural network input to generate a memory key for the time step. The external memory includes a set of memory nodes arranged as a binary tree. To provide an interface between the controller neural network and the external memory, the system includes a memory access subsystem that is configured to, for each of the series of time steps, perform one or more operations to generate a respective output for the time step. The capacity of the neural network system to account for long-range dependencies in input sequences may be extended.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: May 18, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Karol Piotr Kurach, Marcin Andrychowicz
  • Patent number: 11010883
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automated analysis of petrographic thin section images. In one aspect, a method includes determining a first image of a petrographic thin section of a rock sample, and determining a feature vector for each pixel of the first image. Multiple different regions of the petrographic thin section are determined by clustering the pixels of the first image based on the feature vectors, wherein one of the regions corresponds to grains in the petrographic thin section. The method further includes determining a second image of the petrographic thin section, including combining images of the petrographic thin section acquired with plane-polarized light and cross-polarized light. Multiple grains are segmented from the second image of the petrographic thin section based on the multiple different regions from the first image, and characteristics of the segmented grains are determined.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: May 18, 2021
    Assignee: Saudi Arabian Oil Company
    Inventors: Fatai A. Anifowose, Mokhles Mustapha Mezghani
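    The sketch below loosely follows the pipeline in patent 11010883: per-pixel feature vectors, clustering of pixels into regions, combining plane- and cross-polarized images, and segmenting grains as connected components. The specific features, the use of k-means, the max-blend of the two acquisitions, and the "brightest cluster = grains" heuristic are illustrative assumptions.
```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def segment_grains(ppl, xpl, n_regions=3):
    """Cluster pixels of a plane-polarized-light (PPL) image into regions
    using per-pixel feature vectors, then segment grains from a combined
    PPL/XPL image restricted to the region assumed to contain grains."""
    # Per-pixel features: intensity and a local standard deviation (texture).
    local_std = ndimage.generic_filter(ppl, np.std, size=3)
    feats = np.stack([ppl.ravel(), local_std.ravel()], axis=1)
    labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0) \
        .fit_predict(feats).reshape(ppl.shape)
    # Heuristic: the brightest cluster (highest mean PPL intensity) = grains.
    grain_region = max(range(n_regions), key=lambda k: ppl[labels == k].mean())
    # Second image: combine PPL and XPL acquisitions (here: pixel-wise max).
    combined = np.maximum(ppl, xpl)
    grain_mask = (labels == grain_region) & (combined > combined.mean())
    # Segment individual grains as connected components of the mask.
    grains, n_grains = ndimage.label(grain_mask)
    sizes = ndimage.sum(grain_mask, grains, index=range(1, n_grains + 1))
    return grains, sizes        # labeled grains and a simple size characteristic

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ppl = rng.random((64, 64)) * 0.2
    ppl[10:30, 10:30] += 0.7    # a bright "grain"
    ppl[40:55, 35:60] += 0.7    # another grain
    xpl = ppl * 0.8
    grains, sizes = segment_grains(ppl, xpl)
    print("grains found:", len(sizes), "sizes:", sizes)
```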
  • Patent number: 11000107
    Abstract: The present disclosure provides systems and methods for virtual facial makeup simulation through virtual makeup removal and virtual makeup add-ons, virtual end effects and simulated textures. In one aspect, the present disclosure provides a method for virtually removing facial makeup, the method comprising providing a facial image of a user with makeup applied thereto, locating facial landmarks from the facial image of the user in one or more regions, decomposing some regions into first channels which are fed to histogram matching to obtain a first image without makeup in that region, transferring other regions into color channels which are fed into histogram matching under different lighting conditions to obtain a second image without makeup in that region, and combining the images to form a resultant image with makeup removed in the facial regions.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: May 11, 2021
    Assignee: Shiseido Company, Limited
    Inventors: Yun Fu, Bin Sun, Haiyi Mao
  • Patent number: 11003327
    Abstract: Embodiments are provided for displaying an image capturing mode and a content viewing mode. In some embodiments, one or more live images may be received from an image capturing component on a mobile device. A user interface may display the live images on a touch-sensing display interface of the mobile device. A first gesture may also be detected with the touch-sensing display interface. In response to detecting the first gesture, at least a portion of a collection of content items may be displayed within a first region of the user interface, and/or the one or more live images may be displayed within a second region of the user interface.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: May 11, 2021
    Assignee: DROPBOX, INC.
    Inventors: Stephen Poletto, Yi Wei, Joshua Puckett
  • Patent number: 11003961
    Abstract: Provided are an image processing system, an image processing method, and a program for suitably detecting a mobile object. The image processing system includes: an image input unit for receiving, for one or more arbitrarily selected pixels in an image frame at the time of processing, an input indicating whether each selected pixel is a pixel on which the mobile object appears or a pixel on which the mobile object does not appear, for some image frames having different times among a plurality of image frames constituting a picture; and a mobile object detection model constructing unit for learning a parameter for detecting the mobile object based on the input.
    Type: Grant
    Filed: June 2, 2015
    Date of Patent: May 11, 2021
    Assignee: NEC CORPORATION
    Inventor: Hiroyoshi Miyano
  • Patent number: 10997466
    Abstract: An image segmentation system, the system comprising: a training subsystem configured to train a segmentation machine learning model using annotated training data comprising images associated with respective segmentation annotations, so as to generate a trained segmentation machine learning model; a model evaluator; and a segmentation subsystem configured to perform segmentation of a structure or material in an image using the trained segmentation machine learning model. The model evaluator is configured to evaluate the segmentation machine learning model by (i) controlling the segmentation subsystem to segment at least one evaluation image associated with an existing segmentation annotation using the segmentation machine learning model and thereby generate a segmentation of the annotated evaluation image, and (ii) forming a comparison of the segmentation of the annotated evaluation image and the existing segmentation annotation.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: May 4, 2021
    Assignee: STRAXCIRO PTY. LTD.
    Inventor: Yu Peng
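    Patent 10997466 evaluates a segmentation model by comparing its output on an annotated evaluation image against the existing annotation; the abstract does not name a metric, so the sketch below assumes IoU and Dice as the comparison.
```python
import numpy as np

def evaluate_segmentation(predicted_mask, annotation_mask):
    """Compare a model-generated segmentation against an existing
    annotation and return simple agreement metrics."""
    pred = predicted_mask.astype(bool)
    ref = annotation_mask.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + ref.sum()) if (pred.sum() + ref.sum()) else 1.0
    return {"iou": float(iou), "dice": float(dice)}

if __name__ == "__main__":
    ref = np.zeros((8, 8), dtype=int); ref[2:6, 2:6] = 1    # existing annotation
    pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1  # model output
    print(evaluate_segmentation(pred, ref))   # partial overlap -> IoU < 1
```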
  • Patent number: 10997407
    Abstract: Example implementations relate to detecting document objects. For example, detecting document objects may include a system comprising a pre-processing engine to establish a threshold for a document, wherein a structure of the document is unknown, a detection engine to detect a candidate area in the document, using a Hough transform and connected component analysis to merge detected candidate areas in the Hough transform to a same document object in the document, and a classification module to classify the candidate area as a document object or not a document object.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: May 4, 2021
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Perry Lea, Eli Saber, Osborn De Lima, David C. Day, Peter Bauer, Mark Shaw, Roger S. Twede, Shruty Janakiraman
  • Patent number: 10997405
    Abstract: A method, apparatus and computer program product are provided for classifying pages of a document with a linear regression model, and a deep learning (non-linear) model utilizing a neural network. The classification of each page is determined by determining which of the linearly predicted category or the non-linearly predicted category to use, such as for transmission to an auditor. Pages of medical records generated by concatenating reports from distinct sources are classified according to both models, and embodiments determine which classification should be used. The classifications may be optionally smoothed for continuity. The classifications may be sent to auditors and used to review and audit the medical records.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: May 4, 2021
    Assignee: CHANGE HEALTHCARE HOLDINGS LLC
    Inventors: Adrian Lam, Alex Londeree, Thomas Corcoran, Adam Sullivan, Nick Giannasi
  • Patent number: 10990853
    Abstract: An information processing method and an information processing apparatus are disclosed, where the information processing method includes: inputting a plurality of samples to a classifier respectively, to extract a feature vector representing a feature of each sample; and updating parameters of the classifier by minimizing a loss function for the plurality of samples, wherein the loss function is in positive correlation with an intra-class distance for representing a distance between feature vectors of samples belonging to a same class, and is in negative correlation with an inter-class distance for representing a distance between feature vectors of samples belonging to different classes, wherein the intra-class distance of each sample of the plurality of samples is less than a first threshold, the inter-class distance between two different classes is greater than a second threshold, and the second threshold is greater than twice the first threshold.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: April 27, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Mengjiao Wang, Rujie Liu
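    A hinge-style surrogate consistent with the loss described in patent 10990853 (positive correlation with intra-class distances, negative correlation with inter-class distances, and thresholds with t2 > 2·t1). The exact functional form is not given in the abstract, so this is an assumption.
```python
import numpy as np

def intra_inter_loss(features, labels, t1=1.0, t2=2.5):
    """Hinge-style loss that grows with intra-class distances above t1 and
    with inter-class distances below t2, with t2 > 2 * t1 as in the abstract."""
    assert t2 > 2 * t1
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(features), k=1)                 # unique pairs
    intra = d[iu][same[iu]]                                  # same-class distances
    inter = d[iu][~same[iu]]                                 # cross-class distances
    # Positive correlation with intra-class distance, negative with inter-class.
    return np.maximum(intra - t1, 0.0).mean() + np.maximum(t2 - inter, 0.0).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(5, 0.1, (10, 4))])
    labels = np.array([0] * 10 + [1] * 10)
    print(intra_inter_loss(feats, labels))   # near zero for well-separated classes
```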
  • Patent number: 10990800
    Abstract: The present disclosure relates to a display device and a display method thereof, an electronic picture frame and a computer readable storage medium. The display device includes: a processor configured to acquire an environmental image of the environment where the display device is located, identify a category of the environmental image, and determine one or more pictures matching the category from a picture library; and a display configured to display at least one of the determined pictures.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: April 27, 2021
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Xiaohong Wang
  • Patent number: 10991101
    Abstract: The example embodiments are directed to a refinement process for generating an accurate image segmentation map. A refinement network may enhance an initially generated segmentation map using a model that is trained using synthetic images. In one example, the method may include storing an image of content which includes a plurality of categories of data, receiving an initial segmentation map of the image, the initial segmentation map comprising pixel probability values with respect to the plurality of categories, executing a refinement predictive model on the initial segmentation map and the image to generate a refined segmentation map, wherein the predictive model is trained using synthetic images of the plurality of categories of data, and generating a segmented image based on the refined segmentation map.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: April 27, 2021
    Assignee: General Electric Company
    Inventors: Rafi Shmuel Brada, Ron Wein, Gregory Wilson, Alberto Santamaria-Pang, Leonid Gugel
  • Patent number: 10984280
    Abstract: The present invention relates to a system and method for determining the presence of objects in an image. The techniques used process pixel data within the image a relatively small number of pixel rows at a time. The angle and magnitude data from the pixels within an image are redistributed into a plurality of histogram-of-magnitude bins associated with groupings of pixels. Once enough pixel groupings equivalent to the height of a Block of pixels have been accumulated, partial Support Vector Machine (SVM) calculations are performed on that Block of pixels. This is repeated until there are sufficient partial results equivalent to the height of the feature window, and then a full SVM calculation is performed. This process may then be used to scan across the whole image to determine the presence of objects within it.
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: April 20, 2021
    Inventors: Ivan Griffin, David O'Reilly, John J. Guiry
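    A heavily simplified sketch of the streaming idea in patent 10984280: orientation/magnitude histograms are accumulated a few pixel rows at a time, a partial SVM dot product is computed per completed Block of rows, and the partial results are summed once a full feature window is available. Non-overlapping cells, a single window spanning the image, and the feature layout are all simplifying assumptions.
```python
import numpy as np

def hog_cell_histograms(rows, n_bins=9, cell=8):
    """Orientation histograms (magnitude-weighted) for one strip of `cell`
    pixel rows, one histogram per cell-wide column group."""
    gy, gx = np.gradient(rows.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    n_cells = rows.shape[1] // cell
    hists = np.zeros((n_cells, n_bins))
    for c in range(n_cells):
        b = bins[:, c * cell:(c + 1) * cell].ravel()
        m = mag[:, c * cell:(c + 1) * cell].ravel()
        np.add.at(hists[c], b, m)
    return hists                                   # (n_cells, n_bins)

def streaming_svm_score(image, svm_w, svm_b, cell=8, block_cells=2, n_bins=9):
    """Scan the image a few rows at a time: accumulate cell histograms, run a
    partial SVM dot product per completed block of rows, and emit the full
    window score once enough partial results exist (window = whole image here)."""
    partials, pending = [], []
    for r0 in range(0, image.shape[0] - cell + 1, cell):
        pending.append(hog_cell_histograms(image[r0:r0 + cell], n_bins, cell))
        if len(pending) == block_cells:            # one block of rows is complete
            feat = np.concatenate([h.ravel() for h in pending])
            w_slice = svm_w[len(partials) * feat.size:(len(partials) + 1) * feat.size]
            partials.append(feat @ w_slice)        # partial SVM calculation
            pending = []
    return sum(partials) + svm_b                   # full SVM score for the window

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    n_feat = (64 // 8) * (64 // 8) * 9             # all cells x bins
    w, b = rng.normal(size=n_feat), -0.1
    print("window score:", streaming_svm_score(img, w, b))
```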
  • Patent number: 10984030
    Abstract: A computer-implemented method, a cognitive intelligence system and computer program product adapt a relational database containing multiple data types. Non-text tokens in the relational database are converted to a textual form. Text is produced based on relations of tokens in the relational database. A set of pre-trained word vectors for the text is retrieved from an external database. The set of pre-trained word vectors is initialized for tokens common to both the relational database and the external database. The set of pre-trained vectors is used to create a cognitive intelligence query expressed as a structured query language (SQL) query. Content of the relational database is used for training while initializing the set of pre-trained word vectors for tokens common to both the relational database and the external database. The set of pre-trained word vectors may be immutable, or mutable with updates controlled via parameters.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: April 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rajesh Bordawekar, Oded Shmueli
  • Patent number: 10977106
    Abstract: Examples for detecting anomalies in a dataset are provided herein. A decision tree is trained using the data set and partitions of the data set produced by the trained decision tree are identified. Further, subsets of data based at least on the partitions of the data set are identified and z-scores are computed for the subsets of data. Based at least on the subsets of data, a subset of data with a highest z-score is identified as an anomalous subset of data, and the anomalous subset of data is provided for display.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: April 13, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anna S. Bertiger, Alexander V. Moore, Adam E. Shirey
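    A small sketch of the approach in patent 10977106: a decision tree partitions the data set, z-scores are computed for the resulting subsets, and the subset with the highest z-score is reported as anomalous. The choice of regressor, the target column, and the z-score formula here are assumptions.
```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def most_anomalous_subset(X, y, max_leaf_nodes=8):
    """Train a decision tree on the data set, treat its leaves as partitions,
    compute a z-score for each leaf's mean target value against the overall
    distribution, and return the leaf (subset) with the highest |z-score|."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0).fit(X, y)
    leaves = tree.apply(X)                       # partition id for every row
    mu, sigma = y.mean(), y.std()
    best = None
    for leaf in np.unique(leaves):
        idx = np.where(leaves == leaf)[0]
        z = (y[idx].mean() - mu) / (sigma / np.sqrt(len(idx)) + 1e-12)
        if best is None or abs(z) > abs(best[1]):
            best = (idx, z)
    return best                                  # (row indices, z-score)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((500, 3))
    y = rng.normal(0, 1, 500)
    y[X[:, 0] > 0.9] += 5.0                      # an anomalous slice of the data
    idx, z = most_anomalous_subset(X, y)
    print(f"anomalous subset size={len(idx)}, z={z:.1f}")
```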
  • Patent number: 10977521
    Abstract: The present invention relates to the field of pedestrian detection, and particularly to a multi-scale aware pedestrian detection method based on an improved fully convolutional network. Firstly, a deformable convolution layer is introduced in the fully convolutional network structure to expand the receptive field of the feature map. Secondly, a cascade region proposal network is used to extract multi-scale pedestrian proposals, a discriminant strategy is introduced, and a multi-scale discriminant layer is defined to distinguish pedestrian proposal categories. Finally, a multi-scale aware network is constructed, and a soft non-maximum suppression algorithm is used to fuse the classification scores and regression offsets output by each sensing network to generate the final pedestrian detection regions.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: April 13, 2021
    Assignee: JIANGNAN UNIVERSITY
    Inventors: Li Peng, Hui Liu, Jiwei Wen, Linbai Xie
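    The abstract of patent 10977521 names a soft non-maximum suppression step for fusing detections; the sketch below shows standard Gaussian soft-NMS over scored boxes, which is a generic stand-in rather than the patent's exact fusion.
```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft non-maximum suppression: instead of discarding boxes that
    overlap a higher-scoring detection, decay their scores by exp(-IoU^2/sigma)."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    idx = list(range(len(boxes)))
    while idx:
        i = max(idx, key=lambda k: scores[k])         # current best detection
        keep.append((boxes[i], scores[i]))
        idx.remove(i)
        for j in idx:
            # Intersection-over-union between box i and box j.
            x1, y1 = np.maximum(boxes[i][:2], boxes[j][:2])
            x2, y2 = np.minimum(boxes[i][2:], boxes[j][2:])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_i = (boxes[i][2] - boxes[i][0]) * (boxes[i][3] - boxes[i][1])
            area_j = (boxes[j][2] - boxes[j][0]) * (boxes[j][3] - boxes[j][1])
            iou = inter / (area_i + area_j - inter)
            scores[j] *= np.exp(-(iou ** 2) / sigma)  # soft decay, not removal
        idx = [j for j in idx if scores[j] >= score_thresh]
    return keep

if __name__ == "__main__":
    boxes = np.array([[10, 10, 50, 80], [12, 12, 52, 82], [100, 30, 140, 100]])
    scores = np.array([0.9, 0.8, 0.7])
    for b, s in soft_nms(boxes, scores):
        print(b, round(float(s), 3))
```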
  • Patent number: 10970953
    Abstract: A novel method and apparatus for face authentication is disclosed. The disclosed method comprises detecting a motion by a subject within a predetermined area of view, assigning a unique session identification number to the subject detected within the predetermined area of view, detecting a facial area of the subject detected within the predetermined area of view, generating an image of the facial area of the subject, assessing a quality of the image of the facial area of the subject, conducting an incremental training of the image of the facial area of the subject, determining an identity of the subject based on the image of the facial area of the subject, identifying an intent of the subject, and authorizing access to a point of entry based on the determined identity of the subject and based on the intent of the subject.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: April 6, 2021
    Inventor: Luv Tulsidas
  • Patent number: 10970554
    Abstract: Methods and systems are provided for automatically producing highlight videos from one or more video streams of a playing field. The video streams are captured from at least one camera and calibrated, and raw inputs are obtained from audio, the calibrated videos, and the actual event time. Features are then extracted from the calibrated raw inputs, segments are created, specific events are identified, highlights are determined, and the highlights are outputted for consumption in diverse types of packages. Types of packages may be based on user preference. The calibrated video streams may be received and processed in real time or periodically.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: April 6, 2021
    Assignee: PIXELLOT LTD.
    Inventors: Gal Oz, Yoav Liberman, Avi Shauli
  • Patent number: 10963758
    Abstract: The present disclosure provides an improved approach to implement structure learning of neural networks by exploiting correlations in the data/problem the networks aim to solve. A greedy approach is described that finds bottlenecks of information gain from the bottom convolutional layers all the way to the fully connected layers. Rather than simply making the architecture deeper, additional computation and capacity is only added where it is required.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 30, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Andrew Rabinovich, Vijay Badrinarayanan, Daniel Detone, Srivignesh Rajendran, Douglas Bertram Lee, Tomasz Malisiewicz
  • Patent number: 10961825
    Abstract: A system comprising a drilling rig having a rig floor, a derrick, a master control computer system, and at least one camera, the at least one camera capturing a master image of at least a portion of the rig floor and sending the master image to the master control computer system, and the master control computer system mapping said master image into a model to facilitate control of items on said drilling rig.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: March 30, 2021
    Assignee: NATIONAL OILWELL VARCO NORWAY AS
    Inventors: Hugo Leonardo Rosano, Stig Vidar Trydal, Erik Haavind, Frode Jensen
  • Patent number: 10963717
    Abstract: A computer implemented method and system for correcting errors produced by Optical Character Recognition (OCR) of text contained in an image encoded document. An error model representing the frequency and type of errors produced by the Optical Character Recognition engine is generated. An OCR character string generated by OCR is retrieved. A user-defined pattern of a plurality of character strings is retrieved, where each character string represents a possible correct representation of characters in the OCR character string. The OCR character string is compared to each of these character strings, and a ‘likelihood score’ is calculated based on the information from the error model. The character string with the highest ‘likelihood score’ is presumed to be the corrected version of the OCR character string.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: March 30, 2021
    Assignee: Automation Anywhere, Inc.
    Inventors: Thomas Corcoran, Vibhas Gejji, Stephen Van Lare
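    A toy version of the correction scheme in patent 10963717: an error model of OCR confusion probabilities scores each candidate string from a user-defined pattern, and the highest-likelihood candidate is returned. The probability values and the substitution-only assumption are illustrative, not from the patent.
```python
import math

# Error model: P(observed OCR character | true character), learned offline from
# the frequency and type of errors the OCR engine makes (values are made up).
ERROR_MODEL = {
    ("0", "O"): 0.15, ("O", "0"): 0.12, ("1", "l"): 0.10,
    ("l", "1"): 0.10, ("5", "S"): 0.08, ("S", "5"): 0.08,
}
P_CORRECT = 0.95          # assumed probability a character is read correctly
P_OTHER = 0.001           # floor probability for any other substitution

def likelihood(ocr_string, candidate):
    """Log-likelihood that `candidate` is the true text behind `ocr_string`."""
    if len(candidate) != len(ocr_string):
        return float("-inf")                      # sketch: substitutions only
    score = 0.0
    for obs, true in zip(ocr_string, candidate):
        p = P_CORRECT if obs == true else ERROR_MODEL.get((true, obs), P_OTHER)
        score += math.log(p)
    return score

def correct(ocr_string, candidates):
    """Pick the candidate (from a user-defined pattern expansion) with the
    highest likelihood score as the corrected OCR output."""
    return max(candidates, key=lambda c: likelihood(ocr_string, c))

if __name__ == "__main__":
    # Candidate strings enumerate possible correct values for an invoice field.
    candidates = ["INV-1005", "INV-1006", "INV-2005"]
    print(correct("INV-1O05", candidates))        # 'O' misread for '0' -> INV-1005
```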
  • Patent number: 10963754
    Abstract: Techniques for training an embedding using a limited training set are described. In some examples, the embedding is trained by generating a plurality of vectors from a random sample of the limited set of training data classes using a layer of the particular machine learning classification model, randomly selecting samples from the plurality of vectors into a set of samples, computing at least one distance for each sampled class from a center parameter for the class using the set of samples, generating a discrete probability distribution over the classes for a query point based on distances to a center parameter for each of the classes in the embedding space, calculating a loss value for the modified prototypical network, the calculation of the loss value being for a fixed geometry of the embedding space and including a measure of the difference between distributions, and back propagating the loss value.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: March 30, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Avinash Aghoram Ravichandran, Paulo Ricardo dos Santos Mendonca, Rahul Bhotika, Stefano Soatto
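    The core distance-to-center distribution from patent 10963754's modified prototypical network, sketched with fixed Euclidean geometry and without the random-sampling or back-propagation steps; the class centers, softmax over negative distances, and cross-entropy loss follow the abstract, while everything else is simplified.
```python
import numpy as np

def prototypical_loss(support, support_labels, query, query_label):
    """Compute class centers ("prototypes") from support embeddings, turn the
    query's distances to every center into a discrete probability distribution
    with a softmax over negative distances, and return the cross-entropy loss."""
    classes = np.unique(support_labels)
    centers = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(centers - query, axis=1)          # distance to each center
    logits = -d
    logits -= logits.max()                               # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()        # distribution over classes
    target = int(np.where(classes == query_label)[0][0])
    return -float(np.log(probs[target])), dict(zip(classes.tolist(), probs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support = np.vstack([rng.normal(0, 0.2, (5, 16)), rng.normal(3, 0.2, (5, 16))])
    labels = np.array([0] * 5 + [1] * 5)
    query = rng.normal(0, 0.2, 16)                       # drawn from class 0
    loss, probs = prototypical_loss(support, labels, query, query_label=0)
    print(f"loss={loss:.3f}", probs)
```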
  • Patent number: 10964077
    Abstract: An apparatus for clustering a point cloud can include: a three-dimensional (3D) light detection and ranging (LiDAR) sensor configured to generate a point cloud around a vehicle and a controller configured to project the point cloud generated by the 3D LiDAR sensor onto a circular grid map to be converted into two-dimensional (2D) points, the circular grid map including a plurality of cells, and to cluster the 2D points on the circular grid map.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: March 30, 2021
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventor: Jae Kwang Kim
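    A sketch of the projection-and-cluster flow in patent 10964077: 3D points are mapped to cells of a circular (range × azimuth) grid, and occupied cells that touch are grouped. The grid resolution, neighborhood rule, and flood-fill clustering are assumptions.
```python
import numpy as np

def cluster_on_circular_grid(points, n_rings=16, n_sectors=36, max_range=50.0):
    """Project 3D LiDAR points onto a circular (range x azimuth) grid map,
    mark occupied cells, and cluster occupied cells that touch each other."""
    x, y = points[:, 0], points[:, 1]
    rng_ = np.hypot(x, y)
    ring = np.minimum((rng_ / max_range * n_rings).astype(int), n_rings - 1)
    sector = ((np.arctan2(y, x) + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    occupied = set(zip(ring.tolist(), sector.tolist()))
    # Flood-fill over neighbouring occupied cells (azimuth wraps around).
    cluster_of, clusters = {}, 0
    for cell in occupied:
        if cell in cluster_of:
            continue
        clusters += 1
        stack = [cell]
        while stack:
            r, s = stack.pop()
            if (r, s) in cluster_of:
                continue
            cluster_of[(r, s)] = clusters
            for dr in (-1, 0, 1):
                for ds in (-1, 0, 1):
                    nb = (r + dr, (s + ds) % n_sectors)
                    if nb in occupied and nb not in cluster_of:
                        stack.append(nb)
    return np.array([cluster_of[(r, s)] for r, s in zip(ring, sector)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obj1 = rng.normal([10, 0, 0], 0.3, (50, 3))     # one object ahead
    obj2 = rng.normal([0, 20, 0], 0.3, (50, 3))     # another to the left
    labels = cluster_on_circular_grid(np.vstack([obj1, obj2]))
    print("clusters:", sorted(set(labels.tolist())))
```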
  • Patent number: 10956181
    Abstract: Techniques for auto-executing instructions provided in a video on a computing platform are provided. A script is developed from audio provided in the video. Text shown in frames of the video is extracted. Simulated user interaction (UI) events present in the video are identified. A timeline representation is generated to include entries for elements of the script and the extracted text, and identified UI events. Like elements are collected into common entries. Each entry in the script that lacks an associated UI event but is likely to involve a user action prompt is identified. Each entry having an associated identified UI event, and each entry identified as likely to involve a user action prompt, is converted into a corresponding user action command representation. Each user action command representation is mapped to a computing platform executable command, each being performed using processing resources of the computing platform, automatically, without user intervention.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: March 23, 2021
    Assignee: Software AG
    Inventor: Abhinandan Ganapati Banne
  • Patent number: 10956832
    Abstract: A method is provided to produce a training data set for training an inference engine to predict events in a data center, comprising: producing probe vectors corresponding to components of a data center, each probe vector including a sequence of data elements, one of the probe vectors indicating an occurrence of an event at a component and a time of occurrence of the event; and producing at a master device a set of training snapshots, wherein each training snapshot includes a subsequence of data elements that corresponds to a time increment that matches, or occurred not later than, the indicated time of occurrence of the event.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: March 23, 2021
    Assignee: Platina Systems Corporation
    Inventors: Frank Szu-Jen Yang, Ramanagopal V. Vogety, Sharad Mehrotra
  • Patent number: 10949071
    Abstract: Various systems and methods are provided that display various geographic maps and depth graphs in an interactive user interface in substantially real-time in response to input from a user in order to determine information related to measured data points, depth levels, and geological layers and provide the determined information to the user in the interactive user interface. For example, a computing device may be configured to retrieve data from one or more databases and generate one or more interactive user interfaces. The one or more interactive user interfaces may display the retrieved data in a geographic map, a heat map, a cross-plot graph, or one or more depth graphs. The user interface may be interactive in that a user may manipulate any of the graphs to identify trends or current or future issues.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: March 16, 2021
    Assignee: Palantir Technologies Inc.
    Inventors: Matthew Julius Wilson, Tom Alexander, Daniel Cervelli, Trevor Fountain, Quentin Spencer-Harper, Daniel Horbatt, Guillem Palou Visa, Dylan Scott, Trevor Sontag, Kevin Verdieck, Alexander Ryan, Brian Lee, Charles Shepherd, Emily Nguyen
  • Patent number: 10936868
    Abstract: A computer-implemented method and system are disclosed for classifying an input data set within a data category using multiple data recognition tools. The method includes identifying at least a first attribute and a second attribute of the data category; classifying the at least first attribute via at least a first data recognition tool and the at least second attribute via at least a second data recognition tool, the classifying including: allocating a confidence factor for each of the at least first and second attributes that indicates a presence of each of the at least first and second attributes in the input data set; and combining outputs of the classifying into a single output confidence score by using a weighted fusion of the allocated confidence factors.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: March 2, 2021
    Assignee: BOOZ ALLEN HAMILTON INC.
    Inventors: Nathaniel Jackson Short, Jonathan M. Levitt
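    One plausible reading of the "weighted fusion of the allocated confidence factors" in patent 10936868 is a normalized weighted average, sketched below with illustrative attribute weights.
```python
import numpy as np

def fused_confidence(confidences, weights):
    """Combine per-attribute confidence factors from different recognition
    tools into a single output confidence score via a weighted fusion."""
    c = np.asarray(confidences, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * c).sum() / w.sum())           # normalized weighted average

if __name__ == "__main__":
    # e.g. attribute 1 scored by a face matcher, attribute 2 by a text recognizer
    print(fused_confidence(confidences=[0.92, 0.60], weights=[0.7, 0.3]))  # 0.824
```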
  • Patent number: 10936910
    Abstract: Described herein are embodiments for joint adversarial training methods that incorporate both spatial transformation-based and pixel-value based attacks for improving image model robustness. Embodiments of a spatial transformation-based attack with an explicit notion of budgets are disclosed and embodiments of a practical methodology for efficient spatial attack generation are also disclosed. Furthermore, both pixel and spatial attacks are integrated into embodiments of a generation model and the complementary strengths of each other are leveraged for improving the overall model robustness. Extensive experimental results on several benchmark datasets compared with state-of-the-art methods verified the effectiveness of the presented method.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: March 2, 2021
    Assignee: Baidu USA LLC
    Inventors: Haichao Zhang, Jianyu Wang
  • Patent number: 10929677
    Abstract: A system for detecting synthetic videos may include a server, a plurality of weak classifiers, and a strong classifier. The server may be configured to receive a prediction result from each of a plurality of weak classifiers; and send the prediction results from each of the plurality of weak classifiers to a strong classifier. The weak classifiers may be trained on real videos and known synthetic videos to analyze a distinct characteristic of a video file; detect irregularities of the distinct characteristic; generate a prediction result associated with the distinct characteristic, the prediction result being a prediction on whether the video file is synthetic; and output the prediction result to the server. The strong classifier may be trained to receive the prediction results of the plurality of weak classifiers from the server; analyze the prediction results; and determine if the video file is synthetic based on the prediction results.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: February 23, 2021
    Assignee: ZeroFOX, Inc.
    Inventors: Michael Morgan Price, Matthew Alan Price
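    A compact stand-in for the ensemble in patent 10929677: several weak classifiers, each trained on one distinct characteristic of a video, feed their prediction results to a strong classifier that makes the final synthetic/real call. The synthetic features and logistic-regression choices below are placeholders, not the patent's models.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_characteristic(n, synthetic):
    """Toy per-video feature for one distinct characteristic (e.g. blink rate,
    codec noise): real and synthetic videos have slightly different statistics."""
    shift = 0.8 if synthetic else 0.0
    return rng.normal(shift, 1.0, size=(n, 1))

# Training set: real videos (label 0) and known synthetic videos (label 1).
y_train = np.array([0] * 200 + [1] * 200)
characteristics = 3
weak_inputs = [np.vstack([make_characteristic(200, False),
                          make_characteristic(200, True)])
               for _ in range(characteristics)]

# Each weak classifier analyzes one distinct characteristic of the video file.
weak = [LogisticRegression().fit(X, y_train) for X in weak_inputs]

# The strong classifier is trained on the weak classifiers' prediction results.
weak_preds = np.column_stack([clf.predict_proba(X)[:, 1]
                              for clf, X in zip(weak, weak_inputs)])
strong = LogisticRegression().fit(weak_preds, y_train)

# Inference on one new video: collect weak predictions, then the strong verdict.
new_video = [make_characteristic(1, True) for _ in range(characteristics)]
new_preds = np.array([[clf.predict_proba(x)[0, 1] for clf, x in zip(weak, new_video)]])
print("P(synthetic) =", round(float(strong.predict_proba(new_preds)[0, 1]), 3))
```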
  • Patent number: 10929775
    Abstract: A system for self-learning archival of electronic data may be provided. A binary classifier may identify a text segment of an electronic dataset in response to text of the electronic dataset being associated with indicators of a word model. A first multiclass classifier may generate a first classification set comprising respective statistical metrics for the datafield that each predefined identifier in a group of predefined identifiers is representative of the datafield. A second multiclass classifier may receive a context of the electronic dataset and generate a second classification set. A combination classifier may apply weight values to the first classification set and the second classification set and form a weighted classification set and select a predefined identifier as being representative of the datafield based on the weighted classification set. The processor may store, in a memory, a data record comprising an association between the predefined identifier and the datafield.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: February 23, 2021
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Abhilash Alexander Miranda, Laura O'Malley, Pedro L. Sacristan, Urvesh Bhowan, Medb Corcoran
  • Patent number: 10929655
    Abstract: A method implemented by a computing device, the method comprising determining, by the computing device, a plurality of attributes respectively describing a region of interest corresponding to a body part of a person portrayed in the image, determining, by the computing device, a respective score for each of the plurality of attributes based on training data that comprises a plurality of pre-defined scores for each of the plurality of attributes, and computing, by the computing device, an aggregated score based on the respective scores of the plurality of attributes, the aggregated score representing an aesthetic value of the image.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: February 23, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Hongyu Sun, Wei Su, Xiaoxing Zhu, Fan Zhang
  • Patent number: 10930032
    Abstract: Methods, systems, and computer program products for generating concept images of human poses using machine learning models are provided herein. A computer-implemented method includes identifying one or more events from input data by applying a machine learning recognition model to the input data, wherein the identifying comprises (i) detecting multiple entities from the input data and (ii) determining one or more behavioral relationships among the multiple entities in the input data; generating, using a machine learning interpretability model and the identified events, one or more images illustrating one or more human poses related to the identified events; outputting the one or more generated images to at least one user; and updating the machine learning recognition model based at least in part on (i) the one or more generated images and (ii) input from the at least one user.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Samarth Bharadwaj, Saneem Chemmengath, Suranjana Samanta, Karthik Sankaranarayanan
  • Patent number: 10922582
    Abstract: Different aspects of the invention enable localizing planar repetitive patterns in a time- and resource-efficient manner by a method and device which compute a homography between the model of the planar object and the query image, even in cases of high repeatability, and use multiple views of the same object in order to deal with descriptor variability when the orientation of the object changes.
    Type: Grant
    Filed: May 30, 2016
    Date of Patent: February 16, 2021
    Assignee: THE GRAFFTER S.L.
    Inventors: Luis Baumela Molina, José Miguel Buenaposada Biencinto, Roberto Valle Fernández, Miguel Angel Orellana Sanz, Jorge Remirez Miguel
  • Patent number: 10922573
    Abstract: Described herein are software and systems for analyzing videos and/or images. Software and systems described herein are configured in different embodiments to carry out different types of analyses. For example, in some embodiments, software and systems described herein are configured to locate an object of interest within a video and/or image.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: February 16, 2021
    Assignee: FUTURE HEALTH WORKS LTD.
    Inventors: Quoc Huy Phan, Thomas Harte
  • Patent number: 10922788
    Abstract: A method for performing continual learning on a classifier, in a client, capable of classifying images by using a continual learning server is provided. The method includes steps of: a continual learning server (a) inputting first hard images from a first classifier of a client into an Adversarial Autoencoder, to allow an encoder to output latent vectors from the first hard images, allow a decoder to output reconstructed images from the latent vectors, and allow a discriminator and a second classifier to output attribute and classification information to determine second hard images to be stored in a first training data set, and generating augmented images to be stored in a second training data set by adjusting the latent vectors of the reconstructed images determined not as the second hard images; (b) continual learning a third classifier corresponding to the first classifier; and (c) transmitting updated parameters to the client.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: February 16, 2021
    Assignee: Stradvision, Inc.
    Inventors: Dongkyu Yu, Hongmo Je, Bongnam Kang, Wooju Ryu
  • Patent number: 10922804
    Abstract: The present disclosure provides a method and apparatus for evaluating image definition, a computer device and a storage medium, wherein the method comprises: obtaining an image to be processed; inputting the image to be processed to a pre-trained evaluation model; obtaining a comprehensive image definition score outputted by the evaluation model, the comprehensive image definition score being obtained by the evaluation model by obtaining N image definition scores based on N different scales respectively, and then integrating the N image definition scores, N being a positive integer greater than one. The solution of the present invention can be applied to improve the accuracy of the evaluation result.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: February 16, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Xiang Zhao, Xin Li, Xiao Liu, Xubin Li, Hao Sun, Shilei Wen, Errui Ding
  • Patent number: 10922803
    Abstract: Methods are provided for determining whether an image captured by a capturing device is clean or dirty. These methods include: receiving the image captured by the image capturing device; splitting the received image into a plurality of image portions according to predefined splitting criteria; performing, for each of at least some of the image portions, a Laplacian filter of the image portion to produce a feature vector including Laplacian filter features of the image portion; providing the feature vectors to a first machine learning module that has been trained to produce a clean/dirty indicator depending on corresponding feature vector, the clean/dirty indicator including a probabilistic value of cleanness or dirtiness of corresponding image portion; and determining whether the image is clean or dirty depending on the clean/dirty indicators produced by the first machine learning module. Systems and computer programs suitable for performing such methods are also provided.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: February 16, 2021
    Assignee: FICOSA ADAS, S.L.U.
    Inventors: Daniel Herchenbach, Anas Mhana, Sebastian Carreno, Robin Lehmann
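    A sketch following the steps in patent 10922803: the image is split into portions, a Laplacian filter produces a feature vector per portion, a (stubbed) machine learning module turns each vector into a clean/dirty probability, and the image-level decision depends on those indicators. The grid size, feature contents, stand-in scorer, and decision rule are assumptions.
```python
import numpy as np
from scipy import ndimage

def portion_feature_vectors(image, grid=(4, 4)):
    """Split the image into portions and compute a Laplacian-filter feature
    vector (mean absolute response, variance, maximum) for each portion."""
    lap = ndimage.laplace(image.astype(float))
    h, w = image.shape
    ph, pw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = lap[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            feats.append([np.abs(patch).mean(), patch.var(), np.abs(patch).max()])
    return np.array(feats)                      # one feature vector per portion

def is_dirty(image, portion_scorer, dirty_fraction=0.25):
    """Score each portion with a trained clean/dirty model (stubbed here) and
    call the whole image dirty if enough portions look dirty."""
    probs = np.array([portion_scorer(f) for f in portion_feature_vectors(image)])
    return (probs > 0.5).mean() >= dirty_fraction

if __name__ == "__main__":
    # Stand-in for the first machine learning module: low Laplacian energy
    # (blurred by dirt on the lens) maps to a high dirtiness probability.
    scorer = lambda f: 1.0 / (1.0 + 20.0 * f[0])
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))               # lots of texture -> clean
    blurred = ndimage.uniform_filter(sharp, 15)  # dirt-like smearing -> dirty
    print("sharp dirty?", is_dirty(sharp, scorer))
    print("blurred dirty?", is_dirty(blurred, scorer))
```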
  • Patent number: 10916039
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 9, 2021
    Assignee: Intellective Ai, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Patent number: 10913455
    Abstract: The disclosure relates to a method for operating a driver assistance system of a motor vehicle. The method includes detecting a first data set of sensor data measured by a sensor device of the driver assistance system. The first data set of sensor data is missing class allocation information, wherein the class allocation information relates to the objects represented by the sensor data. The method also includes pre-training a classification algorithm of the driver assistance system while taking into consideration the first data set in order to improve the object differentiation of the classification algorithm. The method further includes generating a second data set of simulated sensor data which includes at least one respective piece of class allocation information according to a specific specification.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: February 9, 2021
    Assignee: Audi AG
    Inventors: Erich Bruns, Christian Jarvers
  • Patent number: 10915734
    Abstract: An image captured using a camera on a device (e.g., a mobile device) may be operated on by one or more processes to determine properties of a user's face in the image. A first process may determine one or more first properties of the user's face in the image. A second process operating downstream from the first process may determine at least one second property of the user's face in the image. The second process may use at least one of the first properties from the first process to determine the second property.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: February 9, 2021
    Assignee: Apple Inc.
    Inventors: Atulit Kumar, Joerg A. Liebelt, Onur C. Hamsici, Feng Tang