Patents Examined by Daniel G. Mariam
-
Patent number: 11657525
Abstract: An image processing component is trained to process 2D images of human body parts in order to extract depth information about the body parts captured therein. Image processing parameters are learned during the training from a training set of captured 3D training images, each being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters so as to compute an image processing output for comparison with the corresponding depth data of that 3D image, and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
Type: Grant
Filed: November 30, 2020
Date of Patent: May 23, 2023
Assignee: Yoti Holding Limited
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
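The parameter-adaptation loop the abstract describes can be sketched as follows, with a toy one-parameter "image processing component" (depth ≈ scale × intensity) standing in for the patent's unspecified model; the function name and learning-rate choice are illustrative only.

```python
# Minimal sketch of the training scheme: compare the image processing
# output against the depth data of each 3D training image, and adapt the
# parameter to minimize the squared mismatch.
def train_depth_extractor(training_set, lr=0.01, epochs=200):
    """training_set: list of (image_2d, depth_map) pairs, each a flat list."""
    scale = 0.0  # the learned image processing parameter
    for _ in range(epochs):
        grad, n = 0.0, 0
        for image, depth in training_set:
            for x, d in zip(image, depth):
                pred = scale * x            # image processing output
                grad += 2 * (pred - d) * x  # d/dscale of squared error
                n += 1
        scale -= lr * grad / n              # adapt parameter to match depth
    return scale

# Synthetic 3D training images: depth here is exactly 0.5 * intensity.
data = [([1.0, 2.0, 3.0], [0.5, 1.0, 1.5]), ([2.0, 4.0], [1.0, 2.0])]
scale = train_depth_extractor(data)
print(round(scale, 2))  # converges toward 0.5
```

A real system would replace the scalar parameter with a deep network trained by the same compare-and-adapt principle.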
-
Patent number: 11657630
Abstract: The techniques described herein relate to methods, apparatus, and computer-readable media configured to test a pose of a three-dimensional model. A three-dimensional model comprising a set of probes is stored. Three-dimensional data of an object is received, the three-dimensional data comprising a set of data entries. The three-dimensional data is converted into a set of fields by generating a first field comprising a first set of values, each indicative of a first characteristic of an associated one or more data entries from the set of data entries, and generating a second field comprising a second set of values, each indicative of a second characteristic of an associated one or more data entries, wherein the second characteristic is different from the first characteristic.
Type: Grant
Filed: December 28, 2020
Date of Patent: May 23, 2023
Assignee: Cognex Corporation
Inventors: Andrew Hoelscher, Nathaniel Bogan
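The field-generation step can be illustrated with a toy example: 3D data entries are binned into a 2D grid and two fields with different characteristics are produced per cell. The particular characteristics (point count and mean height) are assumptions for illustration; the abstract leaves them unspecified.

```python
# Sketch: convert a set of 3D data entries into two fields whose values
# describe different characteristics of the entries mapped to each cell.
def to_fields(points, size=2):
    counts = [[0] * size for _ in range(size)]      # first field
    zsum = [[0.0] * size for _ in range(size)]
    for x, y, z in points:
        counts[y][x] += 1
        zsum[y][x] += z
    mean_z = [[zsum[r][c] / counts[r][c] if counts[r][c] else 0.0
               for c in range(size)] for r in range(size)]  # second field
    return counts, mean_z

pts = [(0, 0, 1.0), (0, 0, 3.0), (1, 1, 5.0)]
counts, mean_z = to_fields(pts)
print(counts)  # [[2, 0], [0, 1]]
print(mean_z)  # [[2.0, 0.0], [0.0, 5.0]]
```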
-
Patent number: 11657596
Abstract: Embodiments of the present invention provide a system that can be used to classify a feedback image in a user review into a semantically meaningful class. During operation, the system analyzes the captions of feedback images in a set of user reviews and determines a set of training labels from the captions. The system then trains an image classifier with the set of training labels and the feedback images. Subsequently, the system generates a signature for a respective feedback image in a new set of user reviews using the image classifier. The signature indicates a likelihood of the image matching a respective label in the set of training labels. Based on the signature, the system can allocate the image to an image cluster.
Type: Grant
Filed: January 6, 2021
Date of Patent: May 23, 2023
Assignee: Medallia, Inc.
Inventors: Andrew J. Yeager, Ji Fang
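The label-derivation and signature steps can be sketched with stand-ins: a naive word-frequency count replaces the patent's unspecified caption analysis, and a binary membership test replaces the trained classifier's likelihood estimate. All names and the stopword list are illustrative.

```python
# Sketch: derive training labels from caption text, then build a
# signature indicating which labels an image matches.
from collections import Counter

STOPWORDS = {"the", "a", "of", "was", "is", "in", "again"}

def training_labels_from_captions(captions, top_n=2):
    words = Counter()
    for caption in captions:
        words.update(w for w in caption.lower().split() if w not in STOPWORDS)
    return [word for word, _ in words.most_common(top_n)]

def signature(image_labels, label_set):
    # Likelihood-style signature: 1.0 where the image matches a label.
    return [1.0 if label in image_labels else 0.0 for label in label_set]

captions = ["the food was cold", "cold food again", "food arrived cold"]
labels = training_labels_from_captions(captions)
print(labels)               # ['food', 'cold']
sig = signature({"cold"}, labels)
print(sig)                  # [0.0, 1.0]
```

The signature vector is what the system would then use to allocate the image to a cluster.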
-
Patent number: 11651150
Abstract: The need for extracting information trapped in unstructured document images is becoming more acute. A major hurdle to this objective is that these images often contain information in the form of tables, and extracting data from tabular sub-images presents a unique set of challenges. Embodiments of the present disclosure provide systems and methods that implement a deep learning network for both table detection and structure recognition, wherein the interdependence between table detection and table structure recognition is exploited to segment out the table and column regions. This is followed by semantic rule-based row extraction from the identified tabular sub-regions.
Type: Grant
Filed: March 9, 2020
Date of Patent: May 16, 2023
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Shubham Singh Paliwal, Vishwanath Doreswamy Gowda, Rohit Rahul, Monika Sharma, Lovekesh Vig
-
Patent number: 11645540
Abstract: A method for employing a differentiable ranking based graph sparsification (DRGS) network to use supervision signals from downstream tasks to guide graph sparsification is presented. In a training phase, the method includes generating node representations by neighborhood aggregation operators, generating sparsified subgraphs by top-k neighbor sampling from a learned neighborhood ranking distribution, feeding the sparsified subgraphs to a task, generating a prediction, and collecting the prediction error to update the parameters in the generating and feeding steps so as to minimize the error. In a testing phase, the method includes generating node representations by neighborhood aggregation operators on the testing data, generating sparsified subgraphs by top-k neighbor sampling from a learned neighborhood ranking distribution on the testing data, feeding those sparsified subgraphs to a task, and outputting prediction results to a visualization device.
Type: Grant
Filed: July 23, 2020
Date of Patent: May 9, 2023
Assignee: NEC Corporation
Inventors: Bo Zong, Cheng Zheng, Haifeng Chen
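The top-k neighbor sampling step can be sketched in isolation: each node keeps only its k highest-ranked neighbors under a ranking score. In the patent the scores come from a learned, differentiable ranking distribution; here they are supplied directly for illustration.

```python
# Sketch: sparsify a graph by keeping each node's top-k ranked neighbors.
def sparsify(adjacency, scores, k=2):
    """adjacency: {node: [neighbors]}; scores: {(u, v): rank score}."""
    sparse = {}
    for node, neighbors in adjacency.items():
        ranked = sorted(neighbors, key=lambda v: scores[(node, v)], reverse=True)
        sparse[node] = ranked[:k]  # top-k neighbor sampling
    return sparse

adj = {"a": ["b", "c", "d"], "b": ["a"]}
scores = {("a", "b"): 0.9, ("a", "c"): 0.1, ("a", "d"): 0.5, ("b", "a"): 1.0}
print(sparsify(adj, scores))  # {'a': ['b', 'd'], 'b': ['a']}
```

The sparsified subgraph would then be fed to the downstream task, whose prediction error drives updates to the ranking scores.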
-
Patent number: 11636699
Abstract: Embodiments of the present disclosure relate to a method and apparatus for recognizing a table, a device, and a medium. An embodiment of the method can include: detecting a table on a target picture to obtain a candidate table recognition result; extracting a merging feature of the candidate table recognition result and determining a to-be-merged row in the candidate table recognition result based on the merging feature; extracting a direction feature of the to-be-merged row and determining a merging direction of the to-be-merged row based on the direction feature; and adjusting the candidate table recognition result based on the to-be-merged row and its merging direction to obtain a target table recognition result.
Type: Grant
Filed: December 10, 2020
Date of Patent: April 25, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD
Inventors: Guangyao Han, Minhui Pang, Guobin Xie, Danqing Li, Tianyi Wang, Peiwei Zheng, Zeqing Jiang, Jin Zhang, Hongjiang Du
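The final adjustment step can be sketched as a row-merging pass: rows flagged as to-be-merged are folded into the neighbor given by their merging direction. The features that produce the flags and directions are extracted by the patented method; here a precomputed plan is assumed.

```python
# Sketch: fold each to-be-merged row into its neighbor ("up" or "down"),
# producing the adjusted table recognition result.
def merge_rows(rows, merge_plan):
    """rows: list of cell lists; merge_plan: {row index: 'up' or 'down'}."""
    out = [list(r) for r in rows]
    for i in sorted(merge_plan, reverse=True):
        target = i - 1 if merge_plan[i] == "up" else i + 1
        out[target] = [(a + " " + b).strip() for a, b in zip(out[target], out[i])]
        out[i] = None
    return [r for r in out if r is not None]

# A wrapped cell "(blue)" detected as a spurious extra row, merged upward.
rows = [["Name", "Qty"], ["Widget", "2"], ["(blue)", ""]]
print(merge_rows(rows, {2: "up"}))  # [['Name', 'Qty'], ['Widget (blue)', '2']]
```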
-
Patent number: 11630549
Abstract: Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
Type: Grant
Filed: April 18, 2022
Date of Patent: April 18, 2023
Assignee: UiPath, Inc.
Inventor: Vaclav Skarda
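The screen-diff step shared by this family of patents can be sketched with toy frames: compare two captures and report the bounding box of changed cells, which is where text recognition or caret detection would then run. Frames are character grids standing in for pixel buffers.

```python
# Sketch: identify the screen region that changed between two frames.
def changed_region(before, after):
    rows = [r for r in range(len(before)) if before[r] != after[r]]
    if not rows:
        return None  # nothing typed or pasted
    cols = [c for r in rows for c in range(len(before[0]))
            if before[r][c] != after[r][c]]
    return (min(rows), min(cols), max(rows), max(cols))

before = ["....", "....", "...."]
after  = ["....", ".ab.", "...."]
print(changed_region(before, after))  # (1, 1, 1, 2)
```

Mapping the returned box to the element under it is how the active or focused element would be inferred.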
-
Patent number: 11625138
Abstract: Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
Type: Grant
Filed: May 20, 2021
Date of Patent: April 11, 2023
Assignee: UiPath, Inc.
Inventor: Vaclav Skarda
-
Patent number: 11625935
Abstract: A system for classification of scholastic works includes a computing device configured to: receive a first scholastic work; identify an author and a category of the first scholastic work; determine at least a work theme by receiving theme training data including a plurality of entries, each entry including a training textual element and a correlated theme, training a theme classifier as a function of the training data, and determining the at least a work theme as a function of the plurality of textual elements and the theme classifier; calculate a reliability quantifier as a function of the at least a theme, the author, and the category; select the scholastic work as a function of the reliability quantifier; derive, from the scholastic work, at least a correlation between a dietary practice and alleviation of a disease state; and store the at least a correlation in an expert database.
Type: Grant
Filed: March 9, 2022
Date of Patent: April 11, 2023
Assignee: KPN INNOVATIONS, LLC.
Inventor: Kenneth Neumann
-
Patent number: 11620841
Abstract: This disclosure is directed to methods and systems that enable automatic recognition of the meaning, sentiment, and intent of an Internet meme. An Internet meme refers to a digitized image, video, or sound that is a unit of cultural information, carries symbolic meaning representing a particular phenomenon or theme, and is generally known and understood by members of a particular culture. The disclosed methods include automatic identification of a meme template and automatic detection of the sentiment and relationships between entities in the meme. The methods provide the determination of a meme's meaning as intended by its purveyors, as well as recognition of the original sentiment and attitudes conveyed by the use of entities within the meme.
Type: Grant
Filed: November 2, 2021
Date of Patent: April 4, 2023
Assignee: ViralMoment Inc.
Inventors: Chelsie Morgan Hall, Connyre Hamalainen, Gareth Morinan, Sheyda Demooei
-
Patent number: 11610415
Abstract: An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to align the first and second point clouds, find nearest neighbors of points in the first point cloud to points in the second point cloud, eliminate the nearest neighbors of points in the second point cloud such that remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise, generate an output comprising at least the remaining points of the second point cloud associated with the articulatable part without the noise points, and communicate the output to the output interface.
Type: Grant
Filed: January 18, 2021
Date of Patent: March 21, 2023
Assignee: Palo Alto Research Center Incorporated
Inventors: Matthew Shreve, Sreenivas Venkobarao
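The nearest-neighbor elimination step can be sketched with a brute-force pass: points in the second (post-articulation) cloud that have a close counterpart in the first cloud are eliminated, leaving the moved part. Alignment and noise filtering are omitted, and the distance threshold is an assumed parameter.

```python
# Sketch: isolate the articulated part by removing second-cloud points
# whose nearest neighbor in the first cloud is within a threshold.
import math

def isolate_articulated(cloud1, cloud2, threshold=0.1):
    remaining = []
    for p in cloud2:
        nearest = min(math.dist(p, q) for q in cloud1)
        if nearest > threshold:      # no static counterpart -> it moved
            remaining.append(p)
    return remaining

static = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
moved  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 2.0, 0.0)]
print(isolate_articulated(static, moved))  # [(1.0, 2.0, 0.0)]
```

A production implementation would use a spatial index (e.g. a k-d tree) rather than the O(n²) scan shown here.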
-
Patent number: 11600060
Abstract: The present disclosure discloses a nonlinear all-optical deep-learning system and method with multistage space-frequency domain modulation. The system includes an optical input module, configured to convert input information to optical information; a multistage space-frequency domain modulation module, configured to perform multistage space-frequency domain modulation on the optical information generated by the optical input module so as to generate modulated optical information; and an information acquisition module, configured to transform the modulated optical information onto a Fourier plane or an image plane and to acquire the transformed optical information so as to generate processed optical information.
Type: Grant
Filed: June 4, 2020
Date of Patent: March 7, 2023
Assignee: TSINGHUA UNIVERSITY
Inventors: Qionghai Dai, Tao Yan, Jiamin Wu, Xing Lin
-
Patent number: 11600088
Abstract: In some implementations, a device may receive an image that depicts handwritten text. The device may determine that a section of the image includes the handwritten text. The device may analyze, using a first image processing technique, the section to identify subsections of the section that include individual words of the handwritten text. The device may reconfigure, using a second image processing technique, the subsections to create preprocessed word images associated with the individual words. The device may analyze, using a word recognition model, the preprocessed word images to generate digitized words that are associated with the preprocessed word images. The device may verify, based on a reference data structure, that the digitized words correspond to recognized words of the word recognition model. The device may generate, based on verifying the digitized words, digital text according to a sequence of the digitized words in the section.
Type: Grant
Filed: February 11, 2021
Date of Patent: March 7, 2023
Assignee: Accenture Global Solutions Limited
Inventors: Debasmita Ghosh, Dharmendra Shivnath Prasad Chaurasia, Siddhanth Gupta, Asmita Mahajan, Nidhi, Saurav Mondal
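The verification step can be sketched with a plain word set standing in for the patent's reference data structure: digitized words are checked against it, with close matches substituted for recognition slips. The lexicon contents and similarity cutoff are assumptions for illustration.

```python
# Sketch: verify digitized words against a reference lexicon, then emit
# digital text in the original sequence order.
import difflib

LEXICON = {"invoice", "total", "amount", "paid"}

def verify_words(digitized):
    verified = []
    for word in digitized:
        if word in LEXICON:
            verified.append(word)
        else:
            close = difflib.get_close_matches(word, LEXICON, n=1, cutoff=0.7)
            verified.append(close[0] if close else word)
    return " ".join(verified)

print(verify_words(["tota1", "amount", "paid"]))  # total amount paid
```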
-
Patent number: 11594007
Abstract: Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
Type: Grant
Filed: October 13, 2021
Date of Patent: February 28, 2023
Assignee: UiPath, Inc.
Inventor: Vaclav Skarda
-
Patent number: 11589023
Abstract: An image processing apparatus acquires first shape information representing a three-dimensional shape of an object located within an image capturing region, based on one or more images obtained by one or more imaging apparatuses that capture the image capturing region from a plurality of directions. It likewise acquires second shape information representing a three-dimensional shape of an object located within the image capturing region, based on one or more images obtained by one or more imaging apparatuses. The apparatus acquires viewpoint information indicating the position and direction of a viewpoint, and generates a virtual viewpoint image corresponding to that position and direction based on the acquired first shape information and second shape information, such that at least a part of the object corresponding to the second shape information is displayed in a translucent way within the virtual viewpoint image.
Type: Grant
Filed: October 14, 2020
Date of Patent: February 21, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Kaori Taya
-
Patent number: 11587305
Abstract: A computer-implemented method of learning sensory media association includes: receiving a first type of nontext input and a second type of nontext input; encoding and decoding the first type of nontext input using a first autoencoder having a first convolutional neural network, and the second type of nontext input using a second autoencoder having a second convolutional neural network; bridging first autoencoder representations and second autoencoder representations by a deep neural network that learns mappings between the first autoencoder representations associated with a first modality and the second autoencoder representations associated with a second modality; and, based on the encoding, decoding, and bridging, generating a first type of nontext output and a second type of nontext output based on the first type of nontext input or the second type of nontext input in either the first modality or the second modality.
Type: Grant
Filed: March 14, 2019
Date of Patent: February 21, 2023
Assignee: FUJIFILM Business Innovation Corp.
Inventors: Qiong Liu, Ray Yuan, Hao Hu, Yanxia Zhang, Yin-Ying Chen, Francine Chen
-
Patent number: 11579514
Abstract: A system and method for controlling characteristics of collected image data are disclosed. The system and method include performing pre-processing of an image using GPUs; configuring an optic based on the pre-processing, the configuration being designed to account for features of the pre-processed image; acquiring an image using the configured optic; processing the acquired image using GPUs; and determining whether the processed acquired image accounts for the features of the pre-processed image. If the determination is affirmative, the image is output; if the determination is negative, the configuring of the optic and the re-acquisition of the image are repeated.
Type: Grant
Filed: December 23, 2020
Date of Patent: February 14, 2023
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Allen H. Rush, Hui Zhou
-
Patent number: 11576736
Abstract: A robotic control system has a wand that emits multiple narrow beams of light, which fall on a light sensor array (or, with a camera, on a surface), defining the wand's changing position and attitude. A computer uses these to direct the relative motion of robotic tools or remote processes, such as those controlled by a mouse, but in three dimensions, with motion compensation means and means for reducing latency.
Type: Grant
Filed: June 26, 2020
Date of Patent: February 14, 2023
Assignee: TITAN MEDICAL INC.
Inventor: John D. Unsworth
-
Patent number: 11580763
Abstract: In some aspects, a method includes performing optical character recognition (OCR) based on data corresponding to a document to generate text data, detecting one or more bounded regions from the data based on a predetermined boundary rule set, and matching one or more portions of the text data to the one or more bounded regions to generate matched text data. Each bounded region of the one or more bounded regions encloses a corresponding block of text. The method also includes extracting features from the matched text data to generate a plurality of feature vectors and providing the plurality of feature vectors to a trained machine-learning classifier to generate one or more labels associated with the one or more bounded regions. The method further includes outputting metadata indicating a hierarchical layout associated with the document based on the one or more labels and the matched text data.
Type: Grant
Filed: May 18, 2020
Date of Patent: February 14, 2023
Assignee: Thomson Reuters Enterprise Centre GmbH
Inventors: Khaled Ammar, Brian Zubert, Sakif Hossain Khan
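The matching and labeling steps can be sketched with box containment standing in for the boundary rule set, and a keyword rule standing in for the trained machine-learning classifier; all names and coordinates are illustrative.

```python
# Sketch: match OCR words to bounded regions by containment, then label
# each region with a stand-in classifier.
def match_text_to_regions(words, regions):
    """words: [(text, x, y)]; regions: [(x0, y0, x1, y1)]."""
    matched = {i: [] for i in range(len(regions))}
    for text, x, y in words:
        for i, (x0, y0, x1, y1) in enumerate(regions):
            if x0 <= x <= x1 and y0 <= y <= y1:
                matched[i].append(text)
    return {i: " ".join(ws) for i, ws in matched.items()}

def label_region(text):              # stand-in for the trained classifier
    return "heading" if text.isupper() else "body"

words = [("SUMMARY", 5, 5), ("Payment", 5, 30), ("due.", 30, 30)]
regions = [(0, 0, 100, 10), (0, 20, 100, 40)]
matched = match_text_to_regions(words, regions)
region_labels = {i: label_region(t) for i, t in matched.items()}
print(region_labels)  # {0: 'heading', 1: 'body'}
```

The per-region labels are what the method would assemble into the hierarchical-layout metadata.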
-
Patent number: 11574489
Abstract: According to the present disclosure, a handwriting image and a background image are combined, thereby generating a combined image; a correct answer label image is generated based on the handwriting image; and the generated combined image and the generated correct answer label image are used as learning data for training a neural network.
Type: Grant
Filed: August 19, 2020
Date of Patent: February 7, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Yusuke Muramatsu
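The data-generation scheme can be sketched with toy grayscale grids: the combined image takes the darker pixel of handwriting and background, and the correct answer label image is derived by thresholding the handwriting alone. The darkest-pixel compositing and the threshold value are assumptions; the abstract does not specify the combining method.

```python
# Sketch: build one (combined image, correct answer label image) training
# pair. Images are 2D lists of grayscale values (0 = black ink, 255 = white).
def make_training_pair(handwriting, background, ink_threshold=128):
    combined = [[min(h, b) for h, b in zip(hrow, brow)]
                for hrow, brow in zip(handwriting, background)]
    label = [[1 if h < ink_threshold else 0 for h in hrow]
             for hrow in handwriting]
    return combined, label  # (network input, supervision target)

hw = [[255, 0], [255, 255]]
bg = [[200, 200], [200, 200]]
combined, label = make_training_pair(hw, bg)
print(combined)  # [[200, 0], [200, 200]]
print(label)     # [[0, 1], [0, 0]]
```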