Patents Examined by Kevin Ky
  • Patent number: 11968344
    Abstract: A watermark image may be generated that includes a first set of encoded pixels each of which is assigned a first transparency value and a second set of encoded pixels each of which is assigned a second transparency value, the second transparency value being different from the first transparency value. The encoded pixels may be distributed among a set of blank pixels such that each encoded pixel neighbors one or more blank pixels in the watermark image, and in particular at least two blank pixels in the watermark image. Here, each blank pixel may be assigned the second transparency value. The watermark image may be overlaid and blended over a background source image to create an encoded source image. A decoder system may recover encoded information from the encoded source image. (See the code sketch following this entry.)
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: April 23, 2024
    Assignee: Google LLC
    Inventors: Abdullah Hassan Gharaibeh, Michal Dabrowski, Ryan Matthew Haggarty, Igor Foox-Rapoport, Wan Wang, Duncan Geoffrey Hector Wood, Dany Kuminov, Matthew Young-Lai, Bhavin Vyas, George Jacob Levitte, Jean Semere
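    Code sketch: a minimal numpy illustration (not Google's implementation) of the transparency-based encoding and alpha blending described in the abstract; the 2x2 spreading pattern, the two alpha values, and the white watermark colour are assumptions.
```python
import numpy as np

# Illustrative transparency (alpha) values; the patent leaves the exact values open.
ALPHA_FIRST = 0.10    # first transparency value: encoded pixels carrying a "1"
ALPHA_SECOND = 0.00   # second transparency value: encoded "0" pixels and blank pixels

def make_watermark(bits, height, width):
    """Build a watermark alpha plane: one encoded pixel per 2x2 cell, so every
    encoded pixel neighbors blank pixels (which all get the second value)."""
    alpha = np.full((height, width), ALPHA_SECOND, dtype=np.float32)
    rows, cols = np.meshgrid(range(0, height, 2), range(0, width, 2), indexing="ij")
    for bit, (r, c) in zip(bits, zip(rows.ravel(), cols.ravel())):
        alpha[r, c] = ALPHA_FIRST if bit else ALPHA_SECOND
    return alpha

def blend(source_rgb, alpha, watermark_rgb=1.0):
    """Overlay and alpha-blend a (white) watermark over the background source image."""
    a = alpha[..., None]
    return (1.0 - a) * source_rgb + a * watermark_rgb

source = np.random.rand(8, 8, 3).astype(np.float32)   # background source image
encoded_source = blend(source, make_watermark([1, 0, 1, 1], 8, 8))
print(encoded_source.shape)  # (8, 8, 3)
```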
  • Patent number: 11960846
    Abstract: Systems and methods are presented for inferring an embedding vector for an item of a first type in the embedding space. Upon receiving a first item for which there is no embedding vector, documents of a document corpus that include both the received item and other items of the same type (co-occurrence) are identified. Of those other items that have embedding vectors, those embedding vectors are retrieved and averaged. The resulting averaged embedding vector is established as an inferred embedding vector for the received item. (See the code sketch following this entry.)
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: April 16, 2024
    Assignee: Pinterest, Inc.
    Inventors: Heath Vinicombe, Chenyi Li, Yunsong Guo, Yu Liu
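    Code sketch: a short illustration of the averaging step described in the abstract (the production implementation is not shown here); the corpus and embedding structures are simplified assumptions.
```python
import numpy as np

def infer_embedding(new_item, corpus, embeddings):
    """Average the embeddings of same-type items that co-occur with `new_item`.

    corpus: iterable of documents, each a set of item ids
    embeddings: dict mapping item id -> numpy vector (known embeddings only)
    """
    co_occurring = {other
                    for doc in corpus if new_item in doc
                    for other in doc if other != new_item and other in embeddings}
    if not co_occurring:
        return None  # no co-occurrence evidence, so nothing to average
    return np.mean([embeddings[i] for i in co_occurring], axis=0)

embeddings = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
corpus = [{"new", "a"}, {"new", "b"}, {"a", "b"}]
print(infer_embedding("new", corpus, embeddings))  # -> [0.5 0.5]
```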
  • Patent number: 11954173
    Abstract: A method, an electronic device, and a computer program product for processing data are disclosed. The method includes training a classification model based on a plurality of reference documents describing different objects, the trained classification model respectively associating the plurality of reference documents with the described objects. The method further includes determining, from the individual words, identification information that can identify the objects based on the contributions of individual words in the reference documents to the association. Identification information that can identify objects in documents describing the objects may thus be determined, so that an identification information data set is automatically generated for training a machine learning model that is used to determine the identification information. (See the code sketch following this entry.)
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: April 9, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Yuting Zhang, Kaikai Jia
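    Code sketch: one way to read per-word contributions out of a trained classifier, using TF-IDF features and logistic regression as an assumed stand-in for the patent's classification model; the documents and labels are toy data.
```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Reference documents, each describing an object (illustrative data).
docs = ["router model X100 dual band gigabit", "printer model P20 laser duplex",
        "router model X200 mesh tri band", "printer model P30 inkjet color"]
objects = ["router", "printer", "router", "printer"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
model = LogisticRegression().fit(X, objects)   # associates documents with objects

# Contribution of each word to the association = magnitude of its learned weight;
# the highest-contributing words become candidate identification information.
words = vectorizer.get_feature_names_out()
contributions = np.abs(model.coef_).max(axis=0)
print(sorted(zip(words, contributions), key=lambda p: -p[1])[:5])
```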
  • Patent number: 11935238
    Abstract: There is provided a method of generating a dataset of synthetic images, comprising, for each real image depicting a real human anatomical structure: extracting and preserving a real anatomical structure region(s) from the real image, generating a synthetic image comprising a synthetic human anatomical structure region and the preserved real anatomical structure region(s), designating pairs of images, each including the real image and the synthetic image, feeding the pair into a machine learning model trained to recognize anatomical structure parts to obtain an outcome of a similarity value denoting an amount of similarity between the real image and the synthetic image, verifying that the synthetic image does not depict the real human anatomical structure when the similarity value is below a threshold, wherein an identity of the real human anatomical structure is non-determinable from the synthetic image, and including the verified synthetic image in the dataset.
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: March 19, 2024
    Assignee: RealizeMD Ltd.
    Inventors: Stanislav Khirman, Uri Neeman, Alon Gat, David P. Rapaport
  • Patent number: 11922121
    Abstract: The present disclosure provides a method and an apparatus for information extraction, an electronic device, and a storage medium. The method for information extraction includes: first obtaining text data, and then inputting the text data into an information extraction model obtained through pre-training to obtain triple information contained in the text data, wherein the triple information includes a subject, a predicate and an object in the text data. The information extraction model includes a binary classification sub-model and a multi-label classification sub-model, wherein the binary classification sub-model is configured to extract the subject in the text data, and the multi-label classification sub-model is configured to extract the predicate and the object corresponding to the subject in the text data according to the subject and the text data.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: March 5, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Bingqian Wang
  • Patent number: 11915343
    Abstract: Systems and methods for color representation are described. Embodiments of the inventive concept are configured to receive an attribute-object pair including a first term comprising an attribute label and a second term comprising an object label, encode the attribute-object pair to produce encoded features using a neural network that orders the first term and the second term based on the attribute label and the object label, and generate a color profile for the attribute-object pair based on the encoded features, wherein the color profile is based on a compositional relationship between the first term and the second term.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: February 27, 2024
    Assignee: ADOBE INC.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Dhananjay Raut, Nihal Jain, Praneetha Vaddamanu, Shraiysh Vaishay
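    Code sketch: an assumed PyTorch stand-in for the encoder described in the abstract; it embeds the attribute and object terms, keeps their order by concatenation, and maps the encoded features to a single RGB value (a real color profile would likely be richer than one triplet).
```python
import torch
import torch.nn as nn

class AttributeObjectColorModel(nn.Module):
    """Toy order-aware encoder for an (attribute, object) pair -> RGB colour."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Concatenating [attribute ; object] preserves the ordering of the two terms.
        self.encoder = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.color_head = nn.Sequential(nn.Linear(dim, 3), nn.Sigmoid())

    def forward(self, attribute_ids, object_ids):
        pair = torch.cat([self.embed(attribute_ids), self.embed(object_ids)], dim=-1)
        features = self.encoder(pair)        # encoded features for the pair
        return self.color_head(features)     # RGB values in [0, 1]

model = AttributeObjectColorModel(vocab_size=1000)
rgb = model(torch.tensor([3]), torch.tensor([57]))   # e.g. ("ripe", "banana") as ids
print(rgb.shape)  # torch.Size([1, 3])
```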
  • Patent number: 11908219
    Abstract: The disclosure provides a method and a device for processing information, an electronic device, and a storage medium, belonging to the field of artificial intelligence, including computer vision, deep learning, and natural language processing. In the method, the computing device recognizes multiple text items in an image. The computing device classifies the multiple text items into a first set of name text items and a second set of content text items based on the semantics of the text items. The computing device performs a matching operation between the first set and the second set based on a layout of the text items in the image, and determines matched name-content text items. The matched name-content text items include a name text item in the first set and a content text item in the second set matching the name text item. The computing device outputs the matched name-content text items. (See the code sketch following this entry.)
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: February 20, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Zihan Ni, Yipeng Sun, Kun Yao, Junyu Han, Errui Ding, Jingtuo Liu, Haifeng Wang
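    Code sketch: a simplified, assumed version of the layout-based matching step; each recognized text item carries a position, and every name item is paired with the nearest content item. The real method's semantics-based classification and layout rules are not spelled out in the abstract.
```python
import math

def match_by_layout(name_items, content_items):
    """Pair each name text item with the closest content text item in the image.

    Each item is a dict like {"text": "Name", "x": 10, "y": 20} giving the
    position of its text box (illustrative representation).
    """
    return [(name["text"],
             min(content_items,
                 key=lambda c: math.hypot(c["x"] - name["x"], c["y"] - name["y"]))["text"])
            for name in name_items]

names = [{"text": "Name", "x": 10, "y": 20}, {"text": "Date", "x": 10, "y": 60}]
contents = [{"text": "Jane Doe", "x": 120, "y": 22},
            {"text": "2021-04-29", "x": 120, "y": 61}]
print(match_by_layout(names, contents))
# -> [('Name', 'Jane Doe'), ('Date', '2021-04-29')]
```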
  • Patent number: 11893813
    Abstract: An electronic device and a control method therefor are provided. The present electronic device comprises: a communication interface including a circuit, a memory for storing at least one instruction, and a processor for executing the at least one instruction, wherein the processor acquires contents through the communication interface, acquires information about a text included in an image of the contents, and acquires, on the basis of the information about the text included in the image of the contents, caption data of the contents by performing voice recognition for voice data included in the contents.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: February 6, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jeongho Mok, Heejun Song, Sanghyuk Yoon
  • Patent number: 11893659
    Abstract: The present invention relates to a method and system that allows input mammography images to be converted between domains. More particularly, the present invention relates to converting mammography images from the image style common to one manufacturer of imaging equipment to the image style common to another manufacturer of imaging equipment. Aspects and/or embodiments seek to provide a method of converting input images from the format output by one imaging device into the format normally output by another imaging device. The imaging devices may differ in their manufacturer, model or configuration such that they produce different styles of image, even if presented with the same raw input data, due to the image processing used in the imaging device(s).
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: February 6, 2024
    Assignee: Kheiron Medical Technologies Ltd.
    Inventors: Tobias Rijken, Michael O'Neill, Andreas Heindl, Joseph Yearsley, Dimitrios Korkinof, Galvin Khara
  • Patent number: 11893542
    Abstract: A data management server computer ("server") and related methods are disclosed to enhance electronic documents or search processes related to skill data, including skill names, using machine learning techniques. The server is programmed to organize skill names from a dictionary into a knowledge graph. The knowledge graph includes nodes that represent the skill names and the words of the skill names, and edges that represent syntactic or semantic relationships. The server is programmed to respond to skill-related requests using the knowledge graph and to track access patterns of the knowledge graph in responding to the requests. The replies to the requests can be generated based on the structure or access patterns of the knowledge graph and used as search results or as skill-related data, such as resumes or job descriptions. (See the code sketch following this entry.)
    Type: Grant
    Filed: October 23, 2021
    Date of Patent: February 6, 2024
    Assignee: SkyHive Technologies Holdings Inc.
    Inventors: Sean Hinton, Mohan Reddy, Sergey Bukharov, Yuri Yerastov
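    Code sketch: a toy networkx construction of the skill knowledge graph described above (nodes for skill names and their words, edges for a "contains" relationship) plus one simple graph-based reply; semantic edges and access-pattern tracking are omitted, and the data is invented.
```python
import networkx as nx

skills = ["machine learning", "deep learning", "project management"]

graph = nx.Graph()
for skill in skills:
    graph.add_node(skill, kind="skill")
    for word in skill.split():
        graph.add_node(word, kind="word")
        graph.add_edge(skill, word, relation="contains")  # syntactic relationship

def related_skills(query):
    """Reply to a skill-related request: skills sharing a word with the query."""
    words = [w for w in graph.neighbors(query) if graph.nodes[w]["kind"] == "word"]
    return {s for w in words for s in graph.neighbors(w)
            if graph.nodes[s]["kind"] == "skill" and s != query}

print(related_skills("machine learning"))  # -> {'deep learning'}
```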
  • Patent number: 11893592
    Abstract: A method and system for incentivized neural network training and assurance processes provides incentives to object miners to identify objects in video streams for the purposes of enhancing the training of computer-implemented neural networks on the identified objects and/or augmenting the results of automatic object identification by trained neural networks. An object mining user interface and process is provided to object miners that provides incentives for identifying objects in video streams and technical capabilities for designating identified objects within multiple multi-dimensional regions of pixels. Incentives may be token-based and in accordance with end user interactions within a visual user interface with representations of the miner-identified objects within a video stream.
    Type: Grant
    Filed: January 20, 2023
    Date of Patent: February 6, 2024
    Assignee: Revealit Corporation
    Inventors: Garry Anthony Smith, Zachary Oakes, Steven Dennis Flinn
  • Patent number: 11893514
    Abstract: A contextual-based method and system for identifying and revealing objects from video directs a focus of attention to images or sequences of images responsive to a command, interrogative, or inferred preference of a user. A probability is assigned to an object that is inferred to be represented in the images or sequence of images. The probability is generated by application of a computer-implemented neural network. The probability is then updated based upon a context within which the representation of the object is inferred to be situated. The context may be inferred from one or more inferences related to one or more archetypical objects that are associated with the context. In accordance with the updated probability, a communication may be delivered that references attributes of the object and/or the user may direct a command to the representation of the object.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: February 6, 2024
    Assignee: Revealit Corporation
    Inventors: Garry Anthony Smith, Naomi Felina Moneypenny, Zachary Oakes, Steven Dennis Flinn, Michael George Renie
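    Code sketch: one plausible reading of the probability-update step, treating the neural network's score as a prior and the inferred context as evidence in a Bayes update; the abstract does not commit to this particular formula, and the numbers are invented.
```python
def update_with_context(p_object, p_context_given_object, p_context_given_not_object):
    """Bayes update of an object probability given the inferred context.

    p_object: prior probability from the neural network that the object is depicted
    p_context_given_*: how likely this context is when the object is / is not present
    (e.g. a frying pan is more plausible in an inferred kitchen context).
    """
    numerator = p_context_given_object * p_object
    denominator = numerator + p_context_given_not_object * (1.0 - p_object)
    return numerator / denominator

# The network gives 0.6 for "frying pan"; a kitchen context raises the probability.
print(update_with_context(0.6, p_context_given_object=0.8, p_context_given_not_object=0.2))
# -> 0.857...
```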
  • Patent number: 11887217
    Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
  • Patent number: 11880662
    Abstract: In some examples, matrix-based bot implementation may include obtaining, for a plurality of bots that are used to respond to a query, a matrix that includes entries including a plurality of scenarios, a plurality of questions corresponding to the plurality of scenarios, and a plurality of responses. Each response may correspond to a specified question. A plurality of scripts may be generated based on an analysis of the matrix. Each script may include at least one question followed by at least one response, and further followed by at least one scenario. For each script, a closest pre-existing script may be identified based on a comparison of the script to pre-existing scripts. For each script, a modification to the matrix may be generated based on a difference between the script and the closest pre-existing script. The bots may be utilized to respond to the query based on the modified matrix. (See the code sketch following this entry.)
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: January 23, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Jokko Korhonen
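    Code sketch: an invented, minimal version of two steps from the abstract: generating question-response-scenario scripts from the matrix and finding the closest pre-existing script (here with difflib similarity); the matrix schema and all data are assumptions.
```python
from difflib import SequenceMatcher

# Matrix entries: scenario, question for that scenario, response to that question.
matrix = [
    {"scenario": "billing issue", "question": "Why was I charged twice?",
     "response": "I can refund the duplicate charge."},
    {"scenario": "login issue", "question": "I forgot my password.",
     "response": "I will send you a reset link."},
]

# Each script: at least one question, followed by a response, followed by a scenario.
scripts = [f'{e["question"]} {e["response"]} {e["scenario"]}' for e in matrix]

pre_existing = [
    "Why did I get billed twice? I can refund the extra charge. billing issue",
    "How do I change my email? Go to account settings. account issue",
]

def closest_pre_existing(script):
    """Pre-existing script most similar to the generated script."""
    return max(pre_existing, key=lambda p: SequenceMatcher(None, script, p).ratio())

for script in scripts:
    print(script, "->", closest_pre_existing(script))
```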
  • Patent number: 11880986
    Abstract: A framework for gantry alignment of a multimodality medical scanner. First image data of a non-radioactive structure is acquired by using intrinsic radiation emitted by scintillator crystals of detectors in a first gantry of the multimodality medical scanner. Second image data of the non-radioactive structure is acquired using a second gantry for another modality of the multimodality medical scanner. Image reconstruction may be performed based on the first and second image data of the non-radioactive structure to generate first and second reconstructed image volumes. A gantry alignment transformation that aligns the first and second reconstructed image volumes may then be determined.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: January 23, 2024
    Assignee: Siemens Medical Solutions USA, Inc.
    Inventors: Paul Schleyer, Deepak Bharkhada, Harold E. Rothfuss, Mohammadreza Teimoorisichani, Dieter Ritter
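    Code sketch: only the final alignment step, assuming corresponding landmark points have already been extracted from the two reconstructed image volumes; the rigid gantry alignment transformation is estimated with the Kabsch (SVD) method, which the abstract itself does not mandate.
```python
import numpy as np

def rigid_alignment(points_a, points_b):
    """Rotation R and translation t mapping points_a onto points_b (Kabsch/SVD)."""
    centroid_a, centroid_b = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - centroid_a).T @ (points_b - centroid_b)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = centroid_b - R @ centroid_a
    return R, t

# Landmarks from the first (e.g. PET) and second (e.g. CT) reconstructed volumes.
a = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
angle = np.deg2rad(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
b = a @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_alignment(a, b)
print(np.allclose(R, R_true), np.round(t, 3))   # True [ 2.  -1.   0.5]
```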
  • Patent number: 11861829
    Abstract: The present disclosure provides a deep learning based medical image detection method and apparatus, a computer-readable medium, and an electronic device. The method includes: acquiring a to-be-detected medical image comprising a plurality of slices; for each slice in the to-be-detected medical image: extracting N basic feature maps of the slice by a deep neural network, N being an integer greater than 1, merging features of the N basic feature maps by the deep neural network to obtain M enhanced feature maps, M being an integer greater than 1, and respectively performing a hierarchically dilated convolution operation on the M enhanced feature maps by the deep neural network to generate a superposed feature map for each enhanced feature map; and predicting position information of a region of interest and a confidence score thereof in the to-be-detected medical image by the deep neural network based on the superposed feature maps. (See the code sketch following this entry.)
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: January 2, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Lijun Gong
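    Code sketch: an assumed PyTorch block for the hierarchically dilated convolution operation: parallel 3x3 convolutions with increasing dilation rates applied to an enhanced feature map and superposed by summation. The actual dilation rates and merge rule are not given in the abstract.
```python
import torch
import torch.nn as nn

class HierarchicallyDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with growing dilation rates, summed into a
    superposed feature map (padding = dilation keeps the spatial size)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
             for d in dilations]
        )

    def forward(self, enhanced_feature_map):
        return torch.stack([b(enhanced_feature_map) for b in self.branches]).sum(dim=0)

# One slice's enhanced feature map: batch of 1, 64 channels, 32x32 spatial size.
block = HierarchicallyDilatedConv(channels=64)
superposed = block(torch.randn(1, 64, 32, 32))
print(superposed.shape)  # torch.Size([1, 64, 32, 32])
```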
  • Patent number: 11853703
    Abstract: Disclosed are systems and methods for receiving a plurality of comments at a particular phase of a transaction with a member of a networked system, classifying one or more of the plurality of comments into one of a set of predetermined sentiment classifications, applying a trained machine learning system to select a category from a set of predefined categories for each of the one or more comments, applying a natural language processing module to generate a sub-category for each of the one or more comments, associating the generated sub-categories with their respective categories for the one or more comments, and generating a display of the determined categories for the particular transaction together with the generated sub-categories, each generated sub-category being graphically connected to its respective category.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: December 26, 2023
    Assignee: EBAY INC.
    Inventors: Don Kumudu Janaka Ranatunga, Marie Michelle Rhea Foster, Brandon An Lai, Sanjika Hewavitharana, Jason Diran, Canran Xu
  • Patent number: 11853704
    Abstract: Embodiments of this application disclose a classification model training method, a classification method, a device, and a medium. An initial classification model is first trained by using a first sample set including a large quantity of first samples, to obtain a pre-trained model, each first sample including a social text and an emoticon label corresponding to the social text; and the pre-trained model is then trained by using a second sample set including a small quantity of second samples, to obtain a social text sentiment classification model that takes a social text as an input and a sentiment class probability distribution corresponding to the social text as an output. In this method, the model is trained by combining a large quantity of weakly supervised samples with a small quantity of supervised samples, to ensure that the model obtained through training has better performance without increasing the number of manually labeled samples. (See the code sketch following this entry.)
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: December 26, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Haisong Zhang, Yan Song
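    Code sketch: a deliberately simplified analogue of the two-stage procedure using a linear model with partial_fit: pre-training on a large emoticon-labelled (weakly supervised) set, then refining on a small manually labelled set. The patent's models are neural classifiers, so this only mirrors the shape of the method; all data here is invented.
```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

classes = ["positive", "negative"]
vectorizer = HashingVectorizer(n_features=2 ** 12)
model = SGDClassifier(random_state=0)

# Stage 1: pre-train on weakly supervised samples, where each label comes from
# the emoticon that accompanied the social text.
weak_texts = ["great day at the beach :)", "missed my train again :(",
              "love this song :)", "my phone broke :("]
weak_labels = ["positive", "negative", "positive", "negative"]
model.partial_fit(vectorizer.transform(weak_texts), weak_labels, classes=classes)

# Stage 2: continue training on a small, manually labelled supervised set.
gold_texts = ["the service was wonderful", "worst purchase I have made"]
gold_labels = ["positive", "negative"]
model.partial_fit(vectorizer.transform(gold_texts), gold_labels)

print(model.predict(vectorizer.transform(["such a lovely surprise"])))
```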
  • Patent number: 11842163
    Abstract: Disclosed are a method and apparatus for generating prediction information, an electronic device, and a medium. One embodiment of the method comprises: acquiring at least one input word; generating a word vector for each input word of the at least one input word to obtain a word vector set, wherein the at least one input word is obtained by performing word segmentation on target input text; generating an input text vector on the basis of the word vector set; and generating, on the basis of the input text vector and a user vector, prediction information for predicting a user intention, wherein the user vector is obtained on the basis of user historical record information. In this embodiment, generating prediction information about the user's intention reduces the popping up of unnecessary information, avoids disturbing the user, and thereby improves the user experience. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: December 12, 2023
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Xiaowei Hu, Cheng Yang, Changhu Wang
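    Code sketch: a minimal illustration of the vector flow described above: word vectors of the segmented input are averaged into an input text vector, combined with a user vector derived from history, and scored by a predictor. The concatenation, the logistic scorer, and all values are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
word_vectors = {w: rng.normal(size=8) for w in ["book", "a", "flight", "to", "tokyo"]}

def text_vector(input_words):
    """Input text vector = mean of the word vectors of the segmented input words."""
    return np.mean([word_vectors[w] for w in input_words], axis=0)

def predict_intent(input_words, user_vector, weights, bias=0.0):
    """Probability that the user intends the action, from text + user vectors."""
    features = np.concatenate([text_vector(input_words), user_vector])
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

user_vector = rng.normal(size=8)    # would be derived from user history records
weights = rng.normal(size=16)       # a trained predictor would supply these
score = predict_intent(["book", "a", "flight", "to", "tokyo"], user_vector, weights)
print(round(float(score), 3))       # only show pop-up information when the score is high
```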
  • Patent number: 11836830
    Abstract: A decoder is configured to decode a plurality of texels from a received block of texture data encoded according to the Adaptive Scalable Texture Compression (ASTC) format, and includes a parameter decode unit configured to decode configuration data for the received block of texture data, a colour decode unit configured to decode colour endpoint data for the plurality of texels of the received block in dependence on the configuration data, a weight decode unit configured to decode interpolation weight data for each of the plurality of texels of the received block in dependence on the configuration data, and at least one interpolator unit configured to calculate a colour value for each of the plurality of texels of the received block using the interpolation weight data for that texel and a pair of colour endpoints from the colour endpoint data.
    Type: Grant
    Filed: January 25, 2023
    Date of Patent: December 5, 2023
    Assignee: Imagination Technologies Limited
    Inventors: Kenneth Rovers, Yoong Chert Foo
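    Code sketch: only the per-texel interpolation step of an ASTC-style decode, assuming the colour endpoints and the texel's interpolation weight (0-64 range) have already been decoded; 8-bit endpoint channels are expanded to 16 bits by bit replication and blended as in the ASTC LDR path. Parameter, endpoint, and weight decoding are omitted.
```python
def expand_to_16(c8):
    """Expand an 8-bit endpoint channel to 16 bits by bit replication."""
    return (c8 << 8) | c8

def interpolate_channel(c0_8bit, c1_8bit, weight):
    """Blend two 8-bit endpoint channels with an interpolation weight in [0, 64]."""
    c0, c1 = expand_to_16(c0_8bit), expand_to_16(c1_8bit)
    return (c0 * (64 - weight) + c1 * weight + 32) >> 6   # 16-bit result

def decode_texel(endpoint0_rgba, endpoint1_rgba, weight):
    """Colour value for one texel from its weight and a pair of colour endpoints."""
    return tuple(interpolate_channel(c0, c1, weight)
                 for c0, c1 in zip(endpoint0_rgba, endpoint1_rgba))

# Texel halfway (weight 32) between an opaque red and an opaque blue endpoint.
print(decode_texel((255, 0, 0, 255), (0, 0, 255, 255), weight=32))
# -> (32768, 0, 32768, 65535)
```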