Patents Examined by Michael J Vanchy, Jr.
  • Patent number: 11995907
    Abstract: Methods and distributed computer devices for automatically determining whether a document is genuine. The method involves generating an image of the document, pre-processing the image to obtain at least one segment of the image containing an area of interest, and dividing the at least one segment into portions containing single characters and/or combinations of characters. A validation of at least two single characters and/or at least two character combinations is performed, for each single character and/or character combination, for at least two different categories. Score values are created for each category for each validated single character and/or character combination. Feature vectors are created for each single character and/or character combination, with the respective score values for each category as components. The method involves classifying the feature vectors to determine whether the single character or character combination with which the feature vector is associated is genuine.
    Type: Grant
    Filed: January 5, 2022
    Date of Patent: May 28, 2024
    Assignee: Amadeus S.A.S.
    Inventors: Swagat Parida, Renjith K. Sasidharan
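The per-category scoring and feature-vector classification described above can be sketched in a few lines (the category names, scores, and threshold classifier below are illustrative stand-ins, not the patent's actual models):

```python
# Sketch of the per-character feature-vector idea: each validated
# character gets one score per category; the scores form a feature
# vector that a classifier labels genuine or forged.

def make_feature_vector(scores_by_category):
    """Assemble category scores into a fixed-order feature vector."""
    return [scores_by_category[c] for c in sorted(scores_by_category)]

def classify_genuine(vector, threshold=0.5):
    """Toy stand-in for a trained classifier: average the category
    scores and compare against a threshold."""
    return sum(vector) / len(vector) >= threshold

scores = {"font_shape": 0.9, "ink_texture": 0.8, "alignment": 0.7}
vec = make_feature_vector(scores)  # [0.7, 0.9, 0.8] (alphabetical keys)
print(classify_genuine(vec))       # True: mean 0.8 >= 0.5
```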
  • Patent number: 11989855
    Abstract: A system and method are provided for generating a tone mapping function to reduce the dynamic range of a first image to produce a second image. A luma signal and a plurality of chroma components are determined, and a gamut color correction is then performed. An adaptive function is generated by comparing the luma signal and at least one chroma component of the first and second images.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: May 21, 2024
    Assignee: INTERDIGITAL VC HOLDINGS, INC.
    Inventors: Francois Cellier, Yannick Olivier, Marie-Jean Colaitis, David Touze
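As a rough illustration of reducing dynamic range with a luma tone-mapping function, the sketch below applies a simple Reinhard-style curve (a common operator chosen here only for illustration; the patent's adaptive function and gamut correction are not reproduced):

```python
# Minimal luma tone-mapping sketch: compress HDR luma into a [0, 1]
# SDR signal with a Reinhard-style curve, rescaled so peak maps to 1.

def tone_map(luma, peak=4000.0):
    """Map HDR luma in [0, peak] nits to a [0, 1] SDR signal."""
    x = luma / peak
    return x / (1.0 + x) * 2.0  # monotonic; 0 -> 0.0, peak -> 1.0

print(tone_map(0.0))     # 0.0
print(tone_map(4000.0))  # 1.0
```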
  • Patent number: 11972626
    Abstract: System and method for document image detection, comprising: producing, using a neural network, a superpixel segmentation map of an input image; generating a superpixel binary mask by associating each superpixel of the superpixel segmentation map with a class of a predetermined set of classes; identifying one or more connected components in the superpixel binary mask; for each connected component of the superpixel binary mask, identifying a corresponding minimum bounding polygon; creating one or more image dividing lines based on the minimum bounding polygons; and defining boundaries of one or more objects of interest based on at least a subset of the image dividing lines.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: April 30, 2024
    Assignee: ABBYY Development Inc.
    Inventors: Ivan Zagaynov, Aleksandra Stepina
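The connected-component and bounding-box steps can be illustrated on a plain binary mask (the superpixel segmentation network is out of scope here, and this sketch uses axis-aligned bounding boxes rather than general minimum bounding polygons):

```python
# Flood-fill connected components on a binary mask, returning one
# bounding box per 4-connected component of set cells.

def connected_components(mask):
    """Return a list of bounding boxes (top, left, bottom, right)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, t, l, b, r = [(y, x)], y, x, y, x
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    t, l = min(t, cy), min(l, cx)
                    b, r = max(b, cy), max(r, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((t, l, b, r))
    return boxes

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 0, 1]]
print(connected_components(mask))  # [(0, 0, 1, 1), (1, 3, 2, 3)]
```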
  • Patent number: 11957448
    Abstract: A diagnostic tool and methods of using the tool are provided to quantify an amount of nasal collapse in a patient. The diagnostic tool includes a mask with an endoscope port and an opening to allow air flow, an endoscope with a camera adapted to take an image of the nasal valve, and an air flow sensor adapted to measure an inhalation rate of the patient. The diagnostic tool can quantify a size difference between the nasal valve during inhalation and zero flow by calculating a percentage difference in an area or one or more dimensions of the nasal valve during inhalation and zero flow.
    Type: Grant
    Filed: April 4, 2023
    Date of Patent: April 16, 2024
    Assignee: Spirox, Inc.
    Inventors: Scott J. Baron, Michael H. Rosenthal
  • Patent number: 11961329
    Abstract: The disclosure includes: inputting a first image obtained by capturing an object of authentication moving in a specific direction; inputting a second image, for at least one eye, obtained by capturing a right eye or a left eye of the object; determining whether the second image is of the left eye or the right eye of the object, based on information including the first image, and outputting a determination result associated with the second image as left/right information; comparing characteristic information relevant to the left/right information, acquired from a memory that stores the characteristic information of the right eye and the left eye of each object to be authenticated, with characteristic information associated with the left/right information, and calculating a verification score; and authenticating the object captured in the first image and the second image, based on the verification score, and outputting an authentication result.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: April 16, 2024
    Assignee: NEC CORPORATION
    Inventors: Takashi Shibata, Shoji Yachida, Chisato Funayama, Masato Tsukada, Yuka Ogino, Keiichi Chono, Emi Kitagawa, Yasuhiko Yoshida, Yusuke Mori
  • Patent number: 11941838
    Abstract: The present disclosure provides methods, apparatuses, devices and storage medium for predicting correlation between objects. The method can include: detecting a first object, a second object, and a third object involved in a target image, wherein the first object and the second object represent different body parts, and the third object indicates a body object; determining a joint bounding box surrounding the first object, the second object, and the third object; and predicting correlation between the first object and the second object based on a region corresponding to the joint bounding box in the target image.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: March 26, 2024
    Assignee: SENSETIME INTERNATIONAL PTE. LTD.
    Inventors: Chunya Liu, Xuesen Zhang, Bairun Wang, Jinghuan Chen
  • Patent number: 11922730
    Abstract: The present disclosure provides methods for signature and identity verification and authentication. The system may comprise a plurality of visual capture devices, virtual data, and a plurality of virtual databases. The plurality of visual capture devices may capture a photo or video of the signee, the signature, a witness, or a combination thereof. The system may comprise a plurality of auxiliary authentication components for recording data such as the date, time, and location of the signature verification, as non-limiting examples. The virtual data may comprise visual data and other metadata. The method may comprise uploading the virtual data collected during the signature verification to a blockchain, where a signature authentication may occur or be recorded. Auxiliary authentication devices may include a plurality of audio capture devices, a plurality of geospatial capture devices, such as accelerometers or GPS, a plurality of pressure sensors, or any combination thereof, as non-limiting examples.
    Type: Grant
    Filed: September 4, 2021
    Date of Patent: March 5, 2024
    Inventors: Simon Levin, Robert Davidoff
  • Patent number: 11906286
    Abstract: The invention discloses a deep learning-based temporal phase unwrapping method for fringe projection profilometry. First, four sets of three-step phase-shifting fringe patterns with different frequencies (including 1, 8, 32, and 64) are projected to the tested objects. The three-step phase-shifting fringe images acquired by the camera are processed to obtain the wrapped phase map using a three-step phase-shifting algorithm. Then, a multi-frequency temporal phase unwrapping (MF-TPU) algorithm is used to unwrap the wrapped phase map to obtain a fringe order map of the high-frequency phase with 64 periods. A residual convolutional neural network is built, and its input data are set to be the wrapped phase maps with frequencies of 1 and 64, and the output data are set to be the fringe order map of the high-frequency phase with 64 periods. Finally, the training dataset and the validation dataset are built to train and validate the network.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: February 20, 2024
    Assignee: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Qian Chen, Chao Zuo, Shijie Feng, Yuzhen Zhang, Guohua Gu
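The multi-frequency temporal phase unwrapping (MF-TPU) step that the network learns to replace can be sketched numerically: an unwrapped low-frequency phase predicts the fringe order of the wrapped high-frequency phase (the values below are illustrative):

```python
import math

# MF-TPU sketch: scale the unwrapped low-frequency phase up to the
# high frequency, use it to estimate the integer fringe order k, and
# add 2*pi*k back to the wrapped high-frequency phase.

def unwrap_high_freq(phi_low_unwrapped, phi_high_wrapped, f_low, f_high):
    """Return (fringe_order, unwrapped high-frequency phase)."""
    predicted = phi_low_unwrapped * f_high / f_low
    k = round((predicted - phi_high_wrapped) / (2 * math.pi))
    return k, phi_high_wrapped + 2 * math.pi * k

# Example: a true high-frequency phase of 100.0 rad wraps to
# 100 - 2*pi*15, since floor(100 / (2*pi)) = 15.
true_phase = 100.0
wrapped = true_phase - 2 * math.pi * 15
k, unwrapped = unwrap_high_freq(true_phase / 64, wrapped, 1, 64)
print(k, round(unwrapped, 6))  # 15 100.0
```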
  • Patent number: 11907337
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for realizing a multimodal image classifier. In an aspect, a method includes, for each image of a plurality of images: processing the image by a textual generator model to obtain a set of phrases that are descriptive of the content of the image, wherein each phrase is one or more terms, processing the set of phrases by a textual embedding model to obtain an embedding of predicted text for the image, and processing the image using an image embedding model to obtain an embedding of image pixels of the image. Then a multimodal image classifier is trained on the embeddings of predicted text for the images and the embeddings of image pixels for the images to produce, as output, labels of an output taxonomy to classify an image based on the image as input.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Ariel Fuxman, Aleksei Timofeev, Zhen Li, Chun-Ta Lu, Manan Shah, Chen Sun, Krishnamurthy Viswanathan, Chao Jia
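The fusion of the two modality embeddings into one classifier input can be sketched minimally (concatenation is an assumption made here for illustration; the abstract does not specify how the classifier combines the two embeddings):

```python
# Multimodal fusion sketch: the text embedding (from generated phrases)
# and the pixel embedding are joined into one feature vector that the
# image classifier is trained on. Embedding values are toy stand-ins.

def fuse_embeddings(text_emb, image_emb):
    """Concatenate the two modality embeddings into one feature vector."""
    return list(text_emb) + list(image_emb)

text_emb = [0.1, 0.4]        # e.g. output of a textual embedding model
image_emb = [0.7, 0.2, 0.9]  # e.g. output of an image embedding model
features = fuse_embeddings(text_emb, image_emb)
print(features)  # [0.1, 0.4, 0.7, 0.2, 0.9]
```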
  • Patent number: 11906436
    Abstract: A method of determining analyte concentration in a body fluid with a mobile device having a camera. A user is prompted to apply body fluid to an optical test strip and then waits a predetermined minimum waiting time. The camera captures an image of part of the test field having the body fluid applied thereto. Analyte concentration is determined based on the image captured. The determination includes estimating a point in time of sample application to the test field by taking into account time-dependent information derived from the image captured using a first color channel of a color space. The determination also estimates the concentration of the analyte by taking into account concentration-dependent information derived from the image using a second color channel of the color space.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: February 20, 2024
    Assignee: Roche Diabetes Care, Inc.
    Inventors: Bernd Limburg, Max Berg, Fredrik Hailer, Volker Tuerck, Daria Skuridina, Irina Ostapenko
  • Patent number: 11900697
    Abstract: The technology relates to approaches for determining appropriate stopping locations at intersections for vehicles operating in a self-driving mode. While many intersections have stop lines painted on the roadway, many others have no such lines. Even if a stop line is present, the physical location may not match what is in stored map data, which may be out of date due to construction or line repainting. Aspects of the technology employ a neural network that utilizes input training data and detected sensor data to perform classification, localization and uncertainty estimation processes. Based on these processes, the system is able to evaluate distribution information for possible stop locations. The vehicle uses such information to determine an optimal stop point, which may or may not correspond to a stop line in the map data. This information is also used to update the existing map data, which can be shared with other vehicles.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: February 13, 2024
    Assignee: Waymo LLC
    Inventors: Romain Thibaux, David Harrison Silver, Congrui Hetang
  • Patent number: 11900659
    Abstract: A selecting unit selects first moving image data and second moving image data from a plurality of frame images composing moving image data. A first generating unit generates first training data that is labeled data relating to a specific recognition target from the frame images composing the first moving image data. A learning unit learns a first model recognizing the specific recognition target by using the first training data. A second generating unit generates second training data that is labeled data relating to the specific recognition target from the frame images composing the second moving image data by using the first model.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: February 13, 2024
    Assignee: NEC CORPORATION
    Inventor: Tetsuo Inoshita
  • Patent number: 11893809
    Abstract: A method of re-identifying a rough gemstone comprises providing a 3D model of a first rough gemstone; generating a series of virtual 2D silhouette images of the 3D model; processing each 2D image of the series of virtual 2D silhouette images to obtain a dataset associated with the first rough gemstone; and comparing the dataset of the first rough gemstone with an existing dataset of a rough gemstone. Where the dataset of the first rough gemstone and the existing dataset match each other, the method comprises re-identifying the first rough gemstone as the same rough gemstone from which the existing dataset was obtained.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: February 6, 2024
    Assignee: De Beers UK Ltd
    Inventor: Qi He Hong
  • Patent number: 11875486
    Abstract: Provided in the present disclosure are an image brightness statistical method and an imaging device, related to the image processing field. The method includes: acquiring the bit width of a pixel brightness value of an image to be processed and a maximum acceptable bit width of a block random access memory; dividing the bits of each pixel of said image into multiple groups of bits so that the bit width of each group is less than or equal to the maximum acceptable bit width; performing brightness histogram statistics based on the pixel data of the same groups to produce a brightness histogram component corresponding to each group; determining brightness evaluation value components of each group based on the brightness histogram components corresponding to the groups and the number of pixels of said image; and determining a brightness evaluation value of said image based on the brightness evaluation value components.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: January 16, 2024
    Assignee: BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
    Inventor: Yu Zhang
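The bit-group histogram idea can be checked numerically: splitting a 12-bit pixel into an 8-bit low group and a 4-bit high group, histogramming each group separately, and summing the scaled per-group components reproduces the exact mean brightness (the 12-bit and 8-bit widths are illustrative; the patent parameterizes them by the BRAM's maximum acceptable bit width):

```python
# Bit-group histogram sketch: each narrow histogram fits a smaller
# memory, yet the scaled per-group components sum to the exact mean.

def mean_brightness_by_groups(pixels, total_bits=12, group_bits=8):
    components = []
    for shift in range(0, total_bits, group_bits):
        histogram = {}
        for p in pixels:
            v = (p >> shift) & ((1 << group_bits) - 1)
            histogram[v] = histogram.get(v, 0) + 1
        # Component: mean of this bit group, scaled back to its position.
        components.append(sum(v * n for v, n in histogram.items())
                          * (1 << shift) / len(pixels))
    return sum(components)

pixels = [4095, 0, 100, 2048]
print(mean_brightness_by_groups(pixels))  # 1560.75, i.e. sum(pixels)/4
```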
  • Patent number: 11861925
    Abstract: Systems and methods are disclosed to receive a training data set comprising a plurality of document images, wherein each document image of the plurality of document images is associated with respective metadata identifying a document field containing a variable text; generate, by processing the plurality of document images, a first heat map represented by a data structure comprising a plurality of heat map elements corresponding to a plurality of document image pixels, wherein each heat map element stores a counter of a number of document images in which the document field contains a document image pixel associated with the heat map element; receive an input document image; and identify, within the input document image, a candidate region comprising the document field, wherein the candidate region comprises a plurality of input document image pixels corresponding to heat map elements satisfying a threshold condition.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: January 2, 2024
    Assignee: ABBYY Development Inc.
    Inventors: Stanislav Semenov, Mikhail Lanin
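The heat-map accumulation and thresholding steps can be sketched on a tiny grid (the field coordinates and the `>=` threshold condition below are illustrative):

```python
# Heat-map sketch: each cell counts how many training documents place
# the variable-text field at that pixel; cells meeting the threshold
# form the candidate region for the field.

def build_heat_map(h, w, field_boxes):
    """Accumulate per-pixel counters over training-document field boxes."""
    heat = [[0] * w for _ in range(h)]
    for top, left, bottom, right in field_boxes:
        for y in range(top, bottom + 1):
            for x in range(left, right + 1):
                heat[y][x] += 1
    return heat

def candidate_pixels(heat, threshold):
    """Pixels whose counter satisfies the threshold condition."""
    return [(y, x) for y, row in enumerate(heat)
            for x, c in enumerate(row) if c >= threshold]

# Two training documents place the field in overlapping boxes.
heat = build_heat_map(3, 3, [(0, 0, 1, 1), (1, 1, 2, 2)])
print(candidate_pixels(heat, 2))  # [(1, 1)]: only the overlap survives
```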
  • Patent number: 11847806
    Abstract: Extraction of desired scene text information from an image can be performed and managed. An information management component (IMC) can determine an anchor word based on analysis of an image. To facilitate determining desired text information in the image, IMC can re-orient the image to zero or substantially zero degrees if it determines that the orientation is skewed. IMC can utilize a neural network to determine and apply bounding boxes to text strings in the image. Using a rules-based approach or machine learning techniques, employing a trained machine learning component, IMC can utilize the anchor word along with inline grouping of textual information in the image, deep text recognition analysis, or bounding box prediction to determine or predict the desired text information in the image. IMC can facilitate presenting the desired text information, anchor word, or other information obtained from the image in an editable format.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: December 19, 2023
    Assignee: DELL PRODUCTS, L.P.
    Inventor: Lee Daniel Saeugling
  • Patent number: 11816829
    Abstract: A novel disparity computation technique is presented which comprises multiple orthogonal disparity maps, generated from approximately orthogonal decomposition feature spaces, collaboratively generating a composite disparity map. Using an approximately orthogonal feature set extracted from such feature spaces produces an approximately orthogonal set of disparity maps that can be composited together to produce a final disparity map. Various methods for dimensioning scenes are presented. One approach extracts the top and bottom vertices of a cuboid, along with the set of lines whose intersections define such points. It then defines a unique box from these two intersections as well as the associated lines. Orthographic projection is then attempted, to recenter the box perspective. This is followed by the extraction of the three-dimensional information that is associated with the box, and finally, the dimensions of the box are computed. The same concepts can apply to hallways, rooms, and any other object.
    Type: Grant
    Filed: December 4, 2022
    Date of Patent: November 14, 2023
    Assignee: Golden Edge Holding Corporation
    Inventors: Tarek El Dokor, Jordan Cluster
  • Patent number: 11810401
    Abstract: A method for enhancing user liveness detection is provided that includes receiving image data of a user that includes items of metadata. Moreover, the method includes comparing each item of metadata associated with the received image data against a corresponding item of metadata associated with record image data of the user, and determining whether each item of metadata associated with the received image data matches the corresponding item of metadata. In response to determining at least one item of metadata associated with the received image data does not match the corresponding item of metadata, the method deems the received image data to be genuine and from a live person. In response to determining all items of metadata associated with the received image match the corresponding item of metadata, the method deems the received image data to be fraudulent and not from a living person.
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: November 7, 2023
    Assignee: Daon Technology
    Inventor: Raphael A. Rodriguez
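The metadata-comparison rule described above (an exact match of every item suggests a replayed copy of the record image, so at least one mismatch is treated as live) can be sketched as follows (the field names are illustrative):

```python
# Liveness sketch: a capture whose metadata exactly duplicates the
# stored record image's metadata is deemed a fraudulent replay; any
# differing item indicates a fresh capture from a live person.

def is_live(received_meta, record_meta):
    """Genuine (live) iff any metadata item differs from the record."""
    return any(received_meta.get(k) != record_meta.get(k)
               for k in record_meta)

record = {"device": "phone-a", "timestamp": "2023-01-01T10:00:00"}
replayed = dict(record)  # identical metadata: suspicious duplicate
fresh = {"device": "phone-a", "timestamp": "2023-04-17T09:30:00"}
print(is_live(replayed, record), is_live(fresh, record))  # False True
```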
  • Patent number: 11797084
    Abstract: This application discloses a method for training a gaze tracking model, including: obtaining a training sample set; processing the eye sample images in the training sample set by using an initial gaze tracking model to obtain a predicted gaze vector of each eye sample image; determining a model loss according to a cosine distance between the predicted gaze vector and the labeled gaze vector for each eye sample image; and iteratively adjusting one or more reference parameters of the initial gaze tracking model until the model loss meets a convergence condition, to obtain a target gaze tracking model. According to the solution provided in this application, a gaze tracking procedure is simplified, a difference between a predicted value and a labeled value can be better represented by using the cosine distance as a model loss to train a model, to improve prediction accuracy of the gaze tracking model.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: October 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zheng Zhou, Xing Ji, Yitong Wang, Xiaolong Zhu, Min Luo
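The cosine-distance model loss can be written out directly (pure Python for illustration; an actual training pipeline would compute this inside a deep-learning framework and iterate until the convergence condition is met):

```python
import math

# Cosine-distance loss sketch: the loss is zero when the predicted and
# labeled gaze vectors point the same way, regardless of magnitude.

def cosine_loss(predicted, labeled):
    """1 - cosine similarity between predicted and labeled gaze vectors."""
    dot = sum(p * l for p, l in zip(predicted, labeled))
    norm = math.hypot(*predicted) * math.hypot(*labeled)
    return 1.0 - dot / norm

print(cosine_loss([0.0, 1.0], [0.0, 2.0]))  # 0.0: same direction
print(cosine_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0: orthogonal
```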
  • Patent number: 11798332
    Abstract: An information processing apparatus in the present invention includes: an acquisition unit that acquires, from a registered biometrics information group including biometrics information on a plurality of registrants, a first biometrics information group including biometrics information on a first person detected from a first image captured in a first area; and a matching unit that matches biometrics information on a second person detected from a second image captured in a second area that is different from the first area against biometrics information included in the first biometrics information group.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: October 24, 2023
    Assignee: NEC CORPORATION
    Inventors: Yumi Maeno, Takahiro Nishi, Yutaro Nashimoto