Patents Examined by Michael J Vanchy, Jr.
  • Patent number: 11957448
    Abstract: A diagnostic tool and methods of using the tool are provided to quantify an amount of nasal collapse in a patient. The diagnostic tool includes a mask with an endoscope port and an opening to allow air flow, an endoscope with a camera adapted to take an image of the nasal valve, and an air flow sensor adapted to measure an inhalation rate of the patient. The diagnostic tool can quantify the change in size of the nasal valve between inhalation and zero flow by calculating a percentage difference in the area, or in one or more dimensions, of the nasal valve between the two states.
    Type: Grant
    Filed: April 4, 2023
    Date of Patent: April 16, 2024
    Assignee: Spirox, Inc.
    Inventors: Scott J. Baron, Michael H. Rosenthal
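    The size-difference quantification above reduces to a percentage change between the nasal valve's area (or a dimension) at zero flow and during inhalation. A minimal sketch of that arithmetic, assuming the areas have already been measured from the endoscope images (the function and variable names are illustrative, not from the patent):

    ```python
    def percent_collapse(area_zero_flow: float, area_inhalation: float) -> float:
        """Percentage difference between the nasal valve area at zero flow
        and during inhalation; larger values indicate more collapse."""
        if area_zero_flow <= 0:
            raise ValueError("zero-flow area must be positive")
        return 100.0 * (area_zero_flow - area_inhalation) / area_zero_flow

    # Example: a valve measuring 80 mm^2 at rest and 52 mm^2 during inhalation
    print(percent_collapse(80.0, 52.0))  # 35.0 -> 35% reduction in area
    ```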
  • Patent number: 11961329
    Abstract: The disclosure includes: inputting a first image obtained by capturing an object of authentication moving in a specific direction; inputting a second image, of at least one eye, obtained by capturing the right eye or the left eye of the object; determining whether the second image is of the left eye or the right eye of the object based on information including the first image, and outputting a determination result associated with the second image as left/right information; comparing characteristic information corresponding to the left/right information, acquired from a memory that stores characteristic information of the right eye and the left eye of an object to be authenticated, with characteristic information associated with the left/right information, and calculating a verification score; and authenticating the object captured in the first image and the second image based on the verification score, and outputting an authentication result.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: April 16, 2024
    Assignee: NEC CORPORATION
    Inventors: Takashi Shibata, Shoji Yachida, Chisato Funayama, Masato Tsukada, Yuka Ogino, Keiichi Chono, Emi Kitagawa, Yasuhiko Yoshida, Yusuke Mori
  • Patent number: 11941838
    Abstract: The present disclosure provides methods, apparatuses, devices, and storage media for predicting correlation between objects. The method can include: detecting a first object, a second object, and a third object in a target image, wherein the first object and the second object represent different body parts and the third object indicates a body object; determining a joint bounding box surrounding the first object, the second object, and the third object; and predicting a correlation between the first object and the second object based on a region of the target image corresponding to the joint bounding box.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: March 26, 2024
    Assignee: SENSETIME INTERNATIONAL PTE. LTD.
    Inventors: Chunya Liu, Xuesen Zhang, Bairun Wang, Jinghuan Chen
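    A hedged sketch of the joint-bounding-box step: the smallest axis-aligned box enclosing the detected body-part and body boxes, and the image region cropped from it for the correlation predictor (NumPy; names are illustrative):

    ```python
    import numpy as np

    def joint_bounding_box(boxes):
        """Smallest axis-aligned box enclosing all input boxes.

        Each box is (x1, y1, x2, y2). This mirrors the abstract's 'joint
        bounding box surrounding the first, second, and third objects'.
        """
        boxes = np.asarray(boxes, dtype=float)
        x1, y1 = boxes[:, 0].min(), boxes[:, 1].min()
        x2, y2 = boxes[:, 2].max(), boxes[:, 3].max()
        return x1, y1, x2, y2

    def crop_joint_region(image, boxes):
        """Region of the target image used to predict the correlation."""
        x1, y1, x2, y2 = map(int, joint_bounding_box(boxes))
        return image[y1:y2, x1:x2]

    # hand_box, face_box, body_box would come from the detector, e.g.:
    # region = crop_joint_region(target_image, [hand_box, face_box, body_box])
    ```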
  • Patent number: 11922730
    Abstract: The present disclosure provides methods for signature and identity verification and authentication. The system may comprise a plurality of visual capture devices, virtual data, and a plurality of virtual databases. The plurality of visual capture devices may capture a photo or video of the signee, the signature, a witness, or a combination thereof. The system may comprise a plurality of auxiliary authentication components for recording data such as the date, time, and location of the signature verification, as non-limiting examples. The virtual data may comprise visual data and other metadata. The method may comprise uploading the virtual data collected during the signature verification to a blockchain, where a signature authentication may occur or be recorded. Auxiliary authentication devices may include a plurality of audio capture devices, a plurality of geospatial capture devices, such as accelerometers or GPS, a plurality of pressure sensors, or any combination thereof, as non-limiting examples.
    Type: Grant
    Filed: September 4, 2021
    Date of Patent: March 5, 2024
    Inventors: Simon Levin, Robert Davidoff
  • Patent number: 11906286
    Abstract: The invention discloses a deep learning-based temporal phase unwrapping method for fringe projection profilometry. First, four sets of three-step phase-shifting fringe patterns with different frequencies (1, 8, 32, and 64) are projected onto the tested objects. The three-step phase-shifting fringe images acquired by the camera are processed with a three-step phase-shifting algorithm to obtain the wrapped phase map. Then, a multi-frequency temporal phase unwrapping (MF-TPU) algorithm is used to unwrap the wrapped phase map and obtain the fringe order map of the high-frequency phase with 64 periods. A residual convolutional neural network is built; its input data are the wrapped phase maps with frequencies of 1 and 64, and its output data are the fringe order map of the high-frequency phase with 64 periods. Finally, a training dataset and a validation dataset are built to train and validate the network.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: February 20, 2024
    Assignee: NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Qian Chen, Chao Zuo, Shijie Feng, Yuzhen Zhang, Guohua Gu
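    For context, a sketch of the two classical building blocks the abstract references: the least-squares phase-shifting formula for the wrapped phase (here with N = 3 steps) and the MF-TPU fringe-order calculation that the residual network is trained to reproduce. These are the textbook formulas; the variable names and code structure are illustrative, not taken from the patent.

    ```python
    import numpy as np

    def wrapped_phase(images):
        """Least-squares wrapped phase from N phase-shifted fringe images
        I_n = A + B*cos(phi + 2*pi*n/N), n = 0..N-1 (here N = 3)."""
        N = len(images)
        deltas = 2 * np.pi * np.arange(N) / N
        num = -sum(I * np.sin(d) for I, d in zip(images, deltas))
        den = sum(I * np.cos(d) for I, d in zip(images, deltas))
        return np.arctan2(num, den)           # wrapped to (-pi, pi]

    def mf_tpu_unwrap(phi_low, phi_high, freq_ratio):
        """MF-TPU: use a lower-frequency (already unambiguous) phase to
        recover the fringe order k of the high-frequency wrapped phase."""
        k = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
        return phi_high + 2 * np.pi * k       # unwrapped high-frequency phase

    # e.g. phi_1 from the unit-frequency patterns, phi_64 from the 64-period
    # patterns: phi_unwrapped = mf_tpu_unwrap(phi_1, phi_64, freq_ratio=64)
    ```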
  • Patent number: 11907337
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for realizing a multimodal image classifier. In an aspect, a method includes, for each image of a plurality of images: processing the image with a textual generator model to obtain a set of phrases descriptive of the content of the image, wherein each phrase is one or more terms; processing the set of phrases with a textual embedding model to obtain an embedding of predicted text for the image; and processing the image with an image embedding model to obtain an embedding of the image pixels. A multimodal image classifier is then trained on the embeddings of predicted text and the embeddings of image pixels to produce, as output, labels of an output taxonomy that classify an image given the image as input.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Ariel Fuxman, Aleksei Timofeev, Zhen Li, Chun-Ta Lu, Manan Shah, Chen Sun, Krishnamurthy Viswanathan, Chao Jia
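    A hedged sketch of the fusion idea: embed the predicted text and the image pixels separately, concatenate the two embeddings, and fit a classifier over the output taxonomy. The embedding functions below are random placeholders standing in for the pretrained models the abstract describes, and scikit-learn's LogisticRegression stands in for the classifier.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def embed_predicted_text(image) -> np.ndarray:
        """Placeholder for: textual generator -> phrases -> text embedding."""
        return rng.normal(size=128)

    def embed_image_pixels(image) -> np.ndarray:
        """Placeholder for the image (pixel) embedding model."""
        return rng.normal(size=256)

    def fuse(text_emb: np.ndarray, image_emb: np.ndarray) -> np.ndarray:
        """One simple fusion choice: concatenate the two modality embeddings."""
        return np.concatenate([text_emb, image_emb])

    def train_multimodal_classifier(images, labels):
        X = np.stack([fuse(embed_predicted_text(im), embed_image_pixels(im))
                      for im in images])
        clf = LogisticRegression(max_iter=1000)  # classifier over the taxonomy
        clf.fit(X, labels)
        return clf
    ```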
  • Patent number: 11906436
    Abstract: A method of determining analyte concentration in a body fluid with a mobile device having a camera. A user is prompted to apply body fluid to an optical test strip and then waits a predetermined minimum waiting time. The camera captures an image of part of the test field having the body fluid applied thereto. Analyte concentration is determined based on the image captured. The determination includes estimating a point in time of sample application to the test field by taking into account time-dependent information derived from the image captured using a first color channel of a color space. The determination also estimates the concentration of the analyte by taking into account concentration-dependent information derived from the image using a second color channel of the color space.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: February 20, 2024
    Assignee: Roche Diabetes Care, Inc.
    Inventors: Bernd Limburg, Max Berg, Fredrik Hailer, Volker Tuerck, Daria Skuridina, Irina Ostapenko
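    A minimal sketch of the two-channel idea above: one color channel feeds the time estimate, the other feeds the concentration estimate. The channel indices, ROI layout, and caller-supplied curves are illustrative placeholders, not values from the patent.

    ```python
    import numpy as np

    def mean_channel(image: np.ndarray, channel: int, roi) -> float:
        """Mean value of one color channel inside the test-field ROI
        (roi = (y0, y1, x0, x1); image is an H x W x 3 array)."""
        y0, y1, x0, x1 = roi
        return float(image[y0:y1, x0:x1, channel].mean())

    def estimate_elapsed_time(image, roi, kinetics):
        """Estimate how long ago the sample was applied from the first
        color channel, via an inverse kinetic curve supplied by the caller."""
        return kinetics(mean_channel(image, 0, roi))

    def estimate_concentration(image, roi, calibration):
        """Estimate the analyte concentration from the second color channel,
        via a calibration curve supplied by the caller."""
        return calibration(mean_channel(image, 1, roi))
    ```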
  • Patent number: 11900697
    Abstract: The technology relates to approaches for determining appropriate stopping locations at intersections for vehicles operating in a self-driving mode. While many intersections have stop lines painted on the roadway, many others have no such lines. Even if a stop line is present, the physical location may not match what is in stored map data, which may be out of date due to construction or line repainting. Aspects of the technology employ a neural network that uses input training data and detected sensor data to perform classification, localization, and uncertainty estimation. Based on these processes, the system is able to evaluate distribution information for possible stop locations. The vehicle uses such information to determine an optimal stop point, which may or may not correspond to a stop line in the map data. This information is also used to update the existing map data, which can be shared with other vehicles.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: February 13, 2024
    Assignee: Waymo LLC
    Inventors: Romain Thibaux, David Harrison Silver, Congrui Hetang
  • Patent number: 11900659
    Abstract: A selecting unit selects first moving image data and second moving image data from moving image data composed of a plurality of frame images. A first generating unit generates first training data, which is labeled data relating to a specific recognition target, from the frame images composing the first moving image data. A learning unit trains a first model that recognizes the specific recognition target by using the first training data. A second generating unit generates second training data, which is labeled data relating to the specific recognition target, from the frame images composing the second moving image data by using the first model.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: February 13, 2024
    Assignee: NEC CORPORATION
    Inventor: Tetsuo Inoshita
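    A hedged sketch of the two-stage flow above: train a first model on labeled data from the first moving image data, then use it to generate labels (the second training data) for the second moving image data. The feature representation and model family are illustrative stand-ins.

    ```python
    from sklearn.ensemble import RandomForestClassifier

    def two_stage_training(first_features, first_labels, second_features):
        """first_features / second_features: (n, d) arrays of per-frame
        feature vectors extracted from the first and second moving image
        data; first_labels: labels for the first set."""
        first_model = RandomForestClassifier()
        first_model.fit(first_features, first_labels)      # first training data

        # Second training data: labels generated by the first model
        second_labels = first_model.predict(second_features)
        return first_model, list(zip(second_features, second_labels))
    ```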
  • Patent number: 11893809
    Abstract: A method of re-identifying a rough gemstone comprises providing a 3D model of a first rough gemstone; generating a series of virtual 2D silhouette images of the 3D model; processing each 2D image of the series of virtual 2D silhouette images to obtain a dataset associated with the first rough gemstone; and comparing the dataset of the first rough gemstone with an existing dataset of a rough gemstone. Where the dataset of the first rough gemstone and the existing dataset match each other, the method comprises re-identifying the first rough gemstone as the same rough gemstone from which the existing dataset was obtained.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: February 6, 2024
    Assignee: De Beers UK Ltd
    Inventor: Qi He Hong
  • Patent number: 11875486
    Abstract: Provided in the present disclosure are an image brightness statistics method and an imaging device, in the field of image processing. The method includes: acquiring the bit width of the pixel brightness values of an image to be processed and the maximum acceptable bit width of a block random access memory; dividing the bits of each pixel of the image into multiple groups of bits so that the bit width of each group is less than or equal to the maximum acceptable bit width; performing brightness histogram statistics on the pixel data of each group to produce a brightness histogram component corresponding to that group; determining a brightness evaluation value component for each group based on the group's brightness histogram component and the number of pixels of the image; and determining a brightness evaluation value of the image based on the brightness evaluation value components.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: January 16, 2024
    Assignee: BEIJING TUSEN ZHITU TECHNOLOGY CO., LTD.
    Inventor: Yu Zhang
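    A worked example of the bit-group idea above: 16-bit pixels are split into two 8-bit groups so each per-group histogram fits a narrow block RAM, and the per-group mean values recombine (with their bit-position weights) into the overall brightness evaluation value. The group sizes are illustrative; the recombination shown recovers the plain mean brightness.

    ```python
    import numpy as np

    def grouped_brightness(image: np.ndarray, total_bits=16, group_bits=8):
        """Split each pixel's bits into groups no wider than group_bits,
        build one histogram per group, and recombine the per-group means
        with their bit-position weights into the brightness evaluation
        value (here: the mean pixel value)."""
        n_groups = total_bits // group_bits
        pixels = image.astype(np.int64).ravel()
        n = pixels.size

        evaluation = 0.0
        for g in range(n_groups):
            group = (pixels >> (g * group_bits)) & ((1 << group_bits) - 1)
            hist = np.bincount(group, minlength=1 << group_bits)       # per-group histogram
            component = (hist * np.arange(1 << group_bits)).sum() / n  # group mean
            evaluation += component * (1 << (g * group_bits))          # bit-position weight
        return evaluation

    img = np.random.randint(0, 2**16, size=(4, 4), dtype=np.uint16)
    assert np.isclose(grouped_brightness(img), img.mean())
    ```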
  • Patent number: 11861925
    Abstract: Systems and methods are disclosed to receive a training data set comprising a plurality of document images, wherein each document image of the plurality of document images is associated with respective metadata identifying a document field containing a variable text; generate, by processing the plurality of document images, a first heat map represented by a data structure comprising a plurality of heat map elements corresponding to a plurality of document image pixels, wherein each heat map element stores a counter of a number of document images in which the document field contains a document image pixel associated with the heat map element; receive an input document image; and identify, within the input document image, a candidate region comprising the document field, wherein the candidate region comprises a plurality of input document image pixels corresponding to heat map elements satisfying a threshold condition.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: January 2, 2024
    Assignee: ABBYY Development Inc.
    Inventors: Stanislav Semenov, Mikhail Lanin
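    A hedged sketch of the heat-map construction above: increment a per-pixel counter for every training image whose metadata places the document field over that pixel, then threshold the counters to obtain the candidate region for an input image. The box format and threshold rule are illustrative.

    ```python
    import numpy as np

    def build_field_heat_map(image_shape, field_boxes):
        """Each heat-map element counts how many training document images
        have the target field covering that pixel. field_boxes holds one
        (x1, y1, x2, y2) box per training image (illustrative metadata)."""
        heat = np.zeros(image_shape, dtype=np.int32)
        for x1, y1, x2, y2 in field_boxes:
            heat[y1:y2, x1:x2] += 1
        return heat

    def candidate_region_mask(heat, min_count):
        """Pixels whose counters satisfy the threshold condition form the
        candidate region expected to contain the field in an input image."""
        return heat >= min_count

    # heat = build_field_heat_map((1100, 850), boxes_from_training_metadata)
    # mask = candidate_region_mask(heat, min_count=int(0.6 * len(boxes_from_training_metadata)))
    ```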
  • Patent number: 11847806
    Abstract: Extraction of desired text information from an image (scene text extraction) can be performed and managed. An information management component (IMC) can determine an anchor word based on analysis of an image. To facilitate determining the desired text information in the image, the IMC can re-orient the image to zero or substantially zero degrees if it determines that the orientation is skewed. The IMC can use a neural network to determine and apply bounding boxes to text strings in the image. Using a rules-based approach or machine learning techniques employing a trained machine learning component, the IMC can use the anchor word along with inline grouping of textual information in the image, deep text recognition analysis, or bounding box prediction to determine or predict the desired text information in the image. The IMC can facilitate presenting the desired text information, the anchor word, or other information obtained from the image in an editable format.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: December 19, 2023
    Assignee: DELL PRODUCTS, L.P.
    Inventor: Lee Daniel Saeugling
  • Patent number: 11816829
    Abstract: A novel disparity computation technique is presented in which multiple orthogonal disparity maps, generated from approximately orthogonal decomposition feature spaces, collaboratively generate a composite disparity map. Using an approximately orthogonal feature set extracted from such feature spaces produces an approximately orthogonal set of disparity maps that can be composited together to produce a final disparity map. Various methods for dimensioning scenes and objects are also presented. One approach extracts the top and bottom vertices of a cuboid, along with the set of lines whose intersections define those points. It then defines a unique box from these two intersections and the associated lines. Orthographic projection is then attempted to recenter the box perspective. This is followed by extraction of the three-dimensional information associated with the box, and finally the dimensions of the box are computed. The same concepts can apply to hallways, rooms, and any other object.
    Type: Grant
    Filed: December 4, 2022
    Date of Patent: November 14, 2023
    Assignee: Golden Edge Holding Corporation
    Inventors: Tarek El Dokor, Jordan Cluster
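    The abstract does not spell out the rule for compositing the orthogonal disparity maps; one simple, purely illustrative choice is a per-pixel median across them:

    ```python
    import numpy as np

    def composite_disparity(disparity_maps):
        """Combine disparity maps computed from approximately orthogonal
        feature spaces into one final map. A per-pixel median is just one
        plausible compositing rule, not the one claimed in the patent."""
        stack = np.stack(disparity_maps, axis=0)   # (n_maps, H, W)
        return np.median(stack, axis=0)
    ```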
  • Patent number: 11810401
    Abstract: A method for enhancing user liveness detection is provided that includes receiving image data of a user that includes items of metadata. Moreover, the method includes comparing each item of metadata associated with the received image data against a corresponding item of metadata associated with record image data of the user, and determining whether each item of metadata associated with the received image data matches the corresponding item of metadata. In response to determining at least one item of metadata associated with the received image data does not match the corresponding item of metadata, the method deems the received image data to be genuine and from a live person. In response to determining all items of metadata associated with the received image match the corresponding item of metadata, the method deems the received image data to be fraudulent and not from a living person.
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: November 7, 2023
    Assignee: Daon Technology
    Inventor: Raphael A. Rodriguez
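    The decision rule above is deliberately inverted: an exact metadata match with the stored record suggests a replayed copy rather than a fresh capture. A short sketch of that rule (the metadata key names are illustrative):

    ```python
    def is_live_capture(received_meta: dict, record_meta: dict,
                        keys=("timestamp", "gps", "device_id")) -> bool:
        """Per the abstract's rule: if every compared metadata item matches
        the record image's metadata exactly, the submission looks like a copy
        of the stored image and is deemed fraudulent; any mismatch indicates
        a fresh (live) capture."""
        all_match = all(received_meta.get(k) == record_meta.get(k) for k in keys)
        return not all_match
    ```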
  • Patent number: 11797084
    Abstract: This application discloses a method for training a gaze tracking model, including: obtaining a training sample set; processing the eye sample images in the training sample set with an initial gaze tracking model to obtain a predicted gaze vector for each eye sample image; determining a model loss according to the cosine distance between the predicted gaze vector and the labeled gaze vector for each eye sample image; and iteratively adjusting one or more reference parameters of the initial gaze tracking model until the model loss meets a convergence condition, to obtain a target gaze tracking model. According to the solution provided in this application, the gaze tracking procedure is simplified, and the difference between a predicted value and a labeled value is better represented by using the cosine distance as the model loss, improving the prediction accuracy of the gaze tracking model.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: October 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zheng Zhou, Xing Ji, Yitong Wang, Xiaolong Zhu, Min Luo
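    A sketch of the loss described above: the cosine distance between predicted and labeled gaze vectors, averaged over a batch (NumPy; names are illustrative):

    ```python
    import numpy as np

    def cosine_distance_loss(pred: np.ndarray, label: np.ndarray, eps=1e-8) -> float:
        """Model loss from the abstract: the cosine distance between the
        predicted gaze vector and the labeled gaze vector, averaged over
        the batch. pred and label have shape (batch, 3)."""
        cos = np.sum(pred * label, axis=1) / (
            np.linalg.norm(pred, axis=1) * np.linalg.norm(label, axis=1) + eps)
        return float(np.mean(1.0 - cos))   # 0 when directions agree, 2 when opposite

    # e.g. loss = cosine_distance_loss(model(eye_images), labeled_gaze_vectors)
    ```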
  • Patent number: 11798332
    Abstract: An information processing apparatus in the present invention includes: an acquisition unit that acquires, from a registered biometrics information group including biometrics information on a plurality of registrants, a first biometrics information group including biometrics information on a first person detected from a first image captured in a first area; and a matching unit that matches biometrics information on a second person, detected from a second image captured in a second area different from the first area, against biometrics information included in the first biometrics information group.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: October 24, 2023
    Assignee: NEC CORPORATION
    Inventors: Yumi Maeno, Takahiro Nishi, Yutaro Nashimoto
  • Patent number: 11798265
    Abstract: A teaching data correction device sets, for teaching data indicating an object area where an object of interest exists in a training image used for learning, a correction candidate area that is a candidate correction of the object area. The teaching data correction device generates an output machine based on the correction candidate area, the output machine being trained to output, when an image is input to it, an identification result or a regression result relating to the object of interest. The teaching data correction device then updates the teaching data with the correction candidate area based on the accuracy of the output machine, the accuracy being calculated from the identification result or the regression result output by the output machine.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: October 24, 2023
    Assignee: NEC CORPORATION
    Inventors: Hideaki Sato, Soma Shiraishi, Yasunori Babazaki, Jun Piao
  • Patent number: 11790695
    Abstract: Devices, systems, and methods are provided for enhanced video annotations using image analysis. A method may include identifying, by a first device, first faces of first video frames, and second faces of second video frames. The method may include determining a first score for the first video frames, the first score indicative of a first number of faces to label, the first number of faces represented by the first video frames, and determining a second score for the second video frames, the second score indicative of a second number of faces to label. The method may include selecting the first video frames for face labeling, and receiving a first face label for the first face. The method may include generating a second face label for the second faces. The method may include sending the first face label and the second face label to a second device for presentation.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: October 17, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Abhinav Aggarwal, Yash Pandya, Laxmi Shivaji Ahire, Lokesh Amarnath Ravindranathan, Manivel Sethu, Muhammad Raffay Hamid
  • Patent number: 11790655
    Abstract: A video sampling method, including sampling a video based on a sampling window to obtain a current sequence of sampled images; acquiring action parameters corresponding to the current sequence of sampled images; adjusting the sampling window according to the action parameters; and sampling the video based on the adjusted sampling window.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: October 17, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jiahui Yuan, Wei Wen, Jianhua Fan