Patents Examined by Timothy Choi
  • Patent number: 11494920
    Abstract: A method for detecting adverse conditions associated with a device includes receiving, at one or more processing devices at one or more locations, one or more image frames; receiving a set of signals representing outputs of one or more sensors of a device; estimating, based on the one or more image frames, a first set of one or more motion values; estimating, based on the set of signals, a second set of one or more motion values; determining that a degree of correlation between (i) a first motion represented by the first set of one or more motion values and (ii) a second motion represented by the second set of one or more motion values fails to satisfy a threshold condition; and in response to determining that the degree of correlation fails to satisfy the threshold condition, determining presence of an adverse condition associated with the device.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: November 8, 2022
    Assignee: Jumio Corporation
    Inventors: Reza R. Derakhshani, Vikas Gottemukkula, Yash Joshi, Sashi Kanth Saripalle, Tetyana Anisimova
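    The correlation check described in this abstract lends itself to a small illustration. The sketch below is a minimal, hypothetical reading of that step: it assumes the two motion estimates have already been reduced to aligned per-frame scalar series, and it uses a Pearson correlation with an arbitrary threshold; the patent specifies neither choice.

```python
import numpy as np

def adverse_condition_detected(image_motion, sensor_motion, threshold=0.5):
    """Flag an adverse condition when the two motion estimates disagree.

    image_motion  -- per-frame motion values estimated from the image frames
    sensor_motion -- per-frame motion values estimated from the device sensors
    threshold     -- minimum correlation treated as "consistent" (illustrative)
    """
    image_motion = np.asarray(image_motion, dtype=float)
    sensor_motion = np.asarray(sensor_motion, dtype=float)
    # Pearson correlation as one possible "degree of correlation".
    corr = np.corrcoef(image_motion, sensor_motion)[0, 1]
    # Failing the threshold condition indicates an adverse condition.
    return corr < threshold

# The camera reports motion that the inertial sensors never see, e.g. when a
# pre-recorded video is replayed to the camera.
camera_motion = [1.0, 0.2, 0.9, 0.1, 1.1, 0.3]
sensor_motion = [0.0, 0.0, 0.1, 0.0, 0.0, 0.1]
print(adverse_condition_detected(camera_motion, sensor_motion))  # True
```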
  • Patent number: 11488312
    Abstract: A system provides tracking during the growth, harvesting, processing, packaging and shipping stages of a grown product's lifecycle. Various sets of sensors are associated with the grown product and/or the purchaser's order. The sensors enable a purchaser to access live or real-time video displays of a group of products associated with the purchaser.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: November 1, 2022
    Assignee: Starline Global Trade, Inc.
    Inventor: Matthew O'Neill
  • Patent number: 11475603
    Abstract: An apparatus and method for three-dimensional (3D) geometric data compression, includes storage of a first 3D geometric mesh of a first data size, which includes a 3D representation of a plurality of objects in a 3D space. The apparatus includes circuitry that receives motion tracking data of the plurality of objects from a plurality of position trackers. The motion tracking data includes motion information of each of the plurality of objects from a first position to a second position in the 3D space. The 3D geometric mesh is segmented into a plurality of 3D geometric meshes corresponding to the plurality of objects, based on the motion tracking data. As a result of the segmentation of the 3D geometric mesh before encoding and the use of motion tracking data, the plurality of 3D geometric meshes are efficiently encoded.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: October 18, 2022
    Assignee: SONY CORPORATION
    Inventor: Danillo Graziosi
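    The segmentation step in this abstract can be illustrated with a simplified sketch. Assuming translation-only motion per tracked object and vertex positions sampled at the two tracked instants (both assumptions of this illustration, not details from the patent), each vertex is assigned to the object whose reported motion best explains its displacement:

```python
import numpy as np

def segment_mesh_by_motion(verts_t0, verts_t1, obj_motion):
    """Assign each vertex to the tracked object whose (translation-only)
    motion best explains that vertex's displacement between two instants.

    verts_t0, verts_t1 -- (N, 3) vertex positions at the first/second instant
    obj_motion         -- (K, 3) per-object translations from the trackers
    Returns an (N,) array of object indices, i.e. one sub-mesh per object.
    """
    displacement = verts_t1 - verts_t0                         # (N, 3)
    # Error of explaining each vertex displacement by each object's motion.
    err = np.linalg.norm(displacement[:, None, :] - obj_motion[None, :, :],
                         axis=-1)                              # (N, K)
    return np.argmin(err, axis=1)

# Two objects: one moves along +x, the other along +z.
verts_t0 = np.array([[0, 0, 0], [1, 0, 0], [5, 0, 0], [6, 0, 0]], float)
motion   = np.array([[1, 0, 0], [0, 0, 1]], float)
verts_t1 = verts_t0 + np.array([motion[0], motion[0], motion[1], motion[1]])
print(segment_mesh_by_motion(verts_t0, verts_t1, motion))  # [0 0 1 1]
```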
  • Patent number: 11423651
    Abstract: Described is a system and method for accurate image and/or video scene classification. More specifically, described is a system that makes use of a specialized convolutional neural network (CNN)-based technique for the fusion of bottom-up whole-image features and top-down entity classification. When the two parallel and independent processing paths are fused, the system provides an accurate classification of the scene as depicted in the image or video.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: August 23, 2022
    Assignee: HRL LABORATORIES, LLC
    Inventors: Ryan M. Uhlenbrock, Deepak Khosla, Yang Chen, Fredy Monterroza
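    A minimal sketch of the fusion idea, under the assumption that the bottom-up path yields a whole-image feature vector and the top-down path yields an entity-class histogram; the concatenation-plus-linear-classifier fusion and all dimensions below are illustrative stand-ins, not the patent's CNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(whole_image_feat, entity_hist, W, b):
    """Late fusion: concatenate the bottom-up whole-image feature vector with
    the top-down entity-class histogram, then apply a linear scene classifier.
    """
    fused = np.concatenate([whole_image_feat, entity_hist])
    logits = W @ fused + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                    # scene-class probabilities

# Illustrative dimensions: 128-d image feature, 20 entity classes, 5 scenes.
img_feat    = rng.normal(size=128)            # e.g. CNN global-pooling output
entity_hist = rng.random(20)                  # e.g. detected-object class counts
W, b = rng.normal(size=(5, 148)) * 0.01, np.zeros(5)
print(fuse_and_classify(img_feat, entity_hist, W, b))
```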
  • Patent number: 11423671
    Abstract: Apparatuses, systems and methods are provided for detecting vehicle occupant actions. More particularly, apparatuses, systems and methods are provided for detecting vehicle occupant actions based on digital image data.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: August 23, 2022
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Aaron Scott Chan, Kenneth J. Sanchez
  • Patent number: 11417108
    Abstract: The present invention is a two-wheel vehicle riding person number determination system including an imaging means installed in a predetermined position and configured to image a two-wheel vehicle that travels on a road, and a two-wheel vehicle riding person number determining means configured to process an image from the imaging means, extract a contour shape of an upper portion of the two-wheel vehicle traveling on the road, detect humped shapes corresponding to the heads of persons riding on the two-wheel vehicle from that contour shape, and determine, on the basis of the humped shapes, whether at least two persons are riding on the two-wheel vehicle.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: August 16, 2022
    Assignee: NEC CORPORATION
    Inventors: Hiroyoshi Miyano, Tetsuo Inoshita
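    The hump-counting decision can be sketched in a few lines. The example below assumes the upper contour has already been extracted as a 1D height profile per horizontal position; the neighbour-comparison peak test and the height filter are illustrative stand-ins for the patent's humped-shape detection.

```python
import numpy as np

def count_head_humps(upper_contour, min_height=0.0):
    """Count hump-shaped local maxima in the upper-contour profile of a
    two-wheel vehicle (one hump per rider's head, in the simplest case).

    upper_contour -- 1D array: contour height at each horizontal position
    min_height    -- ignore bumps below this height (illustrative filter)
    """
    y = np.asarray(upper_contour, dtype=float)
    # Interior points strictly higher than both neighbours.
    peaks = (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]) & (y[1:-1] >= min_height)
    return int(np.count_nonzero(peaks))

def two_or_more_riders(upper_contour, min_height=0.0):
    return count_head_humps(upper_contour, min_height) >= 2

# Two head-like humps in the contour -> flagged as two riders or more.
contour = [0, 1, 3, 5, 3, 2, 4, 6, 4, 1, 0]
print(count_head_humps(contour, 3), two_or_more_riders(contour, 3))  # 2 True
```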
  • Patent number: 11417095
    Abstract: An image recognition method is provided. The method includes obtaining predicted locations of joints of a target person in a to-be-recognized image based on a joint prediction model, where the joint prediction model is pre-constructed by: obtaining a plurality of sample images; inputting training features of the sample images and a body model feature to a neural network and obtaining predicted locations of joints in the sample images outputted by the neural network; updating a body extraction parameter and an alignment parameter; and inputting the training features of the sample images and the body model feature to the neural network to obtain the joint prediction model.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: August 16, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiaolong Zhu, Kaining Huang, Jingmin Luo, Lijian Mei, Shenghui Huang, Yongsen Zheng, Yitong Wang, Haozhi Huang
  • Patent number: 11398031
    Abstract: A weighting for a roadmap method is automatically determined. A first or a second weighting image is generated from an anatomical image and an object image. For this purpose, a prespecified first weighting value is assigned to pixels belonging to a prespecified anatomical feature or to an instrument. Other pixels are assigned increasingly small weighting values at increasing distance from the anatomical feature or from the instrument toward the edge of the respective recording region, according to a prespecified monotonically decreasing function of location. An overall weighting image is generated by combining the first and the second weighting images with one another; the overall weighting image and/or a region of interest determined using the overall weighting image are then provided as input data for an image processing algorithm.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: July 26, 2022
    Assignee: Siemens Healthcare GmbH
    Inventor: Yiannis Kyriakou
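    A brief sketch of the weighting-image construction, using SciPy's Euclidean distance transform and an exponential falloff as one possible monotonically decreasing function; the function choice, the decay rate, and the max-combination of the two weighting images are assumptions of this illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighting_image(feature_mask, w_max=1.0, decay=0.05):
    """Build a weighting image: w_max on the feature/instrument pixels and a
    monotonically decreasing weight with distance from them.

    feature_mask -- boolean array, True on the anatomical feature / instrument
    decay        -- decay rate of the (illustrative) exponential falloff
    """
    # Euclidean distance of every pixel to the nearest feature pixel.
    dist = distance_transform_edt(~feature_mask)
    return w_max * np.exp(-decay * dist)

def overall_weighting_image(w_anatomy, w_instrument):
    # One simple way to combine the two weighting images.
    return np.maximum(w_anatomy, w_instrument)

mask_a = np.zeros((64, 64), bool); mask_a[30:34, 30:34] = True
mask_i = np.zeros((64, 64), bool); mask_i[10:12, 40:60] = True
w = overall_weighting_image(weighting_image(mask_a), weighting_image(mask_i))
print(w.shape, w.max(), round(float(w.min()), 3))
```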
  • Patent number: 11373361
    Abstract: The present invention relates to image processing for enhancing ultrasound images. In order to provide image data showing the current situation, for example in a region of interest of a patient, an image processing device (10) for enhancing ultrasound images is provided that comprises an image data input unit (12), a central processing unit (14), and a display unit (16). The image data input unit is configured to provide an ultrasound image of a region of interest of an object, and to provide an X-ray image of the region of interest of the object. The central processing unit is configured to select a predetermined image area in the X-ray image, to register the ultrasound image and the X-ray image, to detect the predetermined area in the ultrasound image based on the registered selected predetermined image area, and to highlight at least a part of the detected area in the ultrasound image to generate a boosted ultrasound image.
    Type: Grant
    Filed: October 30, 2013
    Date of Patent: June 28, 2022
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Olivier Pierre Nempont, Pascal Yves Francois Cathier, Raoul Florent
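    The highlighting ("boosting") step can be sketched once registration and area detection are assumed already done, i.e. the detected area is available as a mask in ultrasound coordinates; the brightness gain below is illustrative.

```python
import numpy as np

def boost_ultrasound(ultrasound, registered_mask, gain=1.6):
    """Highlight a detected area in an ultrasound image.

    ultrasound      -- 2D float image with values in [0, 1]
    registered_mask -- boolean mask of the area found via the registered
                       X-ray image (registration itself is assumed done)
    gain            -- illustrative brightness boost inside the area
    """
    boosted = ultrasound.copy()
    boosted[registered_mask] = np.clip(boosted[registered_mask] * gain, 0, 1)
    return boosted

us = np.random.default_rng(1).random((128, 128)) * 0.5
mask = np.zeros((128, 128), bool); mask[40:80, 50:90] = True
out = boost_ultrasound(us, mask)
print(out[60, 70] / us[60, 70])  # ~1.6 inside the highlighted region
```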
  • Patent number: 11367282
    Abstract: A subtitle extraction method includes decoding a video to obtain video frames; performing an adjacency operation in the subtitle arrangement direction on pixels in the video frames to obtain adjacency regions in the video frames; determining which video frames include the same subtitle based on the adjacency regions; and determining subtitle regions in those frames based on the distribution positions of the adjacency regions. The method also includes constructing a component tree for at least two channels of the subtitle regions and using the constructed component tree to extract a contrasting extremal region for each channel; performing color enhancement processing on the contrasting extremal regions of the at least two channels to form color-enhanced contrasting extremal regions; and extracting the subtitle by merging the color-enhanced contrasting extremal regions of the at least two channels.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: June 21, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Xingxing Wang
  • Patent number: 11321868
    Abstract: A system for estimating a pose of one or more persons in a scene includes a camera configured to capture an image of the scene; and a data processor configured to execute computer executable instructions for: (i) receiving the image of the scene from the camera; (ii) extracting features from the image of the scene for providing inputs to a keypoint subnet and a person detection subnet; (iii) generating one or more keypoints using the keypoint subnet; (iv) generating one or more person instances using the person detection subnet; (v) assigning the one or more keypoints to the one or more person instances by learning pose structures from the image data; and (vi) determining one or more poses of the one or more persons in the scene using the assignment of the one or more keypoints to the one or more person instances.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: May 3, 2022
    Assignee: Bertec Corporation
    Inventors: Emre Akbas, Muhammed Kocabas, Muhammed Salih Karagoz, Necip Berme
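    The keypoint-to-person assignment can be illustrated with a purely geometric stand-in. The patent learns pose structures for this step; the sketch below instead assigns each keypoint to the detected person box with the nearest centre, an assumption made only for brevity.

```python
import numpy as np

def assign_keypoints_to_persons(keypoints, person_boxes):
    """Assign each keypoint to a detected person instance.

    keypoints    -- (K, 2) array of (x, y) keypoint locations
    person_boxes -- (P, 4) array of (x1, y1, x2, y2) person detections
    Returns a (K,) array: index of the person box whose centre is closest.
    (A nearest-centre rule stands in for the learned assignment here.)
    """
    centers = np.column_stack([(person_boxes[:, 0] + person_boxes[:, 2]) / 2,
                               (person_boxes[:, 1] + person_boxes[:, 3]) / 2])
    d = np.linalg.norm(keypoints[:, None, :] - centers[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

kps   = np.array([[105, 60], [110, 200], [400, 90]], float)
boxes = np.array([[80, 40, 140, 260], [370, 50, 430, 280]], float)
print(assign_keypoints_to_persons(kps, boxes))  # [0 0 1]
```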
  • Patent number: 11275935
    Abstract: Disclosed are various embodiments for patent analysis applications. A computing device may be directed to parse an electronic version of a patent document having a detailed description, a claims section, and at least one drawing. In various embodiments, parsing the electronic version of the patent document may include applying an OCR process to the electronic document, obtaining a list of claim terms used in the claims section, identifying instances of the claim terms used in the detailed description, identifying a reference numeral corresponding to the claim terms from the detailed description, and identifying portions of the drawing that include the reference numeral. In response to user interaction with a claim term, a dialog may be shown proximate to the claim term, where the dialog includes a portion of the detailed description that includes the claim term and/or the drawing that comprises the reference numeral corresponding to the claim term.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: March 15, 2022
    Inventor: Michael J. Schuster
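    The reference-numeral lookup described here maps naturally to a short text-processing sketch. The regular expression below assumes the common drafting convention of a claim term immediately followed by its numeral (e.g. "controller 104"); the example text and terms are made up.

```python
import re

def find_reference_numerals(description, claim_terms):
    """Map each claim term to the reference numerals that follow it in the
    detailed description (pattern: the term immediately followed by a number).
    """
    refs = {}
    for term in claim_terms:
        pattern = re.escape(term) + r"\s+(\d+)"
        refs[term] = sorted(set(re.findall(pattern, description, re.IGNORECASE)))
    return refs

description = (
    "The widget 102 is coupled to the controller 104. In some embodiments "
    "the controller 104 drives the display panel 106."
)
claim_terms = ["widget", "controller", "display panel"]
print(find_reference_numerals(description, claim_terms))
# {'widget': ['102'], 'controller': ['104'], 'display panel': ['106']}
```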
  • Patent number: 11256958
    Abstract: A method that includes obtaining real training samples that include real images that depict real objects, obtaining simulated training samples that include simulated images that depict simulated objects, defining a training dataset that includes at least some of the real training samples and at least some of the simulated training samples, and training a machine learning model to detect subject objects in unannotated input images using the training dataset.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: February 22, 2022
    Assignee: Apple Inc.
    Inventors: Melanie S. Subbiah, Jamie R. Lesser, Nicholas E. Apostoloff
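    A minimal sketch of defining the mixed training dataset; the sampling-with-replacement strategy, the mixing fraction, and the (image, annotations) sample shape are assumptions of this illustration, not requirements stated in the abstract.

```python
import random

def build_training_dataset(real_samples, simulated_samples,
                           real_fraction=0.5, size=1000, seed=0):
    """Define a training dataset mixing real and simulated annotated samples.

    real_fraction -- illustrative mixing ratio (not specified by the patent)
    Each sample is assumed to be an (image, annotations) pair.
    """
    rng = random.Random(seed)
    n_real = int(size * real_fraction)
    dataset = (rng.choices(real_samples, k=n_real) +
               rng.choices(simulated_samples, k=size - n_real))
    rng.shuffle(dataset)
    return dataset

real = [("real_img_%d" % i, {"boxes": []}) for i in range(10)]
sim  = [("sim_img_%d" % i, {"boxes": []}) for i in range(10)]
train = build_training_dataset(real, sim, real_fraction=0.3, size=20)
print(sum(name.startswith("real") for name, _ in train), "real of", len(train))
```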
  • Patent number: 11250534
    Abstract: Advanced signal processing technology including steganographic embedding and digital watermarking is described. For an encoded image, detectability measures can be generated including a first detectability measure associated with a synchronization component strength and a second detectability measure associated with a message component strength. Such measures can be used to help determine a likelihood that the encoded image, once printed on a physical substrate, will be detectable from optical scan data representing such. Of course, other features and combinations are described as well.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: February 15, 2022
    Assignee: Digimarc Corporation
    Inventor: Vojtech Holub
  • Patent number: 11232591
    Abstract: A system generates a user hand shape model from a single depth camera. The system includes the single depth camera and a hand tracking unit. The single depth camera generates single depth image data of a user's hand. The hand tracking unit applies the single depth image data to a neural network model to generate heat maps indicating locations of hand features. The locations of hand features are used to generate a user hand shape model customized to the size and shape of the user's hand. The user hand shape model is defined by a set of principal component hand shapes defining a hand shape variation space. The limited number of principal component hand shape models reduces determination of user hand shape to a smaller number of variables, and thus provides for a fast calibration of the user hand shape model.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: January 25, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher David Twigg, Robert Y. Wang, Yuting Ye
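    The low-dimensional shape fit can be sketched as a linear least-squares problem: the user hand shape is the mean shape plus a weighted sum of principal component hand shapes, and only the few weights need to be recovered. The point-cloud representation and the synthetic data below are illustrative.

```python
import numpy as np

def fit_hand_shape(observed, mean_shape, principal_shapes):
    """Fit a user hand shape model in a low-dimensional shape space.

    observed         -- (V, 3) hand points derived from the depth data
    mean_shape       -- (V, 3) mean hand shape
    principal_shapes -- (C, V, 3) principal component hand shapes
    Returns the C shape coefficients (few variables -> fast calibration)
    and the reconstructed user hand shape model.
    """
    C = principal_shapes.shape[0]
    A = principal_shapes.reshape(C, -1).T            # (V*3, C)
    b = (observed - mean_shape).reshape(-1)          # (V*3,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    model = mean_shape + np.tensordot(coeffs, principal_shapes, axes=1)
    return coeffs, model

rng = np.random.default_rng(0)
mean = rng.normal(size=(20, 3))
pcs = rng.normal(size=(4, 20, 3))             # 4 principal component shapes
true = np.array([0.8, -0.3, 0.1, 0.0])
obs = mean + np.tensordot(true, pcs, axes=1) + rng.normal(scale=1e-3, size=(20, 3))
coeffs, _ = fit_hand_shape(obs, mean, pcs)
print(np.round(coeffs, 2))                    # close to [0.8, -0.3, 0.1, 0.0]
```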
  • Patent number: 11222241
    Abstract: An image conversion unit includes a selector and a plurality of image converters. Each image converter is formed from a machine-learning estimator and estimates, based on an image acquired under a first observation condition, an image which is presumed to be acquired under a second observation condition and which serves as a reference image. When a particular reference image is selected from among a plurality of reference images displayed on a display, the second observation condition corresponding to the selected reference image is set in an observation mechanism as the next observation condition.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: January 11, 2022
    Assignee: JEOL Ltd.
    Inventor: Fuminori Uematsu
  • Patent number: 11163982
    Abstract: A face verifying method and apparatus. The face verifying method includes detecting a face region from an input image, generating a synthesized face image by combining image information of the face region and reference image information, based on a determined masking region, extracting one or more face features from the face region and the synthesized face image, performing a verification operation with respect to the one or more face features and predetermined registration information, and indicating whether verification of the input image is successful based on a result of the performed verification operation.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: November 2, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungju Han, Minsu Ko, Deoksang Kim, Chang Kyu Choi, Jaejoon Han
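    The synthesis step, combining face-region pixels with reference image information inside the masking region, reduces to a masked composite. The occlusion band and constant reference values in the example are assumptions made for illustration.

```python
import numpy as np

def synthesize_face_image(face_region, reference_image, masking_region):
    """Combine image information of the detected face region with reference
    image information inside the determined masking region (e.g. an occluded
    part of the face), producing the synthesized face image that is used
    alongside the original face region for feature extraction.
    """
    return np.where(masking_region[..., None], reference_image, face_region)

face = np.random.default_rng(0).random((112, 112, 3))
reference = np.full((112, 112, 3), 0.5)                  # e.g. mean face values
mask = np.zeros((112, 112), bool); mask[40:60, 20:92] = True   # occluded band
synth = synthesize_face_image(face, reference, mask)
print(np.allclose(synth[50, 50], 0.5), np.allclose(synth[0, 0], face[0, 0]))
```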
  • Patent number: 11157731
    Abstract: Methods and systems are provided for defining and determining a formal and verifiable mobile document image quality and usability (MDIQU) standard, or Standard for short. The Standard ensures that a mobile image can be used in an appropriate mobile document processing application, for example mobile check deposit. To quantify usability, the Standard establishes five quality and usability grades. A mobile image capture device can capture images. A mobile device can receive information associated with one or more image quality assurance (IQA) criteria; evaluate the images to select an image satisfying the image quality criteria based on the received information; and, in response to the image satisfying the criteria, send the selected image to determine a set of IQA scores.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: October 26, 2021
    Assignee: MITEK SYSTEMS, INC.
    Inventors: Grigori Nepomniachtchi, Michael Strange
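    One way to picture the five-grade scheme is a small scoring function. The individual IQA checks, their weighting, and the grade cut-offs below are entirely illustrative; the actual boundaries are whatever the Standard defines.

```python
def usability_grade(iqa_scores, weights=None):
    """Map a set of image quality assurance (IQA) scores (each 0-100) to one
    of five quality/usability grades (5 = best). Weights and cut-offs here
    are illustrative placeholders, not the Standard's definitions.
    """
    scores = list(iqa_scores.values())
    overall = sum(scores) / len(scores) if weights is None else \
        sum(w * iqa_scores[k] for k, w in weights.items()) / sum(weights.values())
    cutoffs = [(90, 5), (75, 4), (60, 3), (40, 2)]
    return next((grade for bound, grade in cutoffs if overall >= bound), 1)

checks = {"sharpness": 88, "contrast": 92, "skew": 70, "corners_visible": 95}
print(usability_grade(checks))   # grade 4 on this illustrative scale
```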
  • Patent number: 11151448
    Abstract: A method, computer system, and a computer program product for generating a location tag for a piece of visual data using deep learning is provided. The present invention may include receiving the piece of visual data. The present invention may also include analyzing the received piece of visual data using a neural network. The present invention may then include retrieving a location for the analyzed piece of visual data from the neural network. The present invention may further include generating a plurality of metadata for the retrieved location associated with the analyzed piece of visual data, wherein the generated plurality of metadata includes the location tag.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rhonda L. Childress, Justin D. Eyster, Avery K. Rowe, Priyanka Sarkar, Christopher E. Whitridge
  • Patent number: 11138442
    Abstract: Embodiments of a method and system described herein enable capture of video data streams from multiple, different video data source devices and the processing of the video data streams. The video data streams are merged such that various data protocols can all be processed with the same worker processes on different types of operating systems, which are typically distributed. In an embodiment, the multiple video data sources comprise at least one mobile device executing a video sensing application that produces a video data stream for processing by video analysis worker processes. The processing includes automatically detecting moving objects in a video data stream, and further tracking and analyzing the moving objects.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: October 5, 2021
    Assignee: Placemeter, Inc.
    Inventors: Alexandre Winter, Ugo Jardonnet, Tuan Hue Thi, Niklaus Haldimann
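    A toy version of the merge-then-analyze idea: frames from heterogeneous sources are normalized to one common worker format, after which a simple frame-differencing pass stands in for the moving-object detection performed by the analysis workers. Frame sizes, the resize method, and the difference threshold are illustrative.

```python
import numpy as np

def normalize_frame(frame, size=(120, 160)):
    """Convert a frame from any source to one common worker format:
    grayscale float32 at a fixed resolution (nearest-neighbour resize)."""
    gray = np.asarray(frame, dtype=np.float32)
    if gray.ndim == 3:                            # RGB -> grayscale
        gray = gray.mean(axis=2)
    rows = np.linspace(0, gray.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size[1]).astype(int)
    return gray[np.ix_(rows, cols)]

def motion_mask(prev_frame, frame, threshold=25.0):
    """Very simple moving-object detection by frame differencing."""
    return np.abs(frame - prev_frame) > threshold

rng = np.random.default_rng(0)
f0 = normalize_frame(rng.integers(0, 40, (480, 640, 3)))
f1 = f0.copy(); f1[50:70, 80:110] += 120          # a "moving object" appears
print(int(motion_mask(f0, f1).sum()), "changed pixels")   # 600 changed pixels
```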