Abstract: A jump counting method for jump rope is provided. The jump counting method comprises: S1, obtaining original video data of a jump rope movement, and extracting audio data and image data from the original video data; S2, calculating the number of jumps of the rope jumper according to audio information and image information extracted from the audio data and the image data; and S3, outputting and displaying the calculation result.
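One way the audio branch of such a counter could work is peak detection on an amplitude envelope: each rope impact produces a short energy burst, and bursts are counted with a refractory gap to suppress double counts. The function name, threshold, and gap below are illustrative assumptions, not taken from the patent.

```python
def count_jumps(envelope, threshold=0.5, min_gap=10):
    """Count jumps from an audio amplitude envelope.

    A jump is counted when the envelope reaches `threshold`, with at
    least `min_gap` samples between successive counts so that one
    rope impact is not counted twice.
    """
    count = 0
    last_hit = -min_gap
    for i, value in enumerate(envelope):
        if value >= threshold and i - last_hit >= min_gap:
            count += 1
            last_hit = i
    return count

# Synthetic envelope: three energy bursts separated by quiet spans.
envelope = [0.1] * 20 + [0.9] + [0.1] * 20 + [0.8] + [0.1] * 20 + [0.95] + [0.1] * 5
print(count_jumps(envelope))  # 3
```

In practice the image branch would supply a second, independent count (e.g. from vertical body motion) and the two would be reconciled.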
Abstract: The invention refers to the field of processing and analyzing video data received from video surveillance cameras, and more specifically, to technologies aimed at detecting a human in a frame and at analyzing their posture for subsequent detection of potentially dangerous situations from video data. The system for detecting potentially dangerous situations contains video cameras, a memory, a graphical user interface (GUI), and a data processing device.
Type:
Grant
Filed:
July 17, 2020
Date of Patent:
October 25, 2022
Assignee:
OOO ITV Group
Inventors:
Murat Kazievich Altuev, Egor Petrovich Suchkov, Egor Yurevich Lvov
Abstract: A coding efficiency improvement is achieved by performing bit-plane coding in a manner so that coefficient groups, for which the set of coded bit-planes is predictively signaled in the datastream, are grouped into group sets, and a signal is spent in the datastream which signals, for a group set, whether the sets of coded bit-planes of all coefficient groups of the respective group set are empty, i.e. all coefficients within the respective group set are insignificant. In accordance with another aspect, a coding efficiency improvement is achieved by providing bit-plane coding with group-set-wise insignificance signalization according to the first aspect as a coding option alternative to the signalization for group sets for which it is signaled that there is no coded prediction residual for the coded bit-planes of the coefficient groups within the respective group set.
Type:
Grant
Filed:
August 8, 2020
Date of Patent:
September 27, 2022
Assignee:
Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors:
Joachim Keinert, Thomas Richter, Miguel Ángel Martínez Del Amor, Manuel De Frutos López, Christian Scherl, Herbert Thoma, Siegfried Foessel
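The signalization idea in the abstract above can be sketched as a toy encoder: one flag per group set says whether every coefficient group in the set is entirely insignificant, and only significant sets carry per-group payloads. This is a simplified illustration under assumed data layouts, not the claimed codec; the symbolic "stream" stands in for actual entropy-coded bits.

```python
def encode_group_sets(groups, groups_per_set=4):
    """Toy group-set-wise insignificance signalization.

    Each group set gets a single `set_empty` flag; when the flag is 1,
    no per-group data follows, saving bits wherever large regions of
    the transform are empty.
    """
    stream = []
    for i in range(0, len(groups), groups_per_set):
        group_set = groups[i:i + groups_per_set]
        if all(all(c == 0 for c in g) for g in group_set):
            stream.append(("set_empty", 1))      # single flag, no payload
        else:
            stream.append(("set_empty", 0))
            for g in group_set:
                stream.append(("group", g))      # per-group bit-planes follow
    return stream

groups = [[0, 0], [0, 0], [0, 0], [0, 0],   # empty set -> one flag
          [3, 0], [0, 0], [0, 1], [0, 0]]   # significant set -> full payload
print(encode_group_sets(groups))
```

The saving is largest when significance is spatially clustered, which is typical for transform coefficients of natural images.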
Abstract: The present disclosure relates generally to improving visibility artifacts associated with encoded signals. A visibility change for local image areas associated with an encoded signal can be determined through use of a plurality of channel-specific contrast sensitivity functions. Of course, other features, and related claims and combinations are provided as well.
Abstract: A vehicle image processing device includes: a plurality of buffers configured to accumulate pieces of image data input individually and sequentially from a plurality of cameras installed in a vehicle so as to associate the pieces of image data with the cameras; a processor configured to select a buffer based on state information of the vehicle and acquire the piece of image data from the selected buffer so as to perform image processing thereon; a signal line for transferring the pieces of image data in the buffers to the processor; and a transfer controller configured to output the piece of image data in the buffer requested by the processor to the signal line.
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data representing a sensor measurement of a scene captured by one or more sensors to generate an object detection output that identifies locations of one or more objects in the scene. When deployed within an on-board system of a vehicle, the object detection output that is generated can be used to make autonomous driving decisions for the vehicle with enhanced accuracy.
Type:
Grant
Filed:
July 8, 2020
Date of Patent:
September 20, 2022
Assignee:
Waymo LLC
Inventors:
Jonathon Shlens, Patrick An Phu Nguyen, Benjamin James Caine, Jiquan Ngiam, Wei Han, Brandon Chauloon Yang, Yuning Chai, Pei Sun, Yin Zhou, Xi Yi, Ouais Alsharif, Zhifeng Chen, Vijay Vasudevan
Abstract: A road obstacle detection device which uses a pre-learned first identifier to associate a semantic label with each pixel of an image, uses a pre-learned second identifier to estimate a statistical distribution of a semantic label of a predetermined region of interest of the image from a statistical distribution of a semantic label of a peripheral region that surrounds the region of interest, and uses the statistical distribution of the semantic label associated with the region of interest and the statistical distribution of the semantic label estimated for the region of interest to estimate a likelihood that an object is a road obstacle.
Abstract: According to an embodiment, a calculation device includes a registerer, a first calculator, a receiver, and a second calculator. The registerer registers a plurality of patterns. The first calculator calculates a degree of closeness between the registered patterns. The receiver receives an input pattern. The second calculator calculates a first similarity value between the input pattern and a first registered pattern among the plurality of registered patterns, calculates second similarity values between the input pattern and one or more second registered patterns having a neighbor relationship with the first registered pattern among the plurality of registered patterns, and calculates a combined similarity value in which the first similarity value and the one or more second similarity values are combined.
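The combination step in the calculation-device abstract above can be sketched as follows: the similarity to a registered pattern is blended with the similarities to its registered neighbors. The cosine similarity, the additive combination, and the weight are illustrative assumptions; the patent does not fix a particular similarity measure or combining rule.

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

def combined_similarity(query, first, neighbors, weight=0.5):
    """Combine the first similarity (query vs. a registered pattern)
    with the second similarities (query vs. that pattern's registered
    neighbors) into a single score."""
    s1 = cosine(query, first)
    s2 = [cosine(query, n) for n in neighbors]
    return s1 + weight * sum(s2) / len(s2)

# A query matching the first pattern exactly, with one orthogonal neighbor.
print(combined_similarity([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]]))  # 1.0
```

Using neighbor similarities makes the score depend on how the query sits relative to a local cluster of registered patterns, not on a single match alone.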
Abstract: In example implementations, a method is provided. The method may be executed by a processor. The method includes receiving an image. The image is analyzed to obtain facial features. Contextual information is obtained and a vector including a facial feature class of the facial features and contextual feature classes of the contextual information is generated. A facial recognition is then performed based on the vector.
Type:
Grant
Filed:
October 24, 2017
Date of Patent:
September 13, 2022
Assignee:
Hewlett-Packard Development Company, L.P.
Abstract: A product analysis system 800 includes a detection means 810, a classification means 820, and a specification means 830. The detection means 810 detects an area of change in a product shelf from a video of the product shelf. The classification means 820 classifies the change in the product shelf in the detected area of change. The specification means 830 specifies the frequency at which a customer was interested in but did not purchase a product on the basis of the classification of the change.
Abstract: A coding efficiency improvement is achieved by performing bit-plane coding in a manner so that coefficient groups, for which the set of coded bit-planes is predictively signaled in the datastream, are grouped into group sets, and a signal is spent in the datastream which signals, for a group set, whether the sets of coded bit-planes of all coefficient groups of the respective group set are empty, i.e. all coefficients within the respective group set are insignificant. In accordance with another aspect, a coding efficiency improvement is achieved by providing bit-plane coding with group-set-wise insignificance signalization according to the first aspect as a coding option alternative to the signalization for group sets for which it is signaled that there is no coded prediction residual for the coded bit-planes of the coefficient groups within the respective group set.
Type:
Grant
Filed:
August 8, 2020
Date of Patent:
September 6, 2022
Assignee:
Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors:
Joachim Keinert, Thomas Richter, Miguel Ángel Martínez Del Amor, Manuel De Frutos López, Christian Scherl, Herbert Thoma, Siegfried Foessel
Abstract: An abnormality determination device includes: an analysis unit that analyzes at least a feature amount related to a pattern of a captured image of a manhole cover, the feature amount being included in coded information obtained by coding the captured image; and a determination unit that determines based on an analysis result of the analysis unit whether the manhole cover is abnormal.
Type:
Grant
Filed:
February 19, 2019
Date of Patent:
September 6, 2022
Assignee:
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Abstract: A lane line recognition method, a lane line recognition device, and a non-volatile storage medium are provided. The method includes obtaining a first image which is a road image; dividing the first image from a middle of the first image to determine a first sub-image at a left part of the first image and a second sub-image at a right part of the first image; and performing respectively a recognition operation on the first sub-image and the second sub-image to determine a first lane line in the first sub-image and a second lane line in the second sub-image.
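The split-then-recognize flow above can be sketched on a toy grayscale image: cut at the middle column, then run a per-half detector. The "detector" here (brightest column) is a deliberately crude stand-in for the patented recognition operation, chosen only because a painted line is brighter than asphalt.

```python
def split_at_middle(image):
    """Split a road image (list of rows) into left and right
    sub-images at the middle column, so each half can be searched
    for one lane line."""
    mid = len(image[0]) // 2
    left = [row[:mid] for row in image]
    right = [row[mid:] for row in image]
    return left, right

def brightest_column(sub):
    """Toy stand-in for lane-line recognition: return the column
    index with the highest summed intensity."""
    sums = [sum(row[c] for row in sub) for c in range(len(sub[0]))]
    return sums.index(max(sums))

image = [[0, 9, 0, 0, 0, 0, 7, 0]] * 4   # bright stripes at columns 1 and 6
left, right = split_at_middle(image)
print(brightest_column(left), brightest_column(right))  # 1 2
```

Note the right-half result is a local index (column 2 of the right sub-image, i.e. global column 6); a real device would map it back to image coordinates.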
Abstract: An optical flow accelerator (OFA) which provides hardware-based acceleration of optical flow and stereo disparity determination is described. A system is described which includes an OFA configured to determine a first optical flow using a first disparity search technique, and to determine a second optical flow using a second disparity search technique that is different from the first disparity search technique. The system also includes a processor configured to combine the first optical flow and the second optical flow to generate a third optical flow. In some implementations, the first and second disparity search techniques are based upon Semi-Global Matching (SGM). In some implementations, the OFA is further configurable to determine stereo disparity.
Type:
Grant
Filed:
September 3, 2019
Date of Patent:
August 30, 2022
Assignee:
NVIDIA CORPORATION
Inventors:
Dong (Megamus) Zhang, JinYue (Gser) Lu, Zejun (Harry) Hu
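The processor step in the OFA abstract above, combining two flow fields into a third, could be realized in many ways; one simple assumed scheme is a per-pixel confidence selection between the two disparity-search results. The confidence arrays and the selection rule are illustrative, not taken from the patent.

```python
def fuse_flows(flow_a, flow_b, conf_a, conf_b):
    """Fuse two optical-flow fields per pixel by keeping the motion
    vector with the higher match confidence. Flows are flat lists of
    (dx, dy) vectors; confidences are parallel lists of floats."""
    fused = []
    for va, vb, ca, cb in zip(flow_a, flow_b, conf_a, conf_b):
        fused.append(va if ca >= cb else vb)
    return fused

a = [(1, 0), (2, 1)]
b = [(1, 1), (2, 0)]
print(fuse_flows(a, b, [0.9, 0.2], [0.4, 0.8]))  # [(1, 0), (2, 0)]
```

Selecting per pixel (rather than averaging) preserves sharp motion boundaries, which is one reason to run two differently parameterized SGM passes in the first place.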
Abstract: A computer vision (CV) training system, includes: a supervised learning system to estimate a supervision output from one or more input images according to a target CV application, and to determine a supervised loss according to the supervision output and a ground-truth of the supervision output; an unsupervised learning system to determine an unsupervised loss according to the supervision output and the one or more input images; a weakly supervised learning system to determine a weakly supervised loss according to the supervision output and a weak label corresponding to the one or more input images; and a joint optimizer to concurrently optimize the supervised loss, the unsupervised loss, and the weakly supervised loss.
Type:
Grant
Filed:
May 11, 2020
Date of Patent:
August 30, 2022
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Haoyu Ren, Mostafa El-Khamy, Jungwon Lee, Aman Raj
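Concurrent optimization of the three losses in the CV training abstract above is typically realized as a single weighted objective, so that gradients from the unlabeled and weakly labeled terms flow into the same parameters as the supervised term. The weights below are illustrative placeholders, not values from the patent.

```python
def joint_loss(supervised, unsupervised, weakly_supervised, w_u=0.1, w_w=0.5):
    """Joint objective for the three learning signals: one weighted
    sum, minimized concurrently, so the unsupervised and weakly
    supervised terms regularize the supervised predictor."""
    return supervised + w_u * unsupervised + w_w * weakly_supervised

# Example: per-batch losses from the three branches.
print(joint_loss(1.0, 2.0, 0.4))  # 1.4
```

In an actual trainer, each argument would be a differentiable tensor and the sum would be the quantity passed to the optimizer's backward pass.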
Abstract: According to one embodiment, an image analysis device includes one or more processors configured to receive input of an image; calculate feature amount information indicating a feature of a region of the image; recognize a known object from the image on the basis of the feature amount information, the known object being registered in learning data of image recognition; recognize a generalization object from the image on the basis of the feature amount information, the generalization object being generalizable from the known object; and output output information on an object identified from the image as the known object or the generalization object.
Abstract: A pet monitoring method and a pet monitoring system according to embodiments of the disclosure are provided. The method is described hereinafter. An image is obtained by a photographic device. At least one of an exercise status detection, an excretion status detection and a danger status detection of a pet is performed according to the image. Pet management information is presented on a management interface of a remote device according to a detection result.
Abstract: An automatic threat recognition (ATR) system is disclosed for scanning an article to recognize contraband items or items of interest contained within the article. The ATR system uses a CAT scanner to obtain a CT image scan of objects within the article, representing a plurality of 2D image slices of the article and its contents. Each 2D image slice includes information forming a plurality of voxels. The ATR system includes a computer and determines which voxels have a likelihood of representing materials of interest. It then aggregates those voxels to produce detected objects. The detected objects are further classified as items of interest vs. not of interest. The ATR system is based on learned parameters for a novel interaction of global and object context mechanisms. ATR system performance may be optimized by using jointly optimal global and object context parameters learned during training. The global context parameters may apply to the article as a whole and facilitate object detection.
Type:
Grant
Filed:
August 14, 2019
Date of Patent:
August 2, 2022
Assignee:
Lawrence Livermore National Security, LLC
Inventors:
David W. Paglieroni, Christian T. Pechard, Harry E. Martz, Jr.
Abstract: A video analysis system includes: a video data acquiring means that acquires video data; a moving object detecting means that detects a moving object from video data acquired by the video data acquiring means, by using a moving object detection parameter, which is a parameter for detecting a moving object; an environment information collecting means that collects environment information representing an external environment of a place where the video data acquiring means is installed; and a parameter changing means that changes the moving object detection parameter used when the moving object detecting means detects a moving object, on the basis of the environment information collected by the environment information collecting means.
Abstract: An image processing device 10 includes: a segmentation unit 21 that segments an input image 200 representing a result of observing an ocean surface region and a land surface region from overhead into ocean surface block images 211 and land surface block images 212, based on a segmentation criterion 210; a first determination unit 22 that determines a binarization criterion 220 for the ocean surface block images 211, based on a scattering model for electromagnetic waves in the ocean surface region; a second determination unit 23 that determines a binarization criterion 230 for the land surface block images 212, based on the binarization criterion 220 and a positional relationship between the ocean surface block images and the land surface block images; and a generation unit 25 that generates a land mask image 250 by performing binarization processing on the input image 200 based on the binarization criterion 220 and the binarization criterion 230.
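The final binarization step above, applying one criterion to ocean blocks and another to land blocks, can be sketched as a per-block threshold switch. The block/label representation and threshold values are assumptions for illustration; deriving the land threshold from the sea-clutter model and block adjacency is the substance of the patented determination units and is not reproduced here.

```python
def binarize_blocks(blocks, labels, t_ocean, t_land):
    """Binarize each block with the criterion matching its class:
    `t_ocean` for ocean-surface blocks, `t_land` for land-surface
    blocks. Blocks are small 2D intensity arrays (lists of rows)."""
    out = []
    for block, label in zip(blocks, labels):
        t = t_ocean if label == "ocean" else t_land
        out.append([[1 if v >= t else 0 for v in row] for row in block])
    return out

blocks = [[[5, 20]], [[5, 20]]]          # same intensities in both blocks
labels = ["ocean", "land"]
print(binarize_blocks(blocks, labels, t_ocean=10, t_land=4))
```

Identical intensities binarize differently depending on the block class, which is the point: backscatter statistics over water and over land call for different significance criteria.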