Abstract: Provided is processing that allows an information processor to output characteristic information on at least one of sports equipment and a method by which a user uses the sports equipment, the information processor executing: image acquisition processing that acquires an image of the sports equipment after use, of the sports equipment with a mark attached, or of a tool installed on the sports equipment during use; position identification processing that identifies a position of the sports equipment likely to be damaged by detecting a place of color change, a place of shape change, or the mark in the image acquired in the image acquisition processing; and information output processing that outputs the characteristic information based on the position identified in the position identification processing.
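As a rough illustration of the position identification processing, the sketch below compares an after-use image against a hypothetical reference image of the unused equipment and reports pixels whose color has changed. The abstract does not specify a detection algorithm, so the thresholded color difference is purely an assumption:

```python
# Sketch of the position-identification step: compare an image of the
# equipment after use against a hypothetical reference image of the unused
# equipment and report where the color has changed. All names are
# illustrative; the abstract does not specify the detection algorithm.
import numpy as np

def find_color_change_positions(before: np.ndarray, after: np.ndarray,
                                threshold: float = 30.0) -> list[tuple[int, int]]:
    """Return (row, col) pixel positions whose color changed beyond a threshold."""
    diff = np.linalg.norm(after.astype(float) - before.astype(float), axis=-1)
    rows, cols = np.nonzero(diff > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Example: a small synthetic 4x4 RGB image pair with one changed pixel.
before = np.zeros((4, 4, 3), dtype=np.uint8)
after = before.copy()
after[2, 1] = (200, 0, 0)  # simulated scuff mark
print(find_color_change_positions(before, after))  # -> [(2, 1)]
```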
Abstract: An information processing apparatus (2000) includes a detection unit (2020), a state estimation unit (2040), and a height estimation unit (2080). The detection unit (2020) detects a target person from a video frame. The state estimation unit (2040) estimates a state of the detected target person. The height estimation unit (2080) estimates a height of the person on the basis of a height of the target person in the video frame in a case where the estimated state satisfies a predetermined condition.
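A minimal sketch of the conditional height estimation, assuming the state estimate is a label such as "standing" and that a calibrated meters-per-pixel scale is available; neither detail is given in the abstract:

```python
# Sketch of the height-estimation idea: convert the target person's height in
# the video frame (pixels) to a real-world height, but only when the estimated
# state satisfies a condition (here, "standing"). The scale factor stands in
# for camera calibration, which the abstract does not detail.
def estimate_height(pixel_height: float, state: str,
                    meters_per_pixel: float) -> float | None:
    """Return an estimated height in meters, or None if the state disqualifies the frame."""
    if state != "standing":           # predetermined condition from the abstract
        return None
    return pixel_height * meters_per_pixel

print(estimate_height(350.0, "standing", 0.005))   # -> 1.75
print(estimate_height(350.0, "crouching", 0.005))  # -> None
```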
Abstract: In an embodiment, a pose estimation device obtains an image of an object, and generates a pose estimate of the object. The pose estimate includes a respective heatmap for each of a plurality of pose components of a pose of the object, and the respective heatmap for each of the pose components includes a respective uncertainty indication of an uncertainty of the pose component at each of one or more pixels of the image.
Type:
Grant
Filed:
March 18, 2020
Date of Patent:
April 19, 2022
Assignee:
Toyota Research Institute, Inc.
Inventors:
Kunimatsu Hashimoto, Duy-Nguyen Ta Huynh, Eric A. Cousineau, Russell L. Tedrake
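The pose estimate described above can be pictured as one heatmap per pose component, with the per-pixel values carrying the uncertainty indication. The Gaussian parameterization and component names below are assumptions for illustration:

```python
# Sketch of the pose-estimate data structure from the abstract: one heatmap
# per pose component (e.g., keypoint), where each pixel carries an
# uncertainty indication. The Gaussian form is an assumption.
import numpy as np

def make_heatmap(shape: tuple[int, int], center: tuple[int, int],
                 sigma: float) -> np.ndarray:
    """Gaussian heatmap whose spread (sigma) doubles as the per-pixel uncertainty cue."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# One heatmap per pose component; a larger sigma encodes higher uncertainty.
pose_components = {"left_wrist": (12, 20), "right_wrist": (30, 44)}
pose_estimate = {name: make_heatmap((64, 64), c, sigma=3.0)
                 for name, c in pose_components.items()}
print({k: v.shape for k, v in pose_estimate.items()})
```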
Abstract: An intelligent video surveillance system is disclosed which performs real-time analytics on a live video stream. The system includes a training database populated with frames of actual video of objects of interest taken in a relevant environment. A subset of the frames includes bounding boxes and/or bounding polygons, which can be augmented. The training database also includes classification/annotation of data/labels relevant to the object of interest, a person carrying the object of interest, and/or the background or environment. The training database is searchable by the classification/annotation of data/labels.
Type:
Grant
Filed:
May 18, 2020
Date of Patent:
April 19, 2022
Assignee:
ZeroEyes, Inc.
Inventors:
Timothy Sulzer, Michael Lahiff, Marcus Day
Abstract: An apparatus that can obtain a subtraction image efficiently is provided. An image processing apparatus obtains a target image constituted by a set of voxels arranged in a discretized manner; sets a search area in the target image; and obtains, for a partial area included in the search area, at least one of a maximum and a minimum of the voxel values and interpolated values within the search area, based on at least one of a voxel value included in the partial area and an interpolated value obtained by interpolating voxels of the target image.
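One way to read the search-area step is sketched below: within the search area, extrema are taken over both raw voxel values and values interpolated between voxels. Linear interpolation at axis midpoints is an assumption; the abstract leaves the interpolation scheme open.

```python
# Sketch of the search-area step: within a search area of a discretized
# target image, take the maximum/minimum over both raw voxel values and
# values interpolated between neighboring voxels. Midpoint linear
# interpolation is an assumption.
import numpy as np

def search_area_extrema(volume: np.ndarray, lo: tuple[int, int, int],
                        hi: tuple[int, int, int]) -> tuple[float, float]:
    """Return (min, max) over voxels and midpoint-interpolated values in [lo, hi)."""
    sub = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].astype(float)
    candidates = [sub]
    for axis in range(3):                      # midpoints along each axis
        a = sub.take(range(sub.shape[axis] - 1), axis=axis)
        b = sub.take(range(1, sub.shape[axis]), axis=axis)
        candidates.append(0.5 * (a + b))
    values = np.concatenate([c.ravel() for c in candidates])
    return float(values.min()), float(values.max())

volume = np.arange(27, dtype=float).reshape(3, 3, 3)
print(search_area_extrema(volume, (0, 0, 0), (3, 3, 3)))  # -> (0.0, 26.0)
```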
Abstract: A method for correcting nonlinearities of image data of at least one radiograph, and a computed tomography device, are provided. The method includes obtaining image data of the at least one radiograph by irradiating an object with polychromatic invasive radiation and detecting attenuated radiation that has passed through the object; utilizing a plurality of correction functions for correction purposes, each correction function being determined by the parameter value of at least one correction parameter; and applying an ascertainment method to ascertain the parameter value or values of the correction function used for correction, the ascertainment method being determined by the parameter value of an ascertainment parameter or the parameter value sets of a plurality of ascertainment parameters.
Type:
Grant
Filed:
June 29, 2019
Date of Patent:
April 12, 2022
Assignee:
Carl Zeiss Industrielle Messtechnik GmbH
Abstract: A noise map is used for defect detection. One or more measurements of intensities at one or more pixels are received and an intensity statistic is determined for each measurement. The intensity statistics are grouped into at least one region and stored with at least one alignment target. A wafer can be inspected with a wafer inspection tool using the noise map. The noise map can be used as a segmentation mask to suppress noise.
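A minimal sketch of the noise-map idea, assuming the intensity statistic is a per-pixel standard deviation over repeated measurements and that the map suppresses detections in noisy regions; the statistic, thresholds, and names are illustrative, not from the patent:

```python
# Sketch of using a noise map as a segmentation mask during inspection:
# candidate defect signals are suppressed wherever the per-pixel intensity
# statistic (here, a standard deviation) exceeds a threshold.
import numpy as np

def build_noise_map(measurements: np.ndarray) -> np.ndarray:
    """Per-pixel intensity statistic over repeated measurements (axis 0)."""
    return measurements.std(axis=0)

def detect_defects(image: np.ndarray, noise_map: np.ndarray,
                   signal_thresh: float, noise_thresh: float) -> np.ndarray:
    """Boolean defect mask: strong signal, but only where the noise map is quiet."""
    return (image > signal_thresh) & (noise_map < noise_thresh)

rng = np.random.default_rng(0)
measurements = rng.normal(0.0, 1.0, size=(10, 8, 8))   # 10 repeated scans
image = measurements.mean(axis=0)
image[4, 4] = 5.0                                      # injected defect signal
mask = detect_defects(image, build_noise_map(measurements), 3.0, 2.0)
print(np.argwhere(mask))  # -> [[4 4]]
```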
Abstract: A matching apparatus for characterising the human face in order to facilitate the search for people with similar faces. The apparatus uses 3D modelling of a variety of image sources including video to characterise a subject's face using a set of parameters. These parameters are then used to identify other people or image sources which have a set of parameters which are similar to the subject's. Feedback from the users is used to improve future matching.
Abstract: A method includes determining a detection output that represents an object in a two-dimensional image using a detection model, wherein the detection output includes a shape definition that describes a shape and size of the object; defining a three-dimensional representation based on the shape definition, wherein the three-dimensional representation includes a three-dimensional model that represents the object that is placed in three-dimensional space according to a position and a rotation; determining a three-dimensional detection loss that describes a difference between the three-dimensional representation and three-dimensional sensor information; and updating the detection model based on the three-dimensional detection loss.
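The training signal in this abstract can be sketched as follows: place a 3D box according to the detection's shape definition, compare it against 3D sensor points, and use the discrepancy as the loss. The nearest-corner point distance is an assumption standing in for whatever loss the patent actually uses, and rotation is omitted for brevity:

```python
# Sketch of a 3D detection loss: place an axis-aligned 3D box from the
# detection output, then score the mean distance from sensor points to the
# box's nearest corner. The distance measure is illustrative only.
import numpy as np

def box_corners(center: np.ndarray, size: np.ndarray) -> np.ndarray:
    """Eight corners of an axis-aligned 3D box (rotation omitted for brevity)."""
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                      for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
    return center + 0.5 * signs * size

def detection_loss_3d(center: np.ndarray, size: np.ndarray,
                      sensor_points: np.ndarray) -> float:
    """Mean distance from each sensor point to the nearest predicted box corner."""
    corners = box_corners(center, size)
    d = np.linalg.norm(sensor_points[:, None, :] - corners[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# A perfectly placed box scores lower than a shifted one; the gradient of
# this loss would drive the detection model update described in the abstract.
pts = box_corners(np.zeros(3), np.array([2.0, 1.0, 1.5]))
print(detection_loss_3d(np.zeros(3), np.array([2.0, 1.0, 1.5]), pts))            # -> 0.0
print(detection_loss_3d(np.array([0.5, 0, 0]), np.array([2.0, 1.0, 1.5]), pts))  # > 0
```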
Abstract: A vehicle monitoring system includes at least one camera, and a server that is communicably connected to the camera and a client terminal. The camera captures a license plate and an occupant's face of a vehicle entering an angle of view of the camera, and transmits a captured video to the server. The server acquires an analysis result of the license plate, analysis results of a type and a color of the vehicle, an analysis result of the occupant's face, and an analysis result of the number of occupants based on the captured video, and accumulates the acquired analysis results as analysis results of the captured video, and sends the analysis result to the client terminal in correlation with a snapshot of the captured video.
Abstract: In an example embodiment, techniques are provided for 3D object detection by detecting objects in 2D (as 2D bounding boxes) in a set of calibrated 2D images of a scene, matching the 2D bounding boxes that correspond to the same object and reconstructing objects in 3D (represented as 3D bounding boxes) from the corresponding, matched 2D bounding boxes. The techniques may leverage the advances in 2D object detection to address the unresolved issue of 3D object detection. If sparse 3D points for the scene are available (e.g., as a byproduct of SfM photogrammetry reconstruction) they may be used to refine the 3D bounding boxes (e.g., to reduce their size).
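The refinement step using sparse 3D points can be sketched directly: points that project inside the matched 2D bounding box in every calibrated view are gathered, and their extent gives the 3D box. Standard 3x4 pinhole projection matrices are assumed, and the 2D matching step is taken as already done:

```python
# Sketch of 3D box reconstruction from matched 2D boxes plus sparse 3D
# points (e.g., from SfM): keep points that fall inside the matched box in
# every view, and take their axis-aligned extent as the 3D bounding box.
import numpy as np

def project(P: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Project 3D points (N,3) with a 3x4 camera matrix into pixel coords (N,2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def box_from_matched_boxes(points3d, cameras, boxes2d):
    """3D AABB from points falling inside the matched 2D box in every view."""
    keep = np.ones(len(points3d), dtype=bool)
    for P, (x0, y0, x1, y1) in zip(cameras, boxes2d):
        uv = project(P, points3d)
        keep &= (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
                (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    inliers = points3d[keep]
    return inliers.min(axis=0), inliers.max(axis=0)

# Two calibrated views of a point cluster, plus one off-object point that
# is rejected because it projects outside the first view's 2D box.
K = np.array([[100, 0, 50], [0, 100, 50], [0, 0, 1]], dtype=float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
rng = np.random.default_rng(1)
obj = rng.uniform([-0.5, -0.5, 4.5], [0.5, 0.5, 5.5], size=(50, 3))
pts = np.vstack([obj, [[3.0, 0.0, 5.0]]])
lo, hi = box_from_matched_boxes(pts, [P1, P2],
                                [(35, 35, 65, 65), (15, 35, 45, 65)])
print(lo.round(2), hi.round(2))
```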
Abstract: An example virtual reality (VR) system includes a VR headset, a plurality of monitoring stations, and an image processing device. Each of the monitoring stations includes an image sensor to capture images of the VR headset and an environment in which the VR headset is used. The image processing device is to process the images captured by the monitoring stations to detect an object in the environment, and to transmit, to the VR headset, information that includes notification of the object detected.
Type:
Grant
Filed:
April 21, 2017
Date of Patent:
March 8, 2022
Assignee:
Hewlett-Packard Development Company, L.P.
Inventors:
Alexander Wayne Clark, Brandon James Lee Haist, Henry Wang
Abstract: The luminance atmosphere intended by the creator is faithfully reproduced on the receiving end. The transmission video data is obtained by applying a predetermined opto-electrical transfer function to the input video data. The transmission video data is transmitted together with luminance conversion acceptable range information about a set region in the screen. For example, a transmitting unit transmits a video stream obtained by encoding the transmission video data while inserting the luminance conversion acceptable range information into a layer of the video stream. The receiving end obtains display video data by applying an electro-optical transfer function corresponding to the predetermined opto-electrical transfer function to the transmission video data, and performing a luminance conversion process in each of the set regions independently in accordance with the luminance conversion acceptable range information.
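A minimal sketch of the transfer-function round trip, with a simple gamma pair standing in for the unspecified "predetermined" functions and the per-region luminance conversion elided:

```python
# Sketch of the OETF/EOTF round trip from the abstract: the sender encodes
# linear light with an opto-electrical transfer function, and the receiver
# decodes with the corresponding electro-optical transfer function. The
# gamma pair is an assumption; real systems use curves such as PQ or HLG.
def oetf(linear_luminance: float, gamma: float = 2.4) -> float:
    """Encode linear light into transmission code values."""
    return linear_luminance ** (1.0 / gamma)

def eotf(code_value: float, gamma: float = 2.4) -> float:
    """Decode transmission code values back to display light."""
    return code_value ** gamma

x = 0.18  # mid-gray linear luminance (normalized)
print(round(eotf(oetf(x)), 6))  # -> 0.18, the round trip is lossless
```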
Abstract: An image processing device includes a marker detector configured to detect markers including white lines extending in two directions on a road surface based on an image signal from an imager that takes an image of the road surface around a vehicle, a parking frame detector configured to identify adjacent markers on the road surface among the detected markers and to detect a parking frame defined by the adjacent markers based on a distance between the adjacent markers, and a shape estimator configured to detect extending directions of the white lines of the markers that are included in the detected parking frame and to estimate a shape of the parking frame based on the extending directions of the detected white lines.
Abstract: A method of detecting and responding to a visitor to a smart home environment via an electronic greeting system of the smart home environment, including determining that a visitor is approaching an entryway of the smart home environment; initiating a facial recognition operation while the visitor is approaching the entryway; initiating an observation window in response to the determination that a visitor is approaching the entryway; obtaining context information from one or more sensors of the smart home environment during the observation window; and, at the end of the observation window, initiating a response to the detected approach of the visitor based on the context information and/or an outcome of the facial recognition operation.
Type:
Grant
Filed:
May 26, 2020
Date of Patent:
February 22, 2022
Assignee:
Google LLC
Inventors:
Jason Evans Goulden, Rengarajan Aravamudhan, Hae Rim Jeong, Michael Dixon, James Edward Stewart, Sayed Yusef Shafi, Sahana Mysore, Seungho Yang, Yu-An Lien, Christopher Charles Burns, Rajeev Nongpiur, Jeffrey Boyd
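The response-selection step at the end of the observation window can be sketched as a decision over the recognition outcome and gathered context. The specific responses and context fields below are illustrative, not from the patent:

```python
# Sketch of the greeting-system response decision: combine the facial
# recognition outcome with context information from sensors gathered
# during the observation window. All labels are hypothetical.
def choose_response(face_result: str | None, context: dict) -> str:
    """Pick a greeting-system response from recognition outcome and context."""
    if face_result == "known_person":
        return "announce_visitor"             # e.g., notify occupants by name
    if context.get("package_detected"):
        return "play_delivery_instructions"
    if face_result is None and context.get("lingering_seconds", 0) > 30:
        return "alert_occupants"
    return "ring_doorbell"

print(choose_response("known_person", {}))               # -> announce_visitor
print(choose_response(None, {"lingering_seconds": 45}))  # -> alert_occupants
```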
Abstract: This relates to systems and processes for using a virtual assistant to control electronic devices. In one example process, a user can speak an input in natural language form to a user device to control one or more electronic devices. The user device can transmit the user speech to a server to be converted into a textual representation. The server can identify the one or more electronic devices and appropriate commands to be performed by the one or more electronic devices based on the textual representation. The identified one or more devices and commands to be performed can be transmitted back to the user device, which can forward the commands to the appropriate one or more electronic devices for execution. In response to receiving the commands, the one or more electronic devices can perform the commands and transmit their current states to the user device.
Type:
Grant
Filed:
May 22, 2020
Date of Patent:
February 22, 2022
Assignee:
Apple Inc.
Inventors:
Ryan M. Orr, Garett R. Nell, Benjamin L. Brumbaugh
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed. An example apparatus includes an index map generator to generate a first index map based on a first virtual environment generated by a first mobile device, the first mobile device associated with a first user, a collision detector to determine a collision likelihood based on the first index map, and an object placer to, in response to the collision likelihood satisfying a threshold, modify the first virtual environment.
Abstract: Embodiments of the present invention provide a translation method and apparatus, and relate to the field of machine translation. The method includes: obtaining a to-be-translated sentence, where the to-be-translated sentence is a sentence expressed in a first language; determining a first named entity set in the to-be-translated sentence, and an entity type of each first named entity in the first named entity set; determining, based on the first named entity set and the entity type of each first named entity, a second named entity set expressed in a second language; determining a source semantic template of the to-be-translated sentence, and obtaining a target semantic template corresponding to the source semantic template from a semantic template correspondence; and determining a target translation sentence based on the second named entity set and the target semantic template.
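A toy sketch of the template-based flow: named entities are detected in the to-be-translated sentence, mapped into the second language, and slotted into the target semantic template looked up from the template correspondence. The one-entry lexicon and templates are illustrative stand-ins for the patent's named-entity sets and correspondence tables:

```python
# Sketch of template-based translation: mask named entities by type to form
# a source semantic template, look up the corresponding target template, and
# fill it with second-language entities. The toy data is illustrative only.
entity_lexicon = {"Beijing": "Pékin"}                         # first -> second language
template_map = {"{CITY} is beautiful.": "{CITY} est belle."}  # source -> target templates

def translate(sentence: str) -> str:
    # First named entity set and entity types (toy dictionary-based NER).
    entities = {w.strip(".,"): "CITY" for w in sentence.split()
                if w.strip(".,") in entity_lexicon}
    # Source semantic template: mask each entity with its entity type.
    template = sentence
    for surface in entities:
        template = template.replace(surface, "{" + entities[surface] + "}")
    # Target template from the correspondence, filled with target entities.
    target = template_map[template]
    for surface in entities:
        target = target.replace("{" + entities[surface] + "}",
                                entity_lexicon[surface])
    return target

print(translate("Beijing is beautiful."))  # -> Pékin est belle.
```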
Abstract: Facial recognition system that compares narrow band ultraviolet-absorbing skin chromophores to identify a subject. Ultraviolet images show much greater facial detail than visible light images, so matching of ultraviolet images may be much more accurate. Embodiments of the system may have a camera that is sensitive to ultraviolet, and a special lens and filter that pass the relevant ultraviolet wavelengths. A database of known persons may contain reference ultraviolet facial images tagged with each person's identity. Reference images and subject images may be processed to locate the face, identify features (such as chromophores), compare and match feature descriptors, and calculate correlation scores between the subject image and each reference image. If the subject is moving, the subject's face may be tracked, a 3D model of the subject's face may be developed from multiple images, and this model may be rotated so that the orientation matches that of the reference images.
Type:
Grant
Filed:
May 17, 2021
Date of Patent:
January 25, 2022
Assignee:
VR Media Technology, Inc.
Inventors:
Charles Gregory Passmore, Sabine Bredow
Abstract: A posture estimation system includes: a marker that is mounted on an object; a camera that captures an image of the object; an analysis section that analyzes a posture of the object on a basis of a position of the marker included in a captured image captured by the camera; an inertial measurement section that is provided on the object to detect motion of the object; a calculation section that calculates calculated posture information indicating the posture of the object on a basis of detection information indicating a detection result from the inertial measurement section; an estimation section that estimates the posture of the object on a basis of the calculated posture information and analyzed posture information indicating the posture of the object analyzed by the analysis section; and a correction section that corrects an error included in the calculated posture information on a basis of the analyzed posture information.
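The fusion described in this abstract follows a familiar pattern: integrate the inertial measurement to get a calculated posture, then correct its accumulated drift with the marker-based posture from the camera. A one-dimensional complementary filter on a single angle is a minimal sketch of that idea; the real system estimates a full 3D posture:

```python
# Sketch of IMU/marker fusion: the calculation section integrates gyro
# rates, the analysis section supplies marker-based angles when available,
# and the correction section pulls the integrated angle toward the marker
# observation, bounding the drift. A 1D angle stands in for 3D posture.
def fuse_posture(gyro_rates, marker_angles, dt=0.01, gain=0.8):
    """Blend integrated gyro angle with marker observations; returns angle history."""
    angle, history = 0.0, []
    for rate, marker in zip(gyro_rates, marker_angles):
        angle += rate * dt                    # calculation section: IMU integration
        if marker is not None:                # analysis section: marker observation
            angle += gain * (marker - angle)  # correction section: error correction
        history.append(angle)
    return history

# A biased gyro (0.5 rad/s) with the object actually stationary; marker
# fixes at the true angle 0.0 arrive every 10th frame.
rates = [0.5] * 100
markers = [0.0 if i % 10 == 9 else None for i in range(100)]
print(round(fuse_posture(rates, markers)[-1], 3))
# -> ~0.012, bounded near 0; pure integration would have drifted to 0.5
```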