Abstract: The described positional awareness techniques, employing visual-inertial sensory data gathering and analysis hardware with reference to specific example implementations, implement improvements in the use of sensors, techniques and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy.
Abstract: A method for identifying a state of a food package by using a camera can include capturing image data depicting a section of the food package by using the camera. The section includes at least one package feature. The method can further include identifying a package feature sub-set of the image data depicting the at least one package feature provided in the section, and determining the state of the food package based on the package feature sub-set of the image data. The state is selected from a food holding state and an emptied state.
Type:
Grant
Filed:
March 19, 2021
Date of Patent:
February 13, 2024
Assignee:
Tetra Laval Holdings & Finance S.A.
Inventors:
Sara Egidi, Gabriele Molari, Federico Campo
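The state determination above can be sketched as a simple classifier over the package-feature sub-set of the image. All names, the brightness-based decision rule, and the tolerance value are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch: decide whether a package section looks "food holding"
# or "emptied" by comparing the mean brightness of the package-feature
# sub-set of the image against a reference value.
FOOD_HOLDING, EMPTIED = "food_holding", "emptied"

def classify_package_state(image, feature_box, reference_mean, tolerance=30.0):
    # image: list of rows of grayscale values; feature_box: (row, col, h, w)
    # locating the package feature within the captured section.
    r, c, h, w = feature_box
    pixels = [image[i][j] for i in range(r, r + h) for j in range(c, c + w)]
    mean = sum(pixels) / len(pixels)
    return FOOD_HOLDING if abs(mean - reference_mean) <= tolerance else EMPTIED
```

A real system would replace the brightness test with whatever feature comparison the camera pipeline supports; the point is that only the feature sub-set, not the whole frame, drives the decision.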
Abstract: A computer-implemented method of generating metadata from an image may comprise sending the image to an object detection service, which generates detections metadata from the image. The image may also be sent to a visual features extractor, which extracts visual features metadata from the image. The generated detections metadata may then be sent to an uncertainty score calculator, which computes an uncertainty score from the detections metadata. The uncertainty score may be related to a level of uncertainty within the detections metadata. The image, the visual features metadata, the detections metadata and the uncertainty score may then be stored in a database accessible over a computer network.
Type:
Grant
Filed:
June 4, 2021
Date of Patent:
February 6, 2024
Assignee:
VADE USA, INCORPORATED
Inventors:
Mehdi Regina, Maxime Marc Meyer, Sébastien Goutal
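The metadata pipeline described above can be outlined as follows. The entropy-based uncertainty score and all function names are assumptions for illustration; the patent does not specify how the score is computed:

```python
import math

# Illustrative pipeline shape: detect, extract features, score uncertainty,
# then store everything together. The entropy formula is an assumed example
# of a score that grows as detection confidences become ambiguous.
def uncertainty_score(detections):
    if not detections:
        return 1.0
    def entropy(p):
        q = min(max(p, 1e-9), 1 - 1e-9)
        return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))
    return sum(entropy(d["confidence"]) for d in detections) / len(detections)

def process_image(image, detect, extract_features, store):
    detections = detect(image)                # object detection service
    features = extract_features(image)        # visual features extractor
    score = uncertainty_score(detections)     # uncertainty score calculator
    store({"image": image, "features": features,
           "detections": detections, "uncertainty": score})
    return score
```

Storing the uncertainty score alongside the metadata lets later consumers, for example an active-learning loop, query the database for the most ambiguous images first.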
Abstract: A method, computer program, and computer system is provided for processing point cloud data. Quantized point cloud data including a plurality of voxels is received. An occupancy map is generated for the quantized point cloud, corresponding to voxels, from among the plurality of voxels, that were lost during quantization. A point cloud is reconstructed from the quantized point cloud data based on populating the lost voxels.
Type:
Grant
Filed:
June 11, 2021
Date of Patent:
February 6, 2024
Assignee:
TENCENT AMERICA LLC
Inventors:
Anique Akhtar, Wen Gao, Xiang Zhang, Shan Liu
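The quantize-then-repopulate flow above can be sketched with a minimal voxel representation. The occupancy-count encoding is an assumption; the actual occupancy map format is defined by the patent, not this sketch:

```python
# Minimal sketch: quantize points to integer voxel coordinates, record how
# many points merged into each voxel (information "lost" by quantization),
# then repopulate those voxels when reconstructing the cloud.
def quantize(points, step):
    occupancy = {}                       # voxel -> count of merged points
    for x, y, z in points:
        v = (int(x // step), int(y // step), int(z // step))
        occupancy[v] = occupancy.get(v, 0) + 1
    return list(occupancy), occupancy

def reconstruct(voxels, occupancy, step):
    # Repopulate each voxel with as many points as were merged into it,
    # placed at the voxel centre (the true positions are not recoverable).
    out = []
    for v in voxels:
        centre = tuple((c + 0.5) * step for c in v)
        out.extend([centre] * occupancy[v])
    return out
```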
Abstract: Systems and methods provide for optimizing video review using motion recap images. A video review system can identify background image data of a video clip including an amount of motion satisfying a motion threshold. The video review system can generate foreground mask data segmenting foreground image data, representing a moving object in the video clip, from the background image data. The video review system can select a set of instances of the moving object represented in the foreground image data. The video review system can generate a motion recap image by superimposing the set of instances of the moving object represented in the foreground image data onto the background image data.
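The superimposition step can be sketched directly: each selected instance is a foreground mask plus its pixel data, pasted over a copy of the background. Plain 2-D grayscale lists stand in for real frames, and the instance selection itself is assumed to have happened upstream:

```python
# Hedged sketch of motion-recap composition: paste each selected foreground
# instance (mask, pixels) onto the background to show the object at several
# moments of its trajectory in a single still image.
def motion_recap(background, instances):
    recap = [row[:] for row in background]    # copy, keep background intact
    for mask, pixels in instances:            # one instance per time step
        for i, mask_row in enumerate(mask):
            for j, on in enumerate(mask_row):
                if on:
                    recap[i][j] = pixels[i][j]
    return recap
```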
Abstract: A processing device receives image data of an actual condition of a patient's dental arch at a stage of an orthodontic treatment plan. The processing device compares the image data of the actual condition of the patient's dental arch to a planned condition of the patient's dental arch for the stage of the orthodontic treatment plan. The processing device determines an updated orthodontic treatment plan based on one or more differences between the actual condition of the patient's dental arch and the planned condition of the patient's dental arch determined from the comparing. The processing device generates one or more three-dimensional (3D) models for one or more new orthodontic aligners in accordance with the updated orthodontic treatment plan.
Type:
Grant
Filed:
May 31, 2022
Date of Patent:
January 16, 2024
Assignee:
Align Technology, Inc.
Inventors:
Avi Kopelman, Rene M. Sterental, Igor Kvasov, Mikhail Dyachenko, Roman A. Roschin, Leonid Vyacheslavovich Grechishnikov
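The comparison step above, actual versus planned dental arch condition, can be illustrated with a toy deviation check. The per-tooth position representation and the tolerance are invented for the sketch; the patent's comparison operates on richer 3-D data:

```python
# Illustrative sketch: trigger an updated treatment plan when any tooth in
# the imaged (actual) arch deviates from its planned position by more than a
# tolerance. Positions are hypothetical (x, y, z) coordinates in mm.
def needs_updated_plan(actual, planned, tolerance=0.5):
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    return any(dist(actual[t], planned[t]) > tolerance for t in planned)
```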
Abstract: Provided are a detection method, an electronic device, and a computer-readable storage medium. The detection method includes: receiving an image of a product to be detected, and detecting a position of a preset feature point on a cover plate based on the image, where the product to be detected includes a connection piece and a cover plate, the connection piece is located on the cover plate and covers a partial region of the cover plate, and the preset feature point is located in a non-edge region on the cover plate that is not covered by the connection piece; obtaining a position of the cover plate in the image based on the position of the preset feature point and a size of the cover plate; and obtaining a position of the connection piece in the image based on the image.
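The second step of that method, deriving the cover plate's position from the detected feature point and the known plate size, reduces to simple image geometry. The fixed feature-point offset is an assumption used to make the sketch concrete:

```python
# Hypothetical sketch: given a detected feature point, its assumed fixed
# offset from the plate's top-left corner, and the known plate size, return
# the plate's bounding box in image coordinates.
def cover_plate_bbox(feature_point, offset, size):
    fx, fy = feature_point
    ox, oy = offset
    w, h = size
    left, top = fx - ox, fy - oy
    return (left, top, left + w, top + h)
```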
Abstract: Particle or hadron therapy is used on abnormal tissue using carbon atoms, protons, or helium atoms run through a linear accelerator and then directed at the target in the body. This can be used to treat, for example, atrial fibrillation, ventricular tachycardia, hypertension, seizures, gastrointestinal maladies, etc. Contouring and gating may be used to account for cardiac and respiratory motion, helping reduce collateral damage.
Type:
Grant
Filed:
August 31, 2018
Date of Patent:
January 2, 2024
Assignee:
Mayo Foundation for Medical Education and Research
Inventors:
Douglas L. Packer, Samuel J. Asirvatham, Suraj Kapa
Abstract: Various embodiments described herein provide for detection of a particular object within a scene depicted by image data by using a coarse-to-fine approach/strategy based on one or more relationships of objects depicted within the scene.
Abstract: In various aspects, there is provided a system and method for determining a mediated reality positioning offset for a virtual camera pose to display geospatial object data. The method comprising: determining or receiving positioning parameters associated with a neutral position; subsequent to determining or receiving the positioning parameters associated with the neutral position, receiving positioning data representing a subsequent physical position; determining updated positioning parameters associated with the subsequent physical position; determining an updated offset comprising determining a geometric difference between the positioning parameters associated with the neutral position and the updated positioning parameters associated with the subsequent physical position; and outputting the updated offset.
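The geometric difference at the heart of that method can be sketched for a minimal parameter set. Representing the positioning parameters as (x, y, z, yaw) is an assumption; the heading wrap keeps the angular component in a usable range:

```python
# Illustrative sketch: the updated offset is the element-wise geometric
# difference between the neutral-position parameters and the updated
# parameters, later applied to the virtual camera pose.
def positioning_offset(neutral, updated):
    # neutral/updated: (x, y, z, yaw_degrees)
    dx, dy, dz = (updated[i] - neutral[i] for i in range(3))
    dyaw = (updated[3] - neutral[3] + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
    return (dx, dy, dz, dyaw)
```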
Abstract: A panoptic labeling system includes a modified panoptic labeling neural network (“modified PLNN”) that is trained to generate labels for pixels in an input image. The panoptic labeling system generates modified training images by combining training images with mask instances from annotated images. The modified PLNN determines a set of labels representing categories of objects depicted in the modified training images. The modified PLNN also determines a subset of the labels representing categories of objects depicted in the input image. For each mask pixel in a modified training image, the modified PLNN calculates a probability indicating whether the mask pixel has the same label as an object pixel. The modified PLNN generates a mask label for each mask pixel, based on the probability. The panoptic labeling system provides the mask label to, for example, a digital graphics editing system that uses the labels to complete an infill operation.
Abstract: Systems, devices, and methods for identifying a region of interest are provided. A plurality of skeletal landmarks may be identified from an image received from an imaging device. A pose of a patient may be determined based on the plurality of skeletal landmarks. A region of interest may be identified on the patient based on the determined pose. Instructions may be automatically provided to the controller to adjust a pose of a surgical instrument relative to the region of interest. The plurality of skeletal landmarks may be tracked for movement. The region of interest may be updated when movement of the plurality of skeletal landmarks is detected.
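The landmark-driven region of interest above can be sketched crudely as a padded bounding box over the tracked skeletal landmarks. Real pose-specific ROI logic would be far richer; the box is a stand-in:

```python
# Hedged sketch: derive a region of interest from tracked skeletal
# landmarks by padding their bounding box. landmarks: dict name -> (x, y).
def region_of_interest(landmarks, margin=10.0):
    xs = [p[0] for p in landmarks.values()]
    ys = [p[1] for p in landmarks.values()]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

Re-running this on each frame gives the update-on-movement behaviour the abstract describes: when the landmarks move, the returned ROI moves with them.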
Abstract: An eye state detection-based image processing method and apparatus, a device, and a storage medium are provided. The method comprises: detecting eye states of a target face in an image set to be processed to obtain target area images in which the eye states meet a preset condition, then determining therefrom a target effect image corresponding to the target face, and finally synthesizing the target effect image onto a reference image in the image set to be processed to obtain a target image corresponding to the image set to be processed.
Abstract: An activity recognition system is disclosed. A plurality of temporal features is generated from a digital representation of an observed activity using a feature detection algorithm. An observed activity graph comprising one or more clusters of temporal features generated from the digital representation is established, wherein each one of the one or more clusters of temporal features defines a node of the observed activity graph. At least one contextually relevant scoring technique is selected from similarity scoring techniques for known activity graphs, the at least one contextually relevant scoring technique being associated with activity ingestion metadata that satisfies device context criteria defined based on device contextual attributes of the digital representation, and a similarity activity score is calculated for the observed activity graph as a function of the at least one contextually relevant scoring technique, the similarity activity score being relative to at least one known activity graph.
Abstract: Embodiments of the disclosure provide systems and methods for performing motion transfer using a learning model. An exemplary system may include a communication interface configured to receive a first image including a first movable object and a second image including a second movable object. The system may also include at least one processor coupled to the communication interface. The at least one processor may be configured to extract a first set of motion features of the first movable object from the first image using a first encoder of the learning model and extract a first set of static features of the second movable object from the second image using a second encoder of the learning model. The at least one processor may also be configured to generate a third image by synthesizing the first set of motion features and the first set of static features.
Type:
Grant
Filed:
September 14, 2020
Date of Patent:
November 28, 2023
Assignee:
BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
Inventors:
Zhengping Che, Kun Wu, Bo Jiang, Chengxiang Yin, Jian Tang
Abstract: Provided are a mobile device (100) and a computer-implemented method (700) for localisation in an existing map of a 3-D environment. For a first image frame, a first pose is localised based on visual features. For a second image frame, a pose is predicted (810) based on inertial measurements, combined with the pose of the first image frame. Based on the predicted pose, the method predicts (830) a set of landmarks that are likely to be visible. A second pose is then calculated (850), for the second image frame, based on matching (840) visual features of the second image frame to the set of landmarks.
Type:
Grant
Filed:
October 15, 2021
Date of Patent:
November 28, 2023
Assignee:
SLAMcore Limited
Inventors:
Pablo Alcantarilla, Alexandre Morgand, Maxime Boucher
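The predict-then-match structure of that localisation method can be sketched in two small steps: an inertial pose prediction, then a visibility filter over map landmarks. A 2-D constant-velocity model and a range-only visibility test are simplifying assumptions:

```python
# Hedged sketch of steps 810 and 830: predict a pose from inertial data,
# then predict which map landmarks are likely visible from it. The matching
# step (840/850) would then only search these candidates.
def predict_pose(prev_pose, velocity, dt):
    # Constant-velocity integration from the previously localised pose;
    # pose is (x, y, heading) for brevity.
    x, y, th = prev_pose
    vx, vy, vth = velocity
    return (x + vx * dt, y + vy * dt, th + vth * dt)

def visible_landmarks(pred_pose, landmarks, max_range):
    px, py, _ = pred_pose
    return [l for l in landmarks
            if ((l[0] - px) ** 2 + (l[1] - py) ** 2) ** 0.5 <= max_range]
```

Restricting feature matching to the predicted-visible set is what makes this coarse prediction pay off: the second pose is computed against a short candidate list instead of the whole map.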
Abstract: Systems and methods for identifying an object and presenting additional information about the identified object are provided. The techniques of the present invention can allow the user to specify modes to help with identifying objects. Furthermore, the additional information can be provided with different levels of detail depending on user selection. Apparatus for presenting a user with a log of the identified objects is also provided. The user can customize the log by, for example, creating a multi-media album.
Abstract: Systems and methods are disclosed for locating a retroreflective object in a digital image and/or identifying a feature of the retroreflective object in the digital image. In certain environmental conditions, e.g. on a sunny day, or when the retroreflective material is damaged or soiled, it may be more challenging to locate the retroreflective object in the digital image and/or to identify a feature of the object in the digital image. The systems and methods disclosed herein may be particularly suited for object location and/or feature identification in situations in which there is a strong source of ambient light (e.g. on a sunny day) and/or when the retroreflective material on the object is damaged or soiled.
Type:
Grant
Filed:
May 31, 2022
Date of Patent:
November 28, 2023
Assignee:
GENETEC INC.
Inventors:
Louis-Antoine Blais-Morin, Pablo Augustin Cassani
Abstract: A method for reducing violence within crowded venues is provided. The method includes reading license plates of vehicles passing into entry ports of a parking area, and capturing facial images of persons seeking admission to the venue. A computer compares such license plates to a database of vehicle license plates associated with persons with past histories of violence. A computer also compares captured facial images to a database of facial data for persons with past violent histories. Upon detecting a match, the computer creates an alert presented to law enforcement officers to facilitate detention of such persons for investigation. Information recorded on entry tickets is scanned and saved together with the facial image of the ticket holder. If a violent act occurs, cameras within the venue capture facial images of participants. The computer matches such participants to stored identifying data to assist in the identification and apprehension of such persons.
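The two database comparisons in that method, license plates and facial data, can be sketched as a single watch-list check. Reducing facial matching to exact hash equality is a deliberate simplification; real facial comparison is a similarity search:

```python
# Illustrative sketch: flag an entrant for investigation when either the
# read license plate or the captured face identifier matches stored data
# for persons with past violent histories.
def check_entry(plate, face_id, violent_plates, violent_faces):
    alerts = []
    if plate in violent_plates:
        alerts.append(("plate", plate))
    if face_id in violent_faces:
        alerts.append(("face", face_id))
    return alerts
```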