Abstract: A super-resolution reconstruction preprocessing method for contrast-enhanced ultrasound images includes: acquiring an image set to be preprocessed; acquiring a grayscale fluctuation signal for each pixel point in the registered contrast-enhanced ultrasound images to be preprocessed; performing a denoising reconstruction operation on the image set to be preprocessed to obtain a reconstructed feature parameter image; and performing interpolation calculation on the reconstructed feature parameter image to obtain a sparse microbubble image. By analyzing the grayscale fluctuation signals of the collocated pixel point set across the plurality of frames of the registered contrast-enhanced ultrasound images to be preprocessed, the signal-to-noise ratio and the signal-to-background ratio are improved.
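The pipeline above can be sketched in two steps: a per-pixel temporal statistic over the registered frames (a feature parameter image), followed by interpolation of that feature image. The abstract does not specify the statistic or the interpolation kernel; the temporal standard deviation and bilinear upsampling below are illustrative stand-ins.

```python
import numpy as np

def fluctuation_feature_map(frames):
    """Feature parameter image: per-pixel grayscale fluctuation across
    registered frames. Temporal standard deviation is an assumed
    statistic; the patent does not name one."""
    stack = np.asarray(frames, dtype=float)   # shape (n_frames, H, W)
    return stack.std(axis=0)

def bilinear_upsample(img, factor):
    """Interpolate the 2-D feature image onto a finer grid (illustrative
    bilinear interpolation; the exact scheme is not specified)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Pixels with strong temporal fluctuation correspond to transiting microbubbles, so thresholding the upsampled feature map would yield the sparse microbubble image the abstract describes.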
Abstract: An information processing apparatus includes a controller configured to determine, upon detection of an impact applied to an object, whether the impact has caused damage to the object, and, when it is determined that damage has been caused, identify a cause of the damage based on a result of observation of the surrounding environment of the object at the time the impact was applied.
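The control flow described above is simple to outline: detect impact, decide whether damage occurred, and only then infer a cause from the contemporaneous observation. The observation fields, threshold, and cause labels below are illustrative assumptions, not from the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """Snapshot of the object's surroundings at the moment of impact.
    Field names are hypothetical examples of observable context."""
    nearby_objects: list = field(default_factory=list)
    acceleration_g: float = 0.0

def identify_cause(observation, damage_detected):
    """Sketch of the described controller logic: return a cause string
    only when damage was determined, else None."""
    if not damage_detected:
        return None
    if observation.nearby_objects:
        # An object observed nearby at impact time suggests a collision.
        return f"collision with {observation.nearby_objects[0]}"
    if observation.acceleration_g > 3.0:   # illustrative threshold
        # A large acceleration spike with nothing nearby suggests a fall.
        return "drop or fall"
    return "unknown"
```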
Abstract: An electronic device includes an image sensor configured to capture a target to generate first image data, and a processor configured to perform directional interpolation on a first area of the first image data to generate first partial image data, perform upscaling on a second area of the first image data to generate second partial image data, and combine the first partial image data and the second partial image data to generate second image data.
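The two-path structure above can be sketched as follows. The toy directional interpolation refines each interior pixel along the diagonal with the smaller intensity difference (the lower-gradient direction, as in edge-directed interpolation); the second path is a plain nearest-neighbor upscale; a mask selects between them. All three routines are illustrative, not the patented method.

```python
import numpy as np

def nn_upscale(img, f=2):
    """Plain upscale path: nearest-neighbor replication by factor f."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def directional_upscale(img, f=2):
    """Toy edge-directed 2x interpolation: start from nearest-neighbor,
    then re-estimate interior pixels by averaging along whichever
    diagonal has the smaller intensity difference."""
    out = nn_upscale(img, f).astype(float)
    h, w = out.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            d1 = abs(out[y - 1, x - 1] - out[y + 1, x + 1])  # "\" diagonal
            d2 = abs(out[y - 1, x + 1] - out[y + 1, x - 1])  # "/" diagonal
            if d1 <= d2:
                out[y, x] = (out[y - 1, x - 1] + out[y + 1, x + 1]) / 2
            else:
                out[y, x] = (out[y - 1, x + 1] + out[y + 1, x - 1]) / 2
    return out

def combine(first_partial, second_partial, mask):
    """Second image data: directional result where mask is True
    (e.g. edge regions), plain upscale elsewhere."""
    return np.where(mask, first_partial, second_partial)
```

In practice the first area would be detected edge regions, where directional interpolation avoids the jagged artifacts of plain upscaling, while smooth regions take the cheaper path.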
Abstract: In some implementations, a device may obtain a set of input images of an object of interest that includes images having a plain background. The device may obtain a set of background images that includes images associated with the object of interest. The device may generate, for an input image, a first modified image that removes the plain background of the input image. The device may generate, for the input image, a second modified image that is based on the first modified image and a background image. The device may generate, for the input image, a training image that includes an indication of a location of the object of interest depicted in the training image. The device may provide the training image to a training set, which includes a set of training images, for a computer vision model.
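The augmentation steps above can be sketched as: segment the object away from the plain background, composite it onto a new background, and record its location for the training annotation. The uniform-background threshold and bounding-box annotation format are assumptions for illustration.

```python
import numpy as np

def remove_plain_background(img, bg_value=255, tol=10):
    """First modified image: a foreground mask keeping pixels that
    differ from the plain background value (assumes the background
    is near-uniform, e.g. white)."""
    return np.abs(img.astype(int) - bg_value) > tol

def composite(img, mask, background):
    """Second modified image: paste the masked foreground onto a
    background image of the same shape."""
    out = background.copy()
    out[mask] = img[mask]
    return out

def bounding_box(mask):
    """Location annotation for the training image:
    (y_min, x_min, y_max, x_max) of the object of interest."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

Repeating this over many background images turns a small set of plain-background captures into a varied training set with labels obtained for free from the compositing step.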
Abstract: Techniques for segmenting sensor data are discussed herein. Data can be represented in individual levels in a multi-resolution voxel space. A first level can correspond to a first region of an environment and a second level can correspond to a second region of an environment that is a subset of the first region. In some examples, the levels can comprise a same number of voxels, such that the first level covers a large, low-resolution region, while the second level covers a smaller, higher-resolution region, though more levels are contemplated. Operations may include analyzing sensor data represented in the voxel space from a perspective, such as a top-down perspective. From this perspective, techniques may generate masks that represent objects in the voxel space. Additionally, techniques may generate segmentation data to verify and/or generate the masks, or otherwise cluster the sensor data.
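The multi-resolution layout above can be sketched with two levels sharing a voxel count: a coarse level spanning a large extent and a fine level spanning a subset of it, plus a top-down collapse of a voxel grid into a bird's-eye-view mask. The extents, voxel counts, and centering at the origin are illustrative assumptions.

```python
import numpy as np

class MultiResVoxelSpace:
    """Two illustrative levels with the same voxel count per axis:
    level 0 covers a large region at coarse resolution, level 1 a
    smaller subset of it at finer resolution."""
    def __init__(self, n=32):
        self.levels = [
            {"extent": 64.0, "n": n},   # 64 m cube -> 2 m voxels
            {"extent": 16.0, "n": n},   # 16 m subset -> 0.5 m voxels
        ]

    def voxelize(self, points, level):
        """Occupancy grid for one level; points are (N, 3) coordinates
        in meters, region assumed centered on the origin."""
        lv = self.levels[level]
        half, n = lv["extent"] / 2, lv["n"]
        idx = np.floor((points + half) / lv["extent"] * n).astype(int)
        keep = ((idx >= 0) & (idx < n)).all(axis=1)   # drop out-of-region points
        grid = np.zeros((n, n, n), dtype=bool)
        i = idx[keep]
        grid[i[:, 0], i[:, 1], i[:, 2]] = True
        return grid

def top_down_mask(grid):
    """Collapse the height axis to a bird's-eye-view occupancy mask,
    from which per-object masks could then be segmented or clustered."""
    return grid.any(axis=2)
```

A point near the center of the environment lands in both levels, but in level 1 it is localized four times more precisely for the same memory cost.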