Abstract: Embodiments are disclosed for a machine learning-based chroma keying process. The method may include receiving an input including an image depicting a chroma key scene and a color value corresponding to a background color of the image. The method may further include generating a preprocessed image by concatenating the image and the color value. The method may further include providing the preprocessed image to a trained neural network. The method may further include generating, using the trained neural network, an alpha matte representation of the image based on the preprocessed image.
Type:
Grant
Filed:
April 21, 2022
Date of Patent:
October 1, 2024
Assignee:
Adobe Inc.
Inventors:
Seoung Wug Oh, Joon-Young Lee, Brian Price, John G. Nelson, Wujun Wang, Adam Pikielny
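The preprocessing and inference steps this abstract describes (concatenating the image with its background color, then predicting an alpha matte) can be sketched as follows. The tiled color plane and the distance-based stand-in "network" are illustrative assumptions for the sketch, not the patented model.

```python
import numpy as np

def preprocess(image, bg_color):
    # image: (H, W, 3) floats in [0, 1]; bg_color: RGB of the keyed background.
    # Tile the color value into a full plane and concatenate channel-wise.
    h, w, _ = image.shape
    plane = np.broadcast_to(np.asarray(bg_color, dtype=float), (h, w, 3))
    return np.concatenate([image, plane], axis=-1)  # (H, W, 6) network input

def toy_matting_net(x):
    # Stand-in for the trained network: alpha from each pixel's color
    # distance to the keyed background (a heuristic, not the real model).
    rgb, bg = x[..., :3], x[..., 3:]
    dist = np.linalg.norm(rgb - bg, axis=-1)
    return np.clip(dist / max(dist.max(), 1e-8), 0.0, 1.0)  # (H, W) matte

# Green-screen frame with an orange square as the foreground subject.
frame = np.tile(np.array([0.0, 1.0, 0.0]), (4, 4, 1))
frame[1:3, 1:3] = [1.0, 0.5, 0.2]
alpha = toy_matting_net(preprocess(frame, bg_color=[0.0, 1.0, 0.0]))
print(alpha.shape)                      # (4, 4)
print(alpha[0, 0], alpha[1, 1] > 0.9)   # background -> 0, subject -> near 1
```

Passing the background color as an extra input plane is what lets a single trained network key against arbitrary screen colors rather than a hard-coded green.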
Abstract: A method and an apparatus for text effect processing, an electronic device, and a computer-readable storage medium are provided. The method comprises: sending a request to a server to acquire a text effect resource, the text effect resource being used to implement a display effect of text associated with multimedia; receiving the text effect resource sent by the server; and, on the basis of the text effect resource, performing color-separated effect processing on the text so that text of different color components is synchronously and dynamically displayed on a terminal screen, following the playback progress of the multimedia.
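The color-separated effect driven by playback progress can be sketched as a function from the media timeline to per-channel glyph offsets. The function name, the offset scheme, and the channel ordering are illustrative assumptions, not details from the patent.

```python
def color_separated_layers(progress, max_offset_px=4):
    """Return (channel, x_offset) pairs for one animation frame.

    progress: playback position in [0, 1]. Each color component of the
    text gets its own horizontal offset, so the separated channels drift
    apart and re-merge in sync with the multimedia playback.
    """
    offset = round(max_offset_px * progress)
    return [("red", -offset), ("green", 0), ("blue", +offset)]

print(color_separated_layers(0.0))  # [('red', 0), ('green', 0), ('blue', 0)]
print(color_separated_layers(0.5))  # [('red', -2), ('green', 0), ('blue', 2)]
```

Rendering each color component as its own layer is what makes the components independently positionable while staying synchronized to a single progress value.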
Abstract: In a method for processing microscope images, at least one microscope image is provided as an input image for an image processing algorithm. An output image is created from the input image by means of the image processing algorithm. Creating the output image comprises adding, to the input image, low-frequency components that represent the solidity of image structures of the input image, wherein the low-frequency components depend at least on high-frequency components of these image structures, and wherein high-frequency components are defined by a higher spatial frequency than low-frequency components. A corresponding computer program and microscope system are likewise described.
Type:
Grant
Filed:
May 11, 2021
Date of Patent:
February 14, 2023
Assignee:
Carl Zeiss Microscopy GmbH
Inventors:
Manuel Amthor, Daniel Haase, Markus Sticker
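The core idea of this abstract (deriving a low-frequency "solidity" component from the high-frequency content and adding it back to the image) can be sketched with a simple box blur as the low-pass filter. The blur kernel, the absolute-value coupling of low to high frequencies, and the strength parameter are assumptions of the sketch, not the patented algorithm.

```python
import numpy as np

def box_blur(img, k=3):
    # Simple separable box blur with edge padding (stand-in low-pass filter).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def add_solidity(img, strength=0.5):
    # High-frequency part = image minus its low-pass version.
    high = img - box_blur(img)
    # Low-frequency shading derived from the HF magnitude, then smoothed
    # so that the added component itself contains only low frequencies.
    low = box_blur(np.abs(high))
    return img + strength * low

# A hollow square: its outline is pure high-frequency edge content.
ring = np.zeros((8, 8)); ring[2:6, 2:6] = 1.0; ring[3:5, 3:5] = 0.0
out = add_solidity(ring)
print(out.shape)            # (8, 8)
print(out[3, 3] > ring[3, 3])  # interior gains brightness: looks "solid"
```

Because the added component is derived from edges but then low-passed, a flat image is left unchanged while outlined structures gain interior shading.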
Abstract: A multidimensional system for generating a multimedia search engine is provided. A computing device identifies a plurality of independently separable aspects of a multimedia file. The computing device provides at least one independently separable aspect of the plurality of independently separable aspects as input into an object detection model. The computing device receives, from the object detection model, an identification of at least one object and a corresponding level of confidence that the object is present in the multimedia file. The computing device classifies the object as either confident or not confident, based on whether the level of confidence meets a threshold level of confidence. The computing device generates a multimedia search engine based, at least in part, on the object and the classification.
Type:
Grant
Filed:
May 7, 2020
Date of Patent:
December 13, 2022
Assignee:
International Business Machines Corporation
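The classification and index-building steps of this abstract can be sketched as follows; the inverted-index structure, the threshold value, and all names are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def classify_detections(detections, threshold=0.8):
    # Split detected objects into "confident" and "not confident" buckets
    # based on whether each confidence meets the threshold.
    return {
        "confident": [d.label for d in detections if d.confidence >= threshold],
        "not_confident": [d.label for d in detections if d.confidence < threshold],
    }

def build_index(files_to_detections, threshold=0.8):
    # Hypothetical inverted index: confident object label -> media files,
    # i.e. the basis on which the search engine is generated.
    index = {}
    for filename, dets in files_to_detections.items():
        for label in classify_detections(dets, threshold)["confident"]:
            index.setdefault(label, []).append(filename)
    return index

index = build_index({
    "clip1.mp4": [Detection("dog", 0.93), Detection("ball", 0.41)],
    "clip2.mp4": [Detection("dog", 0.88)],
})
print(index)  # {'dog': ['clip1.mp4', 'clip2.mp4']}
```

Indexing only the confident detections is what keeps low-confidence noise out of search results while still retaining the not-confident bucket for other uses.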
Abstract: This disclosure relates generally to a method and system for tracking the motion of subjects in three-dimensional space. The method includes receiving a video of an environment from a scene capturing device positioned in the environment. A motion intensity of subjects is detected from the plurality of image frames in order to segregate the motion of the subjects present in each image frame into a plurality of categories. Further, a three-dimensional (3D) scene is constructed from the plurality of image frames using a multi-focused-view-based depth calculation technique. The subjects categorized under the significant motion category are tracked based on their position in the 3D scene. The proposed disclosure provides efficiency in tracking subjects newly entering the environment for adjusting the focus of an observer.
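The motion-intensity segregation step of this last abstract can be sketched with frame differencing. The mean-absolute-difference score, the two thresholds, and the category names are assumptions for illustration; the patent does not specify this particular scheme.

```python
import numpy as np

def motion_category(prev_frame, frame, minor=0.01, significant=0.1):
    # Mean absolute frame difference as a crude motion-intensity score,
    # segregated into categories; only "significant" subjects get tracked.
    intensity = float(np.mean(np.abs(frame - prev_frame)))
    if intensity >= significant:
        return "significant"
    return "minor" if intensity >= minor else "static"

a = np.zeros((8, 8))
b = a.copy(); b[0:4, 0:4] = 1.0   # large moving region: 16/64 of the frame
c = a.copy(); c[0, 0] = 1.0       # tiny change: 1/64 of the frame
print(motion_category(a, b))  # significant
print(motion_category(a, c))  # minor
print(motion_category(a, a))  # static
```

Filtering to the significant-motion category before 3D tracking keeps the expensive depth-based tracking focused on the subjects that actually moved.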