Abstract: This disclosure relates to a method for rendering images. First, a user request is received from a user interface to access an image effect renderer recipe, comprising conditional logic and non-visual image data, from an effect repository. Next, at least one image signal is received. Objects are identified within the image signal(s). The image effect renderer recipe is processed via an effect renderer recipe interpreter to generate image processing steps and image processing prioritizations. The image processing steps are then ordered in accordance with the image processing prioritizations. Next, an image processor applies the image processing steps to the identified objects of the image signal(s) to generate at least one processed image signal. The processed image signal(s) are then displayed on a display device.
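The recipe-interpretation flow in the abstract above (conditional logic expanded into prioritized steps, ordered, then applied) can be sketched roughly as follows. This is an illustrative reading only; all names (`interpret_recipe`, `apply_steps`, the `rules` dictionary layout) are hypothetical, not from the disclosure.

```python
def interpret_recipe(recipe, objects):
    """Expand a recipe's conditional rules into an ordered list of
    image processing steps, per the recipe's prioritizations."""
    steps = []
    for rule in recipe["rules"]:
        # conditional logic: include a step only if its condition
        # holds for the objects identified in the image signal
        if rule["condition"](objects):
            steps.append((rule["priority"], rule["step"]))
    # order steps by prioritization (lower number applied first)
    steps.sort(key=lambda pair: pair[0])
    return [step for _, step in steps]

def apply_steps(image, objects, steps):
    """Apply the ordered steps to produce the processed image signal."""
    for step in steps:
        image = step(image, objects)
    return image

# Toy usage: image modeled as a list of pixel intensities.
recipe = {"rules": [
    {"condition": lambda objs: "face" in objs, "priority": 2,
     "step": lambda img, objs: [min(255, p + 10) for p in img]},  # brighten
    {"condition": lambda objs: True, "priority": 1,
     "step": lambda img, objs: [p // 2 for p in img]},            # darken
]}
steps = interpret_recipe(recipe, ["face"])
apply_steps([100, 200], ["face"], steps)  # darken first, then brighten: [60, 110]
```

The point of the ordering pass is that the recipe author can state rules in any order; the prioritizations, not the textual order, decide how steps compose.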
Abstract: A method of applying an image effect based on recognized objects involves capturing an imaging area comprising at least one object as an image stream through operation of an image sensor. The method recognizes the at least one object in the image stream through operation of an object detection engine. The method communicates at least one correlated image effect control to an image processing engine, in response to the at least one object comprising an optical label. The method communicates at least one matched image effect control to the image processing engine, in response to receiving at least a labeled image stream at an image effect matching algorithm from the object detection engine. The method generates a transformed image stream displayable through a display device by applying at least one image effect control to the image stream through operation of the image processing engine.
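The two control paths in this abstract (a correlated control when an object carries an optical label, a matched control otherwise) can be sketched as below. All identifiers here (`route_effect_controls`, `transform_stream`, the dictionary fields) are illustrative assumptions, not terms defined by the disclosure.

```python
def route_effect_controls(detected_objects, label_table, matcher):
    """Choose an effect control for each recognized object.

    detected_objects: dicts with a "class" key and an optional
                      "optical_label" key (e.g., a decoded QR payload).
    label_table:      maps optical-label payloads to correlated controls.
    matcher:          stands in for the image effect matching algorithm
                      that picks a control from the labeled image stream.
    """
    controls = []
    for obj in detected_objects:
        label = obj.get("optical_label")
        if label is not None:
            # object comprises an optical label: use the correlated control
            controls.append(label_table[label])
        else:
            # otherwise let the matching algorithm select a control
            controls.append(matcher(obj["class"]))
    return controls

def transform_stream(frame, controls):
    """Apply each effect control in turn to yield the transformed frame."""
    for control in controls:
        frame = control(frame)
    return frame

# Toy usage: a frame is modeled as the list of effect tags applied to it.
sepia = lambda f: f + ["sepia"]
blur = lambda f: f + ["blur"]
controls = route_effect_controls(
    [{"class": "sign", "optical_label": "QR:sepia"}, {"class": "face"}],
    {"QR:sepia": sepia},
    lambda cls: blur)
transform_stream([], controls)  # → ["sepia", "blur"]
```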
Abstract: A process for operating a machine guided photo and video composition system involves generating processed image data. The process operates an object detection engine to identify objects and object locations in the processed image data. The process operates a computer vision analysis engine to identify geometric attributes of objects. The process operates an image cropping engine to select potential cropped image locations within the processed image data. The image cropping engine generates crop location scores for each of the potential cropped image locations and determines the highest scored cropped image location. The image cropping engine communicates the highest crop location score to a score evaluator gate. The process generates object classifications from the object locations and the geometric attributes. The process receives device instructions at a user interface controller by way of the score evaluator gate. The process displays device positioning instructions through a display device.
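The crop-scoring and gating steps above can be sketched as follows. This is a hypothetical reading: the function names, the score-based threshold, and the instruction text are all assumptions made for illustration, not details stated in the disclosure.

```python
def best_crop(candidates, score_fn):
    """Score each potential cropped image location and return the pair
    (highest score, highest scored crop location)."""
    scored = [(score_fn(c), c) for c in candidates]
    return max(scored, key=lambda pair: pair[0])

def score_evaluator_gate(best_score, threshold):
    """Gate between the cropping engine and the user interface controller:
    pass a device positioning instruction through only while the best
    available crop still falls below the acceptance threshold."""
    if best_score < threshold:
        return "Reposition the device to improve composition"
    return None  # composition acceptable; no instruction forwarded

# Toy usage: candidate crop windows with precomputed scores.
candidates = [{"x": 0, "score": 0.4}, {"x": 1, "score": 0.9}]
score, crop = best_crop(candidates, lambda c: c["score"])
score_evaluator_gate(score, threshold=0.8)  # → None (no instruction needed)
```

One plausible design rationale for the gate is to keep the user interface quiet once a good enough crop exists, so positioning instructions appear only while the composition can still be meaningfully improved.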