Abstract: A method and system for high-performance, real-time adjustment of colors and patterns on one or more pre-selected elements in a playing video, interactive 360° content, or other image, using graphics processing unit ("GPU") shader code to process the asset per-pixel and blend in a target color or pattern based on prepared masks and metadata lookup tables encoded visually in the video file. The method and system can generate asset-specific, optimized GPU code from templates. Pixels are blended into the target asset using the source image for shadows and highlights, one or more masks, and various metadata lookup tables, such as a texture lookup table for changing or adding patterns, z-depth for displacing parts of the image, or normals for calculating light reflection and refraction.
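To make the per-pixel blending idea concrete, the following is a minimal sketch of masked recoloring on the GPU, written as a CUDA kernel rather than the shader code the abstract refers to. All names (recolorMasked), the RGBA byte layout, the luminance weights, and the linear mask blend are assumptions for illustration, not details taken from the patent; the source pixel's luminance stands in for the shadows and highlights that modulate the target color inside the masked region.

```cuda
// Hypothetical sketch: blend a target color into mask-selected pixels,
// preserving the source image's shadows and highlights via its luminance.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void recolorMasked(const uchar4* src, const unsigned char* mask,
                              uchar4* dst, int width, int height,
                              float3 targetColor)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    uchar4 s = src[i];

    // Source luminance carries the shading (shadows and highlights).
    float lum = (0.299f * s.x + 0.587f * s.y + 0.114f * s.z) / 255.0f;

    // Mask value controls how strongly the target color is blended in.
    float m = mask[i] / 255.0f;

    float r = (1.0f - m) * s.x + m * targetColor.x * lum * 255.0f;
    float g = (1.0f - m) * s.y + m * targetColor.y * lum * 255.0f;
    float b = (1.0f - m) * s.z + m * targetColor.z * lum * 255.0f;

    dst[i] = make_uchar4((unsigned char)r, (unsigned char)g,
                         (unsigned char)b, s.w);
}

int main()
{
    const int width = 256, height = 256;
    const size_t n = (size_t)width * height;

    // Synthetic inputs: a grey frame and a fully selected mask.
    std::vector<uchar4> hostSrc(n, make_uchar4(128, 128, 128, 255));
    std::vector<unsigned char> hostMask(n, 255);
    std::vector<uchar4> hostDst(n);

    uchar4* dSrc; unsigned char* dMask; uchar4* dDst;
    cudaMalloc(&dSrc, n * sizeof(uchar4));
    cudaMalloc(&dMask, n);
    cudaMalloc(&dDst, n * sizeof(uchar4));
    cudaMemcpy(dSrc, hostSrc.data(), n * sizeof(uchar4), cudaMemcpyHostToDevice);
    cudaMemcpy(dMask, hostMask.data(), n, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    recolorMasked<<<grid, block>>>(dSrc, dMask, dDst, width, height,
                                   make_float3(0.8f, 0.2f, 0.2f)); // reddish target
    cudaMemcpy(hostDst.data(), dDst, n * sizeof(uchar4), cudaMemcpyDeviceToHost);

    printf("pixel(0,0) = %d %d %d\n", hostDst[0].x, hostDst[0].y, hostDst[0].z);

    cudaFree(dSrc); cudaFree(dMask); cudaFree(dDst);
    return 0;
}
```

The same per-pixel structure could be extended with additional lookups (a texture lookup table for patterns, z-depth for displacement, normals for lighting), each read inside the kernel alongside the mask; the sketch above shows only the simplest case of a single mask and a solid target color.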