Abstract: Embodiments relate to pixel conversion of images for display. A circuit converts input pixel values of an image using a color conversion function. The circuit is operable in different modes where each mode uses a different color conversion function. A lookup table memory circuit stores a mapping of color converted values and input pixel values according to the mode of operation where the mapping represents the color conversion function associated with the mode. The circuit produces a color converted value from the lookup table as a color converted version of a first input pixel value responsive to the first input pixel value being within a first range. The circuit may also produce a color converted version of a second input pixel value by interpolating a subset of the color converted values received from the lookup table responsive to the second input pixel value being within a second range.
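The two-range lookup scheme described above can be sketched in Python. The dense/sparse split point, the sample spacing, and the gamma-style conversion function below are illustrative assumptions, not the patent's actual parameters:

```python
def build_lut(fn):
    # Sample the color conversion function densely for low inputs and
    # sparsely for high inputs (the split at 64 is an assumed example).
    points = list(range(0, 65)) + list(range(80, 256, 16)) + [255]
    return {x: fn(x) for x in points}

def convert(x, lut, dense_max=64):
    # First range: every input value has a stored converted value.
    if x <= dense_max:
        return lut[x]
    # Second range: linearly interpolate between the two nearest entries.
    lo = max(p for p in lut if p <= x)
    hi = min(p for p in lut if p >= x)
    if lo == hi:
        return lut[lo]
    t = (x - lo) / (hi - lo)
    return lut[lo] + t * (lut[hi] - lut[lo])
```

Switching modes would then amount to rebuilding (or re-selecting) the table for a different conversion function, while the lookup-or-interpolate path stays unchanged.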
Abstract: An image processing apparatus of an embodiment includes an image processing unit to convert a color image to a monochrome image. The color image comprises pixels with a plurality of color components. The image processing unit is configured to generate a histogram from the monochrome image showing a color intensity gradation in the monochrome image by pixel frequency. A processor is configured to obtain a first threshold value based on the histogram, determine for each color component of each pixel in the color image whether or not each color component of the pixel is light based on the first threshold value, and generate a corrected color image by removing a background coloring from the color image by correcting each pixel for which all the color components are determined to be light.
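A minimal sketch of this histogram-driven background removal, assuming the histogram peak marks the dominant background tone and a fixed offset below it serves as the first threshold (both are assumptions; the patent only says the threshold is obtained from the histogram):

```python
def remove_background(pixels, threshold_offset=30):
    # pixels: list of (r, g, b) tuples.
    # 1. Convert to monochrome (simple channel average as the luminance proxy).
    mono = [sum(p) // 3 for p in pixels]
    # 2. Build a 256-bin histogram of monochrome intensities by pixel frequency.
    hist = [0] * 256
    for v in mono:
        hist[v] += 1
    # 3. Take the histogram peak as the background tone; the first threshold
    #    is an assumed fixed offset below it.
    peak = max(range(256), key=lambda i: hist[i])
    threshold = peak - threshold_offset
    # 4. A pixel is corrected only when ALL of its color components are
    #    "light" (above the threshold); correction here pushes it to white.
    out = []
    for p in pixels:
        if all(c > threshold for c in p):
            out.append((255, 255, 255))
        else:
            out.append(p)
    return out
```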
Abstract: Provided are an image transform method and an image transform network. The method is for the image transform network including an image generator, a transform discriminator, and a focus discriminator, and includes: generating a transformed image according to an un-transformed image and focus information by the image generator; computing a transform discrimination value according to the transformed image by the transform discriminator; computing a value of a first generator loss function according to the transform discrimination value and updating the image generator according to the value of the first generator loss function by the image generator; generating a focus discrimination value according to the un-transformed image, the transformed image, and the focus information by the focus discriminator; and computing a value of a second generator loss function according to the focus discrimination value and updating the image generator according to the value of the second generator loss function by the image generator.
Type: Grant
Filed: March 3, 2020
Date of Patent: February 1, 2022
Assignee: Industrial Technology Research Institute
Abstract: An image processing apparatus comprises a reading unit configured to read an image of an original; a correction unit configured to correct, based on variance values acquired for a pixel, the pixel in a region where a show-through has occurred, in the image read by the reading unit; a designation unit configured to designate a chromatic color; and a conversion unit configured to convert a pixel included in an image including the pixel corrected by the correction unit into one of a chromatic color and an achromatic color designated by the designation unit.
Abstract: The present disclosure relates generally to signal encoding for elements within PDF files. One implementation encodes an artwork element under different encoding conditions, and selects a winner version based on resulting signal robustness and/or visibility. Other implementations generate PDF layer masks to help determine overall embedding robustness, including interference from layered elements. Other implementations are provided too.
Type: Grant
Filed: March 12, 2021
Date of Patent: January 25, 2022
Assignee: Digimarc Corporation
Inventors: Trent J. Brundage, Jerry Allen McMahan, Jr.
Abstract: A method includes providing attributes of a manufacturing process and an image of a product associated with the manufacturing process to a trained machine learning model. The method further includes obtaining, from the trained machine learning model, predictive data. The method further includes determining, based on the predictive data, image measurements of the image of the product associated with the manufacturing process. Manufacturing parameters of the manufacturing process are to be updated based on the image measurements.
Type: Grant
Filed: March 29, 2021
Date of Patent: January 18, 2022
Assignee: APPLIED MATERIALS, INC.
Inventors: Abhinav Kumar, Benjamin Schwarz, Charles Hardy
Abstract: A method for generating an image processing filter includes: adjusting; and extracting. The adjusting inputs first training image data into a neural network to generate output image data, calculates an evaluation value based on a loss function using the output image data and second training image data, and adjusts a convolution filter so as to reduce the evaluation value. The extracting extracts data from the adjusted convolution filter as data for the image processing filter. A first training image includes noise and reproduces a test pattern. A second training image includes reduced noise and reproduces the test pattern. The loss function includes a first term and a second term. The first term specifies a magnitude of a difference between the output image data and the second training image data. The second term grows smaller as symmetry of the convolution filter relative to a filter axis of symmetry increases.
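The two-term loss can be written out directly. The sketch below assumes a 1-D flattened image, mean squared error for the first term, and left-right mirror symmetry about a vertical filter axis for the second term; the weighting `lam` is an assumed hyperparameter, and the patent does not fix any of these choices:

```python
def filter_loss(output, target, kernel, lam=0.1):
    # First term: magnitude of the difference between the output image data
    # and the second (low-noise) training image data, as a mean squared error.
    n = len(output)
    mse = sum((o - t) ** 2 for o, t in zip(output, target)) / n
    # Second term: grows smaller as the kernel becomes more symmetric about
    # a vertical axis (one possible "filter axis of symmetry").
    asym = 0.0
    for row in kernel:
        for j in range(len(row) // 2):
            asym += (row[j] - row[-1 - j]) ** 2
    return mse + lam * asym
```

Training would adjust the kernel to reduce this value, so the penalty nudges the learned convolution filter toward a symmetric shape without forcing it.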
Abstract: A color gamut mapping method and apparatus, includes obtaining a to-be-processed image, obtaining lightness and chroma information of a target color gamut, and mapping a lightness value and a chroma value of a pixel in the to-be-processed image to obtain a processed image corresponding to the target color gamut, where a pixel in the processed image has a mapped lightness value and a mapped chroma value, and the mapped lightness value and the mapped chroma value are obtained by mapping the lightness value and the chroma value of the pixel in a color gamut of the to-be-processed image to the target color gamut using a point with a minimum lightness value in the target color gamut as a mapping end point.
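The mapping toward a fixed end point can be sketched as a line search in a lightness/chroma plane. The step-wise search, the callable gamut test, and chroma reaching zero at the end point are all simplifying assumptions:

```python
def map_toward_endpoint(L, C, in_gamut, L_end, steps=100):
    # Move the (lightness, chroma) pair along the straight line toward the
    # mapping end point (L_end, 0) -- the minimum-lightness point of the
    # target gamut -- until the pair falls inside the target gamut.
    for i in range(steps + 1):
        t = i / steps
        Lm = L + t * (L_end - L)
        Cm = C * (1 - t)
        if in_gamut(Lm, Cm):
            return Lm, Cm
    return L_end, 0.0
```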
Abstract: In some aspects, there is provided a method for determining a rotational orientation of an object in an image. The image depicts the object in a scene. The method includes providing images depicting the object in the scene to a trained statistical model. The images depict the scene of the image at different rotation angles. A rotation angle of a respective image corresponds to a potential rotational orientation of the object depicted in the respective image. The method further includes, in response to the providing, receiving, for each image, a confidence score indicating a likelihood generated by the trained statistical model that the object is at the potential rotational orientation corresponding to the rotation angle of the respective image. The method further includes determining the rotational orientation of the object in the image based at least in part on an analysis of the confidence scores and respective potential rotational orientations.
Type: Grant
Filed: March 19, 2020
Date of Patent: December 28, 2021
Assignee: Ursa Space Systems Inc.
Inventors: Poppy Gene Immel, Daniela Irina Moody, Meera Ann Desai
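The Ursa Space abstract's analysis of confidence scores can be reduced, in the simplest case, to an argmax over candidate rotation angles. In this toy sketch the "model" and the rotated views are stand-ins (the image is represented by its rotation angle), so only the selection logic is illustrated:

```python
def estimate_orientation(rotated_views, model):
    # rotated_views: {angle: image of the scene rotated by that angle}.
    # Score each candidate view with the trained statistical model.
    scores = {angle: model(img) for angle, img in rotated_views.items()}
    # The angle whose view the model is most confident about is taken as
    # the object's rotational orientation (a simple argmax analysis).
    return max(scores, key=scores.get)
```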
Abstract: An image processing method includes a first conversion step; a second conversion step that, using a second conversion table for converting input image data represented by a second color space to output image data represented by the second color space, converts image data of a second region, for which recording conditions differ from those of a first region, in second image data; and a recording data generating step that generates recording data based on third image data including the image data of the first region in the second image data and the image data of the second region after conversion using the second conversion table. When an ink amount is adjusted, it is adjusted in the first conversion step and not in the second conversion step.
Abstract: In a case where a detected abnormal pixel is a pixel other than a pixel located at an edge portion, an apparatus corrects, in the generated pieces of pixel data, the piece of pixel data of a first color in which the abnormal pixel is detected, without correcting the piece of pixel data of a second color in which no abnormal pixel is detected. In a case where the detected abnormal pixel is the pixel located at the edge portion, the apparatus corrects the pieces of pixel data of all colors at the position where the abnormal pixel is detected.
Abstract: The present disclosure relates generally to signal encoding for printed objects such as product packaging, labels and hangtags. One implementation obtains a color image representing CMY color channels, and alters the color image to include an encoded signal by altering values representing CIELAB a* and b*, all the while keeping L* on or within a predetermined tolerance of a contour representing a constant value. Other implementations are provided.
Abstract: An image processing apparatus includes a detection unit, a determination unit, a decision unit, and a processing unit. The detection unit detects an isolated pixel included in an image. The determination unit determines whether the isolated pixel detected by the detection unit changes brighter or darker by edge enhancement processing. The decision unit decides an edge enhancement amount of the isolated pixel based on a determination result of the determination unit. The processing unit performs the edge enhancement processing on the isolated pixel, based on the edge enhancement amount decided by the decision unit.
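The detection/determination/decision chain for isolated pixels can be sketched as one scalar rule. The isolation threshold, the base amount, and the comparison against the neighbor average are assumed details; the patent only fixes the four-unit structure:

```python
def edge_enhancement_amount(center, neighbors, base_amount=32, iso_gap=64):
    # Detection unit: treat the pixel as isolated when it differs from every
    # neighbor by more than iso_gap (an assumed threshold).
    if not all(abs(center - n) > iso_gap for n in neighbors):
        return 0
    # Determination unit: edge enhancement pushes the pixel away from its
    # surroundings, so a pixel above the neighbor average gets brighter and
    # one below it gets darker.
    avg = sum(neighbors) / len(neighbors)
    # Decision unit: clamp the amount so the enhanced value stays in [0, 255].
    if center > avg:
        return min(base_amount, 255 - center)
    return -min(base_amount, center)

def apply_enhancement(center, neighbors):
    # Processing unit: apply the decided edge enhancement amount.
    return center + edge_enhancement_amount(center, neighbors)
```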
Abstract: An integrated apparatus includes: an image processing apparatus; an Information Technology (IT) processing apparatus; and a common display operation panel, wherein the integrated apparatus: obtains a workflow that combines a job executed by the image processing apparatus and a job executed by the IT processing apparatus; launches an application and causes each of the image processing apparatus and the IT processing apparatus to execute the job indicated in the workflow; and switches, based on a determination criterion related to a function exhibited by the image processing apparatus, display by the display operation panel at a time when the job indicated in the workflow is executed by the image processing apparatus, to the first screen or the second screen.
Abstract: A method may include obtaining a first byte stream from first document code and a second byte stream from second document code. The first document code has a document type and the second document code has the document type. The method may further include identifying, in the first byte stream, nonvisual noise corresponding to a custom byte code defined in a custom character encoding set. The nonvisual noise is invisible when rendering the first document code. The method may further include replacing, in the first byte stream, the custom byte code with at least one standard byte code defined in a standard character encoding set to obtain modified document code. The second document code uses the standard character encoding set. The method may further include comparing the modified document code with the second document code by comparing the first byte stream with the second byte stream.
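The normalize-then-compare step above can be sketched directly on byte strings. The custom-to-standard mapping here (byte `0x01` rendered as a space) is a made-up example of a custom character encoding set, not one from the patent:

```python
def normalize(stream, custom_map):
    # Replace each custom byte code (nonvisual noise) with its standard
    # equivalent so the two documents can be compared byte-for-byte.
    out = bytearray()
    for b in stream:
        out.extend(custom_map.get(b, bytes([b])))
    return bytes(out)

def same_content(first, second, custom_map):
    # The second document already uses the standard character encoding set,
    # so only the first byte stream needs modification before comparison.
    return normalize(first, custom_map) == second
```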
Abstract: A system may include an imaging system coupled to a host subsystem. The imaging system may include an image sensor that provides image frames to the host subsystem. The image sensor may include a data authentication subsystem that appends corresponding authentication data to each of the image frames. Each set of authentication data may be generated based on a subset of the image frame data (e.g., corresponding to image data generated by pixels defined by a sparse region-of-interest within the pixel array). The host subsystem may securely provide region-of-interest parameters to the image sensor to update the sparse region-of-interest in an adaptive manner to account for factors such as computational load of the host subsystem and authentication coverage for the entire pixel array.
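The per-frame authentication over a sparse region of interest can be sketched with an HMAC over only the selected pixel bytes. Representing the ROI as a list of byte offsets and using a shared key directly are simplifying assumptions:

```python
import hashlib
import hmac

def authenticate_frame(frame, roi, key):
    # frame: raw pixel bytes of one image frame.
    # roi: sparse region of interest, as byte offsets into the frame.
    # Authentication data covers only the subset of frame data in the ROI.
    subset = bytes(frame[i] for i in roi)
    return hmac.new(key, subset, hashlib.sha256).digest()
```

Because only ROI pixels are covered, the host can adaptively grow or shrink the ROI (trading authentication coverage against computational load) without changing the verification procedure.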
Abstract: An inspection device includes an image reader that generates a scan image, an inspector that inspects an inspection target image formed on a storage medium, by comparing a scan image with a reference image, a storage that stores the reference image, and a hardware processor that stores into the storage the reference image generated as the scan image by reading with the image reader an image formed on a recording medium by the image former on a basis of a second print job for generating the reference image, determines whether the reference image stored in the storage satisfies a predetermined condition when a data amount of the reference image stored in the storage becomes equal to or larger than a predetermined amount, and deletes the reference image determined to satisfy the predetermined condition from the storage.
Abstract: A color accuracy verification device includes: a hardware processor that: acquires a colorimetric value for each color patch of color accuracy verification charts generated by a plurality of printers; stores the colorimetric value in time series; sets a target printer to carry out color accuracy verification; and verifies color accuracy based on: the colorimetric value of the target printer stored in the hardware processor; and a verification reference value set in advance; and a display that displays a verification result by the hardware processor for the target printer.
Abstract: A granularity check chart is printed and granularity is evaluated for each area representing a combination of a color value of blue and a color value of cyan and magenta. Thereafter, an unusable region regarding combinations of a color value of blue and a color value of cyan and magenta is determined on the basis of evaluation results. Then, corresponding relationships among a color value of cyan and magenta before spot color separation, a color value of blue after the spot color separation, and a color value of cyan and magenta after the spot color separation are determined so that each combination of a color value of blue and a color value of cyan and magenta after the spot color separation is not included in the unusable region, and a spot color separation LUT is created so that the corresponding relationships are satisfied.
Abstract: The disclosure makes it possible to specify the color space of image data even in a case where color space identification information is not attached to the image data. An image processing system that handles image data of differing color spaces comprises a learning unit configured to perform machine learning by taking image data and its metadata as input data and color space identification information as supervised data. In the machine learning, a learning model is optimized so that the deviation between the color space predicted from the input data and the color space specified by the color space identification information is minimized.