Storage and processing service network for unrendered image data

A system operates on a computing resource that is accessible to a plurality of clients via a communication network. A storage service of the system operates on the computing resource and is configured to store unrendered image data corresponding to scenes. The unrendered image data may be, for example, raw image data or scene colorimetric data. A rendering service of the system also operates on the computing resource and is configured to process the unrendered image data to generate rendered images. In particular, the rendering service processes the unrendered image data responsive to requests from the clients communicated over the communication network, and the rendering processing is based on rendering parameters determined in accordance with the client requests. In addition, a printing service may be provided to transform the rendered images into a tangible form, in accordance with printing requests from the clients.

Description
TECHNICAL FIELD

[0001] The present invention relates to a service network for digital storage, retrieval, and processing of photographic image data and, more particularly, to a network of servers and services based on storage of unrendered scene data and on server-based processing to render the scenes for viewing, printing, and retrieval.

BACKGROUND

[0002] Photography is a complex art in which the final products—photographs—are produced (typically on paper) from photo-optical information about subject scenes, as sensed by photographic film with the aid of a camera. It has long been recognized, e.g. by L. A. Jones in the 1930s, that a rendered reproduction whose brightness or reflectance ratios objectively match the scene luminance ratios is not a very “good” photograph. Rather, it is desirable to “render” measured scene luminance to an artistically and psycho-visually preferred reproduction. In the field of black-and-white photography, this approach was developed to a high degree by Ansel Adams with his “zone system” of photography. A combination of techniques involving the camera, the scene lighting, the film, the film developing, and the film printing gives the photographer a great deal of control over how a scene is rendered to produce an aesthetically pleasing photographic reproduction. The same techniques, and more, are applied in color photography.

[0003] Even standard “one-hour” or “drugstore” photo processing labs, and the design of standard films and papers, incorporate years of experience of making rendering decisions in an attempt to give the user an acceptable reproduction for most typical situations. Modern digital cameras and scanners similarly incorporate automatic rendering intelligence in an attempt to deliver acceptable image files—at least in the case of typical scenes. These approaches take the rendering decisions out of the hands of the photographer and, therefore, are not universally acceptable to serious professional and amateur photographers. Furthermore, the image files thus produced lead to degraded images if the tone curve is subsequently modified, as one might do with an image editing program such as Adobe Photoshop. This degradation is due to re-quantization noise. The degradation is very bad if the file is stored using lossy image compression such as JPEG, but is still significant even for files stored as uncompressed RGB data such as 8-bit TIFF, especially if the initial rendering has caused clipping of highlights, shadows, or colors.
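As a minimal illustration of the re-quantization effect described above, consider applying a tone curve to already-rendered 8-bit data and re-quantizing the result; the sketch below (Python with NumPy, with a simple gamma adjustment standing in for an arbitrary tone-curve edit) shows how output levels are lost:

```python
import numpy as np

# All 256 possible 8-bit values, standing in for an already-rendered image file.
levels = np.arange(256, dtype=np.uint8)

# Apply a tone-curve adjustment (a gamma change here) and re-quantize back
# to 8 bits, as an image editing program must do when saving.
adjusted = np.clip(255.0 * (levels / 255.0) ** 0.5, 0, 255).astype(np.uint8)

# Many input levels collapse onto the same output level, while other output
# levels are never produced at all -- visible as banding or posterization.
print("distinct output levels:", len(np.unique(adjusted)))  # fewer than 256
```

The same edit applied to unrendered scene data of higher precision would not discard levels in this way, which is the motivation for deferring rendering.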

[0004] Networked storage, retrieval, processing, and printing services related to image data are known in the art. Conventionally, images uploaded to such centralized services are generally regarded as “rendered images”, such as are typically generated locally by image acquisition devices such as digital cameras or by film scanners; that is, the uploaded digital image data are representations of an intended reproduction, such as a print or a screen display. Adjustments to the rendered images may be supported within the service, but rendering decisions that have already been committed in the image acquisition device limit the range of adjustments possible, and limit the quality of reproductions that differ from the original rendering (since information of the original scene has been eliminated).

[0005] Digital cameras that save raw (unrendered) scene data are also known in the art. By saving raw scene data, instead of processed rendered images, flexibility is retained to generate high-quality rendered images later, after interactive specification of rendering parameters such as tone curves, sharpening, and color adjustments. Conventionally, software to perform such interactive specification of rendering parameters, and rendering from the raw scene data according to the specified rendering parameters, is typically provided in a unique form by each camera manufacturer that offers a “raw data file” option. Such raw data files are not widely readable or interpretable, and therefore are not generally compatible with networked services.

[0006] For example, professional digital cameras such as the Foveon II camera manufactured and sold by Foveon, Inc. of Santa Clara, Calif., have the capability to save raw data files locally (e.g., on an attached computer). A corresponding processing program, such as FoveonLab for the Foveon II camera, allows the user to control the rendering interactively. Since the program runs on the user's computer, that computer needs direct access to the raw data, thereby requiring that the user either manage the raw file storage locally, or requiring that the data be accessed from a storage server at a reasonable data rate to support the interaction.

[0007] In addition, after the user renders the image to an output file of the desired artistic style and quality level, the rendered image is typically sent again over the network to a print server, at a remote site, to obtain a high-quality print. The user may also manage the mapping, or re-rendering, of data representing his desired reproduction into a specialized colorspace for the target printer, and therefore may need to manage several different output files to target different printers. The proliferation of raw and rendered files for the user to create, store, view, manage, and send to others creates an undesired complexity.

[0008] What is desired is a system to flexibly and efficiently manage quality image rendering from raw image data.

SUMMARY

[0009] The invention includes a system operating on a computing resource that is accessible to a plurality of clients via a communication network. A storage service of the system operates on the computing resource and is configured to store unrendered image data corresponding to scenes. The unrendered image data may be, for example, raw image data or scene colorimetric data. A rendering service of the system also operates on the computing resource and is configured to process the unrendered image data to generate rendered images. In particular, the rendering service processes the unrendered image data responsive to requests from the clients communicated over the communication network, and the rendering processing is based on rendering parameters determined in accordance with the client requests.

[0010] In addition, a printing service may be provided to transform the rendered images into a tangible form, in accordance with printing requests from the clients.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a block diagram illustrating a centralized service network to store and process raw image data according to an embodiment of the present invention.

[0012] FIG. 2 is a block diagram illustrating a rendering process flow utilized in the rendering service of the FIG. 1 service network according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0013] The following description is illustrative only and not in any way limiting. It is intended that the scope of the invention be determined by the claims.

[0014] Before describing specific embodiments in accordance with the invention, it is useful to discuss data formats for unrendered image data. Unrendered image data typically falls into one of two rather distinct classifications: raw image data and scene colorimetric data. Raw image data refers to a representation of a scene as obtained by the sensor system of an image capture device such as a camera. By contrast, scene colorimetric data refers to a representation of a scene by which raw image data representing the scene has been converted into a standardized colorimetric space, in which each picture element (pixel) has an n-tuple (usually a triple) of values that has a defined one-to-one relationship with an XYZ triple representing an absolute color.

[0015] Several organizations have proposed imaging architectures based on scene colorimetric data, and have proposed specific colorspace representations for that purpose. For example, the standards committee ISO/TC42/WG18 has proposed (in WD4 of draft standard ISO 17321) a colorspace called ISO RGB to represent all XYZ triples as RGB triples, using the same primary chromaticities as the sRGB standard or the ITU-R BT.709 standard. The RGB values are allowed to be negative in order to allow a wide gamut of colors to be represented. As another example, Kodak has proposed a wide-gamut RGB space called RIMM for unrendered scene data, as a complement for a space called ROMM for rendered reproduction data; these colorspaces use nonnegative RGB values.
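The wide-gamut property of such scene colorspaces can be sketched using the published XYZ-to-linear-RGB matrix for the sRGB / ITU-R BT.709 primaries: an out-of-gamut color simply yields a negative component, which a scene colorspace such as the proposed ISO RGB retains rather than clips:

```python
import numpy as np

# XYZ -> linear RGB matrix for the sRGB / ITU-R BT.709 primaries (D65).
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# A saturated cyan-ish color that lies outside the sRGB display gamut.
xyz = np.array([0.2, 0.4, 0.5])
rgb = XYZ_TO_RGB @ xyz

# A scene colorspace keeps the negative component instead of clipping it,
# so no gamut information is lost before rendering.
print(rgb)  # the red component comes out negative for this color
```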

[0016] Conversion of raw image data to scene colorimetric data is typically complex, except possibly in the case where the color channel sensitivity curves of the sensor of the image capture device meet the Luther condition, i.e., are non-degenerate linear combinations of the CIE standard observer spectral curves. If the Luther condition is met, and if a sensor measures all three colors at each image pixel location, then the raw image data is already scene colorimetric data; and such data can be converted to another (standard) colorimetric space easily by multiplying each linearized RGB triple by a well-defined 3×3 matrix.

[0017] Most cameras do not come very close to meeting the Luther condition; typically a scene colorimetric XYZ triple or equivalent is imputed to each image pixel location by a 3×3 matrix multiply as noted above. However, the optimal matrix to use is not well defined, and depends on the illuminant, the spectra of colored materials in the scene, and preferences as to the pattern of color errors that is acceptable.
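The kind of matrix imputation described above can be sketched as a least-squares fit of the camera's channel sensitivities to the observer curves; the spectral data below are random stand-ins for real camera sensitivity and CIE observer curves:

```python
import numpy as np

# Hypothetical spectral data sampled at N wavelengths: rows are the three
# camera channel sensitivity curves and the three CIE observer curves.
rng = np.random.default_rng(0)
N = 31                               # e.g. 400-700 nm at 10 nm steps
cam = rng.random((3, N))             # stand-in camera sensitivities
cie = rng.random((3, N))             # stand-in CIE x-bar, y-bar, z-bar curves

# Least-squares 3x3 matrix M minimizing ||M @ cam - cie||, found by solving
# cam.T @ M.T ~= cie.T column by column.
Mt, residuals, _, _ = np.linalg.lstsq(cam.T, cie.T, rcond=None)
M = Mt.T

# If the camera met the Luther condition exactly, the residual would be zero.
# In practice it is not, and a weighted or illuminant-specific fit may be
# preferred instead -- which is why the "optimal" matrix is not well defined.
xyz_imputed = M @ cam                # imputed observer responses per wavelength
```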

[0018] Maximum information about a scene is preserved by storing raw image data along with a description of the spectral sensitivity curves associated with the image capture device and any descriptive information about the scene that is available, such as illuminant type and subject type (e.g., portrait, rockscape, or flower scene). Approximate conversion to a scene colorimetric space inevitably results in loss of some information. Therefore, it is desirable to save and process the raw image data. In some embodiments, scene colorimetric data is saved, or both raw image data and scene colorimetric data may be saved.

[0019] Raw image data is processed based on the type of image capture device that generates it—both the general type and the characteristics of the specific image capture device. Different device types may employ different file formats and representations, and different specific devices may have slightly different spectral sensitivity curves and other characteristics for which calibration data may be obtainable.

[0020] In many digital cameras, color measurements are made through a filter mosaic, with one color filter at each location in a sensor array. Before the raw data is converted to imputed colorimetric values, each location in the array is given a value for the two out of three color channels that were not measured there. This process is known as de-mosaicing or interpolation. Scene colorimetric data for an image generated by such a camera therefore has three times as many data values as the original raw image data from which it is derived, increasing the amount of space required to store it (or requiring complex compression processes to reduce the amount of storage space required).
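A minimal bilinear de-mosaicing sketch for an assumed RGGB mosaic pattern (real cameras use considerably more sophisticated, often proprietary, algorithms) illustrates the tripling of data values:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Minimal bilinear de-mosaic sketch assuming an RGGB Bayer mosaic.

    Each output pixel keeps its measured channel and gets the two
    unmeasured channels imputed from the average of measured neighbors,
    tripling the number of data values.
    """
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    # Boolean masks recording which channel was measured at each location.
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True            # R
    masks[0::2, 1::2, 1] = True            # G
    masks[1::2, 0::2, 1] = True            # G
    masks[1::2, 1::2, 2] = True            # B
    for c in range(3):
        vals = np.where(masks[..., c], raw, 0.0)
        cnt = masks[..., c].astype(float)
        # 3x3 box sum via padding, normalized by the number of measured
        # contributors in each neighborhood.
        vp = np.pad(vals, 1)
        cp = np.pad(cnt, 1)
        s = sum(vp[i:i + h, j:j + w] for i in range(3) for j in range(3))
        n = sum(cp[i:i + h, j:j + w] for i in range(3) for j in range(3))
        out[..., c] = s / np.maximum(n, 1)
        out[..., c][masks[..., c]] = raw[masks[..., c]]  # keep measured values
    return out

rgb = demosaic_bilinear(np.arange(16, dtype=float).reshape(4, 4))
print(rgb.shape)  # (4, 4, 3): three times as many values as the raw array
```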

[0021] The de-mosaicing and other preprocessing operations for a digital camera of a particular type have many variants, including some that utilize proprietary algorithms of the camera companies themselves. Conventionally, such companies typically provide such algorithms either embedded within processing in the camera, or in computer programs compatible with Windows or Macintosh computers, or both (other platforms are usually left with no support for using raw image files from such cameras). In some cases, the algorithms are provided as part of plug-in modules for widely used software packages such as web browsers or image editing programs. These distribution approaches make it difficult for raw image files to be widely accepted or usable, since it is difficult for a large community of users to keep up with all the updates and plug-in software packages needed to process a variety of raw image data.

[0022] Other digital cameras provide raw image data in the form of “full-measured color,” including a measured red, green, and blue sensor channel reading at each and every pixel location in an array. For example, the Foveon II camera separates an optical image through a color separating beam-splitter prism assembly and captures three full arrays of optical intensity values to make up a full-measured RGB raw data file. The resulting raw data files are large, but provide correspondingly higher image quality when processed and rendered. Such files naturally require different pre-processing operations than files from color filter mosaic cameras. Future cameras using solid-state sensors that measure three color channels at each location, using techniques such as disclosed in U.S. Pat. No. 5,965,875, may also produce full-measured color raw image data files.

[0023] Now that data formats for unrendered image data have been discussed, attention is turned to describing an embodiment of the invention. Broadly speaking, in accordance with the present invention, the storing and processing associated with raw image data is centralized in a widely accessible networked service. Individual users do not need to update any software to make use of a new camera type or file format. Only the centralized service needs to update the centralized processing (by, for example, obtaining software updates from the camera manufacturers), and camera manufacturers need not support multiple computing platforms for storing and processing image data.

[0024] Referring now to FIG. 1, a service network 15 according to an embodiment of the present invention comprises a storage service 12, a rendering service 14 and a printing service 16.

[0025] The storage service 12 stores raw image data submitted by a client 18 via the communication network 20 as indicated by connection path 26. (It is to be understood that the connection path 26, like the other “connection paths” shown in FIG. 1, need not be a permanent connection and may be, for example, an ephemeral connection such as may occur over the internet.) In some embodiments, scene colorimetric data is also stored by the storage service 12. The client 18 can also retrieve files via connection path 26. The client 18 may be, for example, a standalone computing device such as a desktop computer, or it may be, for example, an image capture device capable of connecting directly to the communication network 20 for communication with the service network 15.

[0026] The rendering service 14 is in communication with the storage service 12 to receive raw image data from the storage service 12 via connection path 22 in response to requests submitted by the client 18 via the communication network 20 as indicated by connection path 28. The connection 22 (and the connection 24, discussed later) is shown as being via the communication network 20, but may be by other means, such as a communication network other than the communication network 20, or by no communication network at all, for example, if several of the services run on the same computer. The rendering service 14 processes the raw image data received from the storage service 12 to generate a corresponding rendered image based on rendering parameters. The client 18 may be in communication with the rendering service 14 via the connection path 28 to preview rendered images and interactively adjust and review rendering parameters.

[0027] Client 18 may also communicate with the printing service 16 via the communications network 20 as indicated by connection path 30 to request that the rendered images generated by the rendering service 14 be printed or otherwise embodied on a tangible medium (e.g., “burned” onto a compact disc).

[0028] Other embodiments may employ different paths than illustrated in FIG. 1, and services of the service network 15 may be implemented on a computing resource such as one or more communicating processes, on one or more computers or other processing devices. Typically, such communication is via a communications network, such as a local-area network, the internet, the public switched telephone network, or a combination of such communications networks.

[0029] Raw image data stored by the storage service 12 preferably includes sufficient identifying information and characterization information for the servers to read and interpret the data and to find the appropriate preprocessing routines and information regarding conversion to scene colorimetric data. Such conversion information may include a calibration matrix, or several matrices for different illuminants, or spectral sensitivity curves from which such matrices may be computed according to some specified color error preferences. The storage service 12 accepts raw image data from a variety of sources, such as the raw image file formats of different manufacturers or of different camera families. Each such source may have different file formats and different kinds of additional characterization, calibration, and scene information associated with it. For each different raw image data type, corresponding preprocessing routines of the storage service 12 convert raw image data of that type to imputed scene colorimetric data for further rendering.

[0030] Referring now to FIG. 2, a particular embodiment of the rendering service 14 is described. As shown in FIG. 2, the rendering service 14 is implemented as a pipelined sequence of image processing steps. Some or all of the steps shown in FIG. 2, each including operations on all the pixel values, are included. Turning now to the specific steps, Bad Pixel Replacement 32 replaces known bad pixel values with reasonable computed values (e.g., based on a neighborhood average). Noise Abatement 34 is a collection of nonlinear filtering operations such as despeckling, dequantization, and chroma blur to incorporate known information about typical image statistics and about typical noise statistics of the image capture device from which the raw image data was generated, and to remove some amount of noise-like structure, possibly with user adjustment of tradeoffs. Color Matrix 36 is a colorspace conversion operation that may include not just a colorimetric transformation to a different space, but also user preferred saturation, white balance, and other color adjustments, including an overall exposure compensation. Tone Curves 38 is a nonlinear mapping of the lightness scale to give user preferred contrast, highlight compression, shadow compression, and perhaps even special effects such as posterization. Sharpening 40 enhances the high-frequency detail of an image, to give it a sharper look or to precompensate for blurring that will occur in a subsequent printing operation. These steps may be varied, and different ordering and combinations of them included. Preferably, the user has control over the steps that affect the final appearance of the rendered image.
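The pipelined sequence of steps can be sketched with simple stand-in implementations; the function bodies below are illustrative placeholders only (the actual service would use far more sophisticated, device-specific, user-adjustable algorithms):

```python
import numpy as np

def replace_bad_pixels(img, bad):
    # Bad Pixel Replacement: substitute a neighborhood average for each
    # known bad pixel location.
    out = img.copy()
    for (y, x) in bad:
        ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
        patch = img[ys, xs]
        count = patch.shape[0] * patch.shape[1] - 1
        out[y, x] = (patch.sum(axis=(0, 1)) - img[y, x]) / count
    return out

def color_matrix(img, M):
    # Color Matrix: colorspace conversion, white balance, exposure.
    return img @ M.T

def tone_curve(img, gamma=1 / 2.2):
    # Tone Curves: nonlinear mapping of the lightness scale.
    return np.clip(img, 0, None) ** gamma

def sharpen(img, amount=0.5):
    # Sharpening: unsharp mask using a crude 4-neighbor blur.
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
    return img + amount * (img - blur)

img = np.random.default_rng(1).random((8, 8, 3))   # stand-in scene data
out = sharpen(tone_curve(color_matrix(
    replace_bad_pixels(img, [(2, 3)]), np.eye(3))))
```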

[0031] The rendering process according to embodiments of the present invention also includes some or all of the steps of cropping, conversion to a printer colorspace, and resampling to an output resolution. The conversion to a printer colorspace may be implemented, for example, via a matrixing computation or a multidimensional look-up table (for example, as utilized by color management engines that use ICC profiles for colorspace conversion).
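A multidimensional look-up table conversion of the kind used by ICC-based color management engines can be sketched as trilinear interpolation in a 3-D lattice; the identity table below stands in for a measured printer profile:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Convert one RGB triple (components in [0, 1]) to a target colorspace
    via trilinear interpolation in an (S, S, S, 3) look-up table."""
    S = lut.shape[0]
    pos = np.clip(np.asarray(rgb), 0, 1) * (S - 1)
    i = np.minimum(pos.astype(int), S - 2)      # lower lattice corner
    f = pos - i                                 # fractional position in cell
    out = np.zeros(3)
    for dr in (0, 1):                           # sum over the 8 cell corners
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i[0] + dr, i[1] + dg, i[2] + db]
    return out

# An identity LUT on a 5-point lattice; a real printer profile would hold
# measured printer-colorspace values at each lattice point instead.
g = np.linspace(0, 1, 5)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(apply_3d_lut([0.3, 0.6, 0.9], lut))  # ~[0.3, 0.6, 0.9] for identity
```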

[0032] Typically, the rendering parameters are interactively adjusted by a client 18 of the service network 15 via link 28, with the user viewing low-resolution previews and zoomed-in windowed previews of the effects of the rendering adjustments in real time. After the user has settled on particular user-adjustable parameters, the client may elect to download a larger, higher-resolution rendered image, or may direct the printing service 16 of the service network 15 to produce a print based on the rendered image data provided from the rendering service 14 to the printing service 16 via link 24 over the communications network 20. The service network 15 may store the resulting rendered image data for later re-use, or may simply store the parameters for recreating the rendered image data from the raw image data.

[0033] Besides adjusting the rendering according to the client's preferences, the rendering service 14 may make some adjustments automatically. For example, a controlled degree of sharpening may be applied to compensate for the known blurring characteristics of a printer operated as part of the printing service 16, even though the client has no specific knowledge of the printer to be used. As another example, the tone curves and colors may be adjusted to compensate for known printer characteristics when the client requests a print from the printing service 16.

[0034] Other adjustments that are available to the artistic client may preferably also be automatically adjusted when the client so desires. For example, a client may wish to have a large set of images automatically rendered and printed, in much the same way as is now done by photo processing labs that serve consumers, rather than addressing each image individually.

[0035] The service network 15 also includes an indexing and searching service. The indexing and searching service processes image meta-data available in the raw image data, when appropriate readers for the meta-data in the file format are available. Searching based on image form, color, and subject content is also supported.

[0036] The storage service 12 includes multi-level storage, wherein small image “thumbnails” may be accessed, viewed, and searched quickly, while larger files may need longer to retrieve from secondary, tertiary, or offline storage. A system of storage service fees reflects storage policy options selected by the client, so that fast access to a large collection is possible, but costs more.

[0037] The printing service 16 provides flexible final product production and delivery options. For example, the service may be utilized as a simple print service, allowing clients to order prints by simply paying for the printing; the prints are then mailed to the buyer. Other more flexible options include a way for a buyer to direct the output to a mailing list, so that a client can, for example, design and mail out seasonal greeting cards directly from the service. The service may also act as an agent for the client photographic artist or art owner, allowing buyers to purchase copies at prices set by the owner, with payment beyond the printing cost being delivered back to the art owner.

[0038] While embodiments and applications of this invention have been shown and described, it would be apparent to those skilled in the art that many more modifications than mentioned above are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.

Claims

1. A system, comprising:

a computing resource accessible to a plurality of clients via a communication network;
a storage service operating on the computing resource configured to store unrendered image data corresponding to scenes; and
a rendering service operating on the computing resource configured to process the unrendered image data to generate rendered images,
wherein the rendering service processes the unrendered image data responsive to requests from the clients, communicated over said communications network, based on rendering parameters determined in accordance with the client requests.

2. The system of claim 1 in which said unrendered image data comprise raw image data.

3. The system of claim 1, wherein:

the storage service is also configured to store, along with the unrendered image data, indications of characteristics of the unrendered image data; and
the rendering service is configured to process the unrendered image data based, at least in part, on the indications of characteristics.

4. The system of claim 1, wherein:

the unrendered image data comprises raw image data and corresponding colorimetric data; and
the unrendered image data which the storage service is configured to store includes both the raw image data and the corresponding colorimetric data.

5. The system of claim 4, wherein:

the storage service is configured to receive the raw image data; and
the storage service is configured to process the raw image data into corresponding colorimetric data.

6. The system of claim 1, and further comprising:

a printing service operating on the computing resource configured to generate, based on the rendered images, tangible embodiments corresponding to the rendered images.

7. The system of claim 6, wherein the tangible embodiments include prints.

8. The system of claim 6, wherein the tangible embodiments include computer-readable media.

9. The system of claim 6, wherein the rendering service adjusts the rendered images to account for characteristics of the tangible embodiment generation process.

10. The system of claim 9, wherein the characteristics include characteristics of a printer.

11. The system of claim 1, wherein the rendering service is configured to interact with clients to determine the rendering parameters.

12. The system of claim 1, wherein the storage service is configured to receive the unrendered image data from the clients via the communication network.

13. The system of claim 2, wherein the storage service is configured to:

receive photographic film having images of the scenes embodied thereon; and
process the photographic film to generate the raw image data.
Patent History
Publication number: 20030035653
Type: Application
Filed: Aug 20, 2001
Publication Date: Feb 20, 2003
Inventors: Richard F. Lyon (Los Altos, CA), Allen H. Rush (Danville, CA)
Application Number: 09933545
Classifications
Current U.S. Class: Camera Combined With Or Convertible To Diverse Art Device (396/429)
International Classification: G03B017/48;