Systems and methods for display list management

Systems and methods consistent with some embodiments of the present invention provide data structures and methods for the automatic storing, manipulating, and processing of an intermediate printable data generated from a first printable data. In some embodiments, the first printable data may take the form of a PDL description of a document and the intermediate printable data may take the form of a display list of objects generated from the PDL description. In some embodiments, a data structure for storing an intermediate printable data generated from a first printable data may comprise at least one memory pool, which may further comprise a plurality of uniformly sized segments to store the intermediate printable data; at least one global structure for storing information related to the one or more memory pools; and buffers for performing operations on the first printable data and the intermediate printable data.

Description
BACKGROUND

1. Field of the Invention

The present invention relates to the field of printing and in particular, to systems and methods to manage display lists.

2. Description of Related Art

Document processing software allows users to view, edit, process, and store documents conveniently. Pages in a document may be displayed on screen exactly as they would appear in print. However, before the document can be printed, pages in the document are often described in a page description language (“PDL”). As used in this application, PDLs may include PostScript, Adobe PDF, HP PCL, Microsoft XPS, and variants thereof, as well as any other languages used to describe pages in a document. A PDL description of a document provides a high-level description of each page in a document. This PDL description is often translated to a series of lower-level printer-specific commands when the document is being printed.

The translation process from PDL to lower-level printer-specific commands may be complex and depend on the features and capabilities offered by a particular printer. A printer manufacturer will often make several trade-offs in order to optimize printer performance based on available memory, desired print speed, and other cost and performance issues. For example, a manufacturer may choose to limit the maximum resolution of a printer with limited memory in order to improve the throughput of the printer. Because the eventual trade-offs are model specific, the introduction of new printers or upgrades to existing printers often requires revisiting the optimization decisions and may even result in a major overhaul of translation algorithms. Consequently, the introduction of new printer models, or upgrades to existing printers, may be more expensive, sub-optimal, and needlessly delayed. Moreover, optimal use of printer resources allows available printer capability to be exploited more fully, improving performance. Thus, there is a need for flexible and portable general-purpose schemes to translate PDL descriptions of documents to printer-specific commands.

SUMMARY

In accordance with the present invention, systems and methods for the automatic storing, manipulating, and processing of a second or intermediate form of printable data generated from a first printable data are presented. In some embodiments, the first printable data may take the form of a PDL description of a document and the intermediate printable data may take the form of a display list of objects generated from the PDL description.

In some embodiments, a computer memory comprising a data structure for storing an intermediate printable data generated from a first printable data may comprise at least one memory pool, which may further comprise a plurality of uniformly sized segments to store the intermediate printable data; at least one global structure for storing information related to the one or more memory pools; and buffers for performing operations on the first printable data and the intermediate printable data. In some embodiments, the uniformly sized segments may occupy contiguous memory locations. In some embodiments, the uniformly sized segments may have a predetermined size. In some embodiments, uniformly sized segments may comprise either control data structures or blocks to store objects present in the intermediate printable data.

In some embodiments, the control data structure may comprise one or more of virtual pages, bands, and nodes. In some embodiments, the vpage data structure may comprise a linked list of geometric band data structures representing geometric bands associated with the virtual page, wherein the geometric band data structure may further comprise a linked list of node data structures, wherein the node data structure may include references to any associated blocks. In some embodiments, the data managed by the geometric band may contain offsets to objects stored in a reference band. In some embodiments, the reference band may be configured to hold data or graphical objects. In some embodiments, the data or graphical objects span geometric band boundaries. In some embodiments, the data or graphical objects may be repeatedly used. In some embodiments, the vpage data structure may include offsets to a compression band. In some embodiments, the compression band may be configured to hold compressed data or compressed graphical objects.

In some embodiments, the objects in the bands, nodes, and blocks may be individually accessed, edited, and processed. Processing of the objects may include one or more of the operations of compression, decompression, and pre-rasterization. In some embodiments, each block may hold one or more of a data object, a graphical object, and/or offsets to data or graphical objects. In some embodiments, each block may be uniformly-sized, and equal in size to the uniformly-sized segments.

In some embodiments, the global structure pertaining to the at least one memory pool may include fields for one or more of offsets to the start of the at least one memory pool; mechanisms to regulate access to the at least one memory pool; offsets to the start of the buffers; and lists of vpages, bands, nodes, and blocks. In some embodiments, the buffers may include one or more of compression buffers, decompression buffers, and compaction buffers. In some embodiments, the memory pool may include a base memory pool, which may be allocated at boot time and may also include additional dynamically allocated memory pools.

Embodiments of the present invention also relate to data structures created, stored, accessed, or modified by processors using computer-readable media or computer-readable memory.

These and other embodiments are further explained below with respect to the following figures.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram illustrating components in a system for printing documents according to some embodiments of the present invention.

FIG. 2 shows a high level block diagram of an exemplary printer.

FIG. 3 shows an exemplary high-level architecture of a system for flexible display lists according to some embodiments of the present invention.

FIG. 4 shows an exemplary data structure for flexible display lists according to some embodiments of the present invention.

FIG. 5 shows the organization of exemplary memory pools according to some embodiments of the present invention.

FIG. 6 shows a flowchart for an exemplary algorithm to build a virtual page.

DETAILED DESCRIPTION

In accordance with embodiments reflecting various features of the present invention, systems and methods for the automatic storing, manipulating, and processing of a second or intermediate form of printable data generated from a first printable data are presented. In some embodiments, the first printable data may take the form of a PDL description of a document and the intermediate printable data may take the form of a display list of objects generated from the PDL description.

FIG. 1 shows a block diagram illustrating components in a system for printing documents according to some embodiments of the present invention. A computer software application consistent with the present invention may be deployed on a network of computers, as shown in FIG. 1, that are connected through communication links that allow information to be exchanged using conventional communication protocols and/or data port interfaces.

As shown in FIG. 1, exemplary system 100 includes computers including a computing device 110 and a server 130. Further, computing device 110 and server 130 may communicate over a connection 120, which may pass through network 140, which in one case could be the Internet. Computing device 110 may be a computer workstation, desktop computer, laptop computer, or any other computing device capable of being used in a networked environment. Server 130 may be a platform capable of connecting to computing device 110 and other devices (not shown). Computing device 110 and server 130 may be capable of executing software (not shown) that allows the printing of documents using printers 170.

Exemplary printer 170 includes devices that produce physical documents from electronic data including, but not limited to, laser printers, ink-jet printers, LED printers, plotters, facsimile machines, and digital copiers. In some embodiments, printer 170 may also be capable of directly printing documents received from computing device 110 or server 130 over connection 120. In some embodiments such an arrangement may allow for the direct printing of documents, with (or without) additional processing by computing device 110 or server 130. In some embodiments, documents may contain one or more of text, graphics, and images. In some embodiments, printer 170 may receive PDL descriptions of documents for printing. Note, too, that document print processing can be distributed. Thus, computing device 110, server 130, and/or the printer may perform portions of document print processing such as half-toning, color matching, and/or other manipulation processes before a document is physically printed by printer 170.

Computing device 110 also contains removable media drive 150. Removable media drive 150 may include, for example, 3.5 inch floppy drives, CD-ROM drives, DVD ROM drives, CD±RW or DVD±RW drives, USB flash drives, and/or any other removable media drives consistent with embodiments of the present invention. In some embodiments, portions of the software application may reside on removable media and be read and executed by computing device 110 using removable media drive 150.

Connection 120 couples computing device 110, server 130, and printer 170 and may be implemented as a wired or wireless connection using conventional communication protocols and/or data port interfaces. In general, connections 120 can be any communication channel that allows transmission of data between the devices. In one embodiment, for example, the devices may be provided with conventional data ports, such as parallel ports, serial ports, Ethernet, USB, SCSI, FIREWIRE, and/or coaxial cable ports for transmission of data through the appropriate connection. In some embodiments, connection 120 may be a Digital Subscriber Line (DSL), an Asymmetric Digital Subscriber Line (ADSL), or a cable connection. The communication links could be wireless links or wired links or any combination consistent with embodiments of the present invention that allows communication between the various devices.

Network 140 could include a Local Area Network (LAN), a Wide Area Network (WAN), or the Internet. In some embodiments, information sent over network 140 may be encrypted to ensure the security of the data being transmitted. Printer 170 may be connected to network 140 through connection 120. In some embodiments, printer 170 may also be connected directly to computing device 110 and/or server 130. System 100 may also include other peripheral devices (not shown), according to some embodiments of the present invention. A computer software application consistent with the present invention may be deployed on any of the exemplary computers, as shown in FIG. 1. For example, computing device 110 could execute software that may be downloaded directly from server 130. Portions of the application may also be executed by printer 170 in accordance with some embodiments of the present invention.

FIG. 2 shows a high-level block diagram of exemplary printer 170. In some embodiments, printer 170 may contain bus 174 that couples CPU 176, firmware 171, memory 172, input-output ports 175, print engine 177, and secondary storage device 173. Printer 170 may also contain other Application Specific Integrated Circuits (ASICs), and/or Field Programmable Gate Arrays (FPGAs) 178 that are capable of executing portions of an application to print documents according to some embodiments of the present invention. In some embodiments, printer 170 may also be able to access secondary storage or other memory in computing device 110 using I/O ports 175 and connection 120. In some embodiments, printer 170 may also be capable of executing software including a printer operating system and other appropriate application software. In some embodiments, printer 170 may allow paper sizes, output trays, color selections, and print resolution, among other options, to be user-configurable.

In some embodiments, CPU 176 may be a general-purpose processor, a special purpose processor, or an embedded processor. CPU 176 can exchange data including control information and instructions with memory 172 and/or firmware 171. Memory 172 may be any type of Dynamic Random Access Memory (“DRAM”) such as, but not limited to, SDRAM or RDRAM. Firmware 171 may hold instructions and data including but not limited to a boot-up sequence, pre-defined routines, and other code. In some embodiments, code and data in firmware 171 may be copied to memory 172 prior to being acted upon by CPU 176. Routines in firmware 171 may include code to translate page descriptions received from computing device 110 to display lists and image bands. In some embodiments, firmware 171 may include rasterization routines to convert display commands in a display list to an appropriate rasterized bit map and store the bit map in memory 172. Firmware 171 may also include compression routines and memory management routines. In some embodiments, data and instructions in firmware 171 may be upgradeable.

In some embodiments, CPU 176 may act upon instructions and data and provide control and data to ASICs/FPGAs 178 and print engine 177 to generate printed documents. In some embodiments, ASICs/FPGAs 178 may also provide control and data to print engine 177. FPGAs/ASICs 178 may also implement one or more of translation, compression, and rasterization algorithms. In some embodiments, computing device 110 can transform document data into a first printable data. Then, the first printable data can be sent to printer 170 for transformation into intermediate printable data. Printer 170 may transform the intermediate printable data into a final form of printable data and print according to this final form. In some embodiments, the first printable data may correspond to a PDL description of a document. In some embodiments, the translation process from a PDL description of a document to the final printable data, comprising a series of lower-level printer-specific commands, may include the generation of intermediate printable data comprising display lists of objects. In some embodiments, display lists may hold one or more of text, graphics, and image data objects. In some embodiments, objects in display lists may correspond to similar objects in a user document. In some embodiments, display lists may aid in the generation of intermediate printable data. In some embodiments, display lists may be stored in memory 172 or secondary storage 173. Exemplary secondary storage 173 may be an internal or external hard disk, memory stick, or any other memory storage device capable of being used in system 200. In some embodiments, the display list may reside on one or more of printer 170, computing device 110, and server 130. Memory to store display lists may be dedicated memory, may form part of general-purpose memory, or some combination thereof, according to some embodiments of the present invention. In some embodiments, memory may be dynamically allocated to hold display lists as needed. In some embodiments, memory allocated to store display lists may be dynamically released after processing.

FIG. 3 shows an exemplary high-level architecture of a system for creating and managing display lists flexibly according to some embodiments of the present invention. As shown in FIG. 3, language server 340, engine server 360, and raster server 320 may communicate with each other. In addition, language server 340, engine server 360, and raster server 320 may invoke routines and communicate with RDL library 330. In some embodiments, the display list may include commands defining data objects and their contexts within a document or a page within the document to be printed. These display commands may include data comprising characters or text, line drawings or vectors, and images or raster data. In some embodiments, the display list may be dynamically reconfigurable and is termed a Reconfigurable Display List (“RDL”).

In some embodiments, the translation of a PDL description of a document into a display list representation may be performed by language server 340 using routines in RDL library 330. In some embodiments, language server 340 may take PDL language primitives and transform these into data and graphical objects and add these to the display list using the capability provided by RDL library 330. In some embodiments, access to functions and routines in RDL library 330 may be provided through an Application Programming Interface (“API”). In some embodiments, the display list may be stored and manipulated in a dynamically allocated memory pool such as exemplary RDL memory pool 310. In some embodiments, the display list may be a second or intermediate step in the processing of data prior to actual printing and may be parsed before conversion into a subsequent form. In some embodiments, the subsequent form may be a final representation, and the conversion process may be referred to as rasterizing the data. In some embodiments, rasterization may be performed by raster server 320. Upon rasterization, the rasterized data may be stored in frame buffer 350, which may be part of memory 172. Print engine 177 may process the rasterized data in frame buffer 350 and form a printable image of the page on a print medium, such as paper. In some embodiments, raster server 320 and engine server 360 may also use routines in RDL library 330 to perform their functions. In some embodiments, engine server 360 may provide control information, instructions, and data to print engine 177. In some embodiments, engine server 360 may free memory used by display list objects after processing for return to RDL memory pool 310. In some embodiments, portions of RDL memory pool 310 and/or frame buffer 350 may reside in memory 172 or secondary storage 173. In some embodiments, routines for language server 340, raster server 320, and engine server 360 may be provided in firmware 171 or may be implemented using ASICs/FPGAs 178.

FIG. 4 shows an exemplary data structure 400 for flexible display lists, according to some embodiments of the present invention. In some embodiments, flexible display lists may take the form of RDLs. In some embodiments, flexible display lists, such as exemplary RDLs, may be stored in a data structure, such as exemplary data structure 400, that facilitates the dynamic manipulation and processing of data objects. In some embodiments, a physical page may comprise one or more virtual or logical pages. In some embodiments, a virtual or logical page may be represented by exemplary vpage data structure 410. Instantiated vpage data structures 410 are referred to as vpages. In some embodiments, exemplary vpage data structure 410 allows a logical model of a virtual page to be stored and manipulated in memory, such as exemplary memory 172. In some embodiments, vpage data structure 410 may include information specific to a virtual or logical page, including offsets to geometric bands, reference bands, and compression bands.

In some embodiments, a virtual page may further comprise one or more bands, which are also called geometric bands. In some embodiments, the bands may be horizontal bands or sections each holding objects present within that band of the logical page. Accordingly, vpage data structure 410 may include offsets to or reference a linked list comprising instances of exemplary individual band data structures (also called geometric band data structures) 420. Instantiated band data structures 420 are referred to as bands (or geometric bands). In some embodiments, the vpage data structure may further comprise a linked list of instances of exemplary individual band data structures 420. In some embodiments, each vpage data structure 410 may include information about any bands that it references. A band or geometric band relates to a geometrically defined region within the geometrical bounds of a virtual page. Typically this geometric region is rectangular in nature and may be as large as the vpage boundaries, or a sub region of the vpage. In some embodiments, the printable region of a vpage includes all geometric bands contained within that vpage.

The term band data is used to refer to object data within block 440 that is linked to a node data structure 430 that is further linked to band data structure 420. In some embodiments, individual band data structure 420 may contain data objects pertaining to that entire band. In some embodiments each band data structure may include offsets to or reference a linked list of instances of individual node data structures 430. Instantiated node data structures 430 are referred to as nodes. In some embodiments, each band data structure 420 may include information about any nodes that it references.

In some embodiments, individual node data structure 430 may include offsets to or reference any associated (zero or more) blocks of memory (“blocks”) 440. In some embodiments, block 440 may be used to store intermediate graphical objects and other data generated from a PDL description. In some embodiments, language server 340 may generate intermediate graphical objects and other data objects from a PDL description for use by raster server 320. In some embodiments, each node data structure may include information about any blocks that it references. In some embodiments, blocks 440 may be a uniform, fixed size. In this specification, vpage data structure 410, band data structure 420, and node data structure 430 are also referred to as control data structures. In some embodiments, data stored in data structure 400 may also be accessed and edited.
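By way of illustration only, the vpage-band-node-block hierarchy of data structure 400 may be sketched in Python as follows; the class and field names and the uniform block size are hypothetical and are not taken from the specification, and Python lists stand in for the linked lists of offsets described above:

```python
from dataclasses import dataclass, field
from typing import List

BLOCK_SIZE = 4096  # hypothetical uniform size of a block 440


@dataclass
class Block:
    """Block 440: fixed-size storage for intermediate graphical objects."""
    data: bytearray = field(default_factory=lambda: bytearray(BLOCK_SIZE))
    used: int = 0  # bytes of the block currently occupied


@dataclass
class Node:
    """Node data structure 430: references zero or more associated blocks."""
    blocks: List[Block] = field(default_factory=list)


@dataclass
class Band:
    """Band data structure 420: references a linked list of nodes."""
    nodes: List[Node] = field(default_factory=list)


@dataclass
class VPage:
    """Vpage data structure 410: references a linked list of geometric bands."""
    bands: List[Band] = field(default_factory=list)
```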

In some embodiments, when a job is received by printer 170, language server 340 may invoke routines from RDL library 330, leading to the instantiation of vpage data structure 410 and an appropriate number of band data structures 420. In some embodiments, the system may maintain a list of free vpages, bands, and nodes in a vpage control data field, a band control data field, and a node control data field, respectively. In some embodiments, an instantiation of another vpage data structure 410 may occur for new virtual pages. In some embodiments, an instantiation of another band data structure may occur for new bands in a virtual page.

In some embodiments, objects generated by language server 340 may be used to build a vpage. A vpage may be rasterized by raster server 320. In some embodiments, a vpage may be rasterized band-by-band until all bands have been processed. In some embodiments, a band may be rasterized object-by-object until all objects have been provided to the rasterizer for that band. In some embodiments, the order in which bands are rasterized may not be sequential. In some embodiments, multiple finished vpages may exist in the system at any given time. These may include vpages whose rasterization process has not completed. In some embodiments, vpages need not be rasterized in the order of their construction. After rasterization is complete, engine server 360 may invoke routines in RDL library 330 to delete the vpage from memory and release memory used by that vpage to memory pool 310.

FIG. 5 shows an exemplary organization 500 of RDL memory pool 310 according to some embodiments of the present invention. In some embodiments, RDL memory pool 310 may include one or more instances of data structure 400. In some embodiments, memory may be allocated at boot time to RDL memory pool 310. In some embodiments, RDL memory pool 310 may be allocated from memory 172. In some embodiments, the memory allocated at boot time may be a pre-determined amount of memory. Memory allocated to RDL memory pool 310 at boot time is termed the RDL base memory pool 310-1. In some embodiments, RDL memory pool 310 may be allocated by the operating system, or by invoking routines in RDL library 330. In some embodiments, additional memory 310-2 may be dynamically allocated and added to the base memory pool during system operation to increase the amount of memory available to RDL memory pool 310. Exemplary RDL memory pool 310 includes base memory pool 310-1 and additional memory pool 310-2. Memory may also be dynamically released from RDL memory pool 310 during system operation and made available for other uses. In some embodiments, routines invoked from RDL library 330 by language server 340, raster server 320, engine server 360, or other system components may use memory or access data structures specifically allocated to RDL memory pool 310.

At boot-time, one or more instances of global structure 520 may be allocated for use by RDL library 330 and other system components. In some embodiments, only a single instance of the global structure 520 may be allocated. In some embodiments, global structure 520 may include fields for management of RDL memory pool 310. For example, global structure 520 may include fields for offsets indicating the start of each memory pool; lists of blocks, nodes, bands, and vpages; offsets to various buffers, including compression, decompression, and compaction; and/or semaphores or other access co-ordination structures to synchronize access to RDL's memory by various system components.

In some embodiments, the starting address of RDL base memory pool 310-1 may be used to allocate an instance of global structure 520. In some embodiments, zero or more buffers 530 may also be allocated in RDL memory pool 310. In some embodiments, buffers 530 may be used for compression. Global structure 520 may include offsets to or reference buffers 530 as well as fields that contain information about buffers 530. In some embodiments, memory remaining in RDL memory pool 310 after allocation of RDL global structure 520 may be divided into a number of segments 540. In some embodiments, all segments 540 may be of a predetermined and/or fixed size. In some embodiments, the size of memory block 440 may correspond to the size of segment 540. Some memory 550 may remain after the allocation of segments.
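The layout described above, in which global structure 520 and buffers 530 are carved from the start of base memory pool 310-1 and the remaining memory is divided into uniform segments 540 with some remainder 550, can be sketched as follows; all sizes and names are hypothetical:

```python
POOL_SIZE = 1 << 20      # hypothetical size of base memory pool 310-1
GLOBAL_SIZE = 512        # hypothetical size of global structure 520
BUFFER_SIZE = 64 * 1024  # hypothetical size of buffers 530
SEGMENT_SIZE = 4096      # hypothetical uniform size of a segment 540


def lay_out_pool(pool_size=POOL_SIZE):
    """Return offsets of the global structure and buffers, the number of
    uniform segments, and the unusable remainder (memory 550)."""
    global_off = 0                          # global structure at the pool start
    buffer_off = global_off + GLOBAL_SIZE   # buffers follow the global structure
    segments_start = buffer_off + BUFFER_SIZE
    n_segments = (pool_size - segments_start) // SEGMENT_SIZE
    remainder = (pool_size - segments_start) % SEGMENT_SIZE
    return global_off, buffer_off, n_segments, remainder
```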

In some embodiments, segment 540 may hold any control data structure such as vpage data structure 410, band data structure 420, or node data structure 430. In some embodiments, segment 540 may also hold block 440. In some embodiments, segments 540 are dynamically allocated using unallocated segments in RDL memory pool 310. In some embodiments, certain segments 540 may be used as a temporary working area at various points in time by routines in RDL library 330. For example, some segments 540 may be used as working area to rasterize data associated with one or more nodes. In some embodiments, the system may ensure the availability of adequate segments 540 for any working areas prior to allocating additional segments 540 from available unallocated memory in RDL memory pool 310.

FIG. 6 shows a flowchart for an exemplary algorithm 600 to build a virtual page. In step 610, a new vpage data structure 410 may be instantiated, and one or more new band data structures 420 may be associated with the vpage. In some embodiments, a list of free segments may be used to determine which blocks to allocate to vpages and bands. In step 620, algorithm 600 determines whether additional objects are being added to the vpage. If additional objects are being added to the vpage, then algorithm 600 determines if adequate space is available to add the object in step 650. In some embodiments, information pertaining to blocks 440 associated with the last instantiated node may be used to determine if adequate space is available to add the object. If space is available then the object may be added to the block 440 in step 640 and the algorithm returns to step 620.

If the space available is inadequate, then in step 660, the algorithm determines if there are sufficient contiguous segments 540 available to hold the object. If there are enough contiguous segments 540, then, in step 670, needed segments 540 can be allocated, a new node may be instantiated and associated with the current band, and needed blocks 440 may then be associated with the node. Next, in step 640, the object is added to newly allocated blocks 440 and the algorithm returns to step 620.

In step 665, if there is enough memory available to hold the object, but an inadequate number of contiguous segments 540, then, in step 690, compaction may be used to create an adequate number of contiguous segments 540. In step 670, needed segments 540 may then be allocated, a new node may be instantiated and associated with the current band, and needed blocks 440 may then be associated with the node. Next, in step 640, the object is added to newly allocated blocks 440 and the algorithm returns to step 620.

In step 665, if not enough memory is available to hold the object, then, in step 680 blocks may be freed by waiting, and/or requesting additional memory pool 310-2, and/or using one or more memory recovery schemes. In step 670, needed segments 540 may then be allocated, a new node may be instantiated and associated with the current band, and needed blocks 440 may then be associated with the node. Next, in step 640, the new data object is added to newly allocated blocks 440 and the algorithm returns to step 620.

In step 620, if there are no additional objects to be added, then, in step 630, the vpage has been built. The algorithm returns to step 610, where it may commence the building of a new vpage.
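A simplified rendering of algorithm 600's inner loop (steps 620 through 670) is given below. It packs objects into nodes backed by whole segments, eliding compaction (step 690) and the wait/recovery path (step 680); the function and its parameters are illustrative only:

```python
def build_vpage(object_sizes, n_segments, segment_size):
    """Pack each object into blocks: reuse the last node's free space when
    possible (steps 650/640), else allocate segments for a new node (step 670)."""
    free = n_segments
    nodes = []  # each entry: [bytes free in the node's blocks, segments held]
    for size in object_sizes:
        if nodes and nodes[-1][0] >= size:  # step 650: room in the last node
            nodes[-1][0] -= size            # step 640: add object to its blocks
            continue
        need = -(-size // segment_size)     # segments needed, rounded up
        if need > free:                     # would require step 680 or 690
            raise MemoryError("compaction or memory recovery needed")
        free -= need                        # step 670: allocate segments and
        nodes.append([need * segment_size - size, need])  # a new node
    return nodes, free
```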

In some embodiments, language server 340 may generate some of the graphical or other data objects, and invoke routines in RDL library 330 to place the generated objects in an RDL. In some embodiments, routines invoked in RDL library 330 may instantiate a node data structure 430 and associate the node with an appropriate band. Invoked routines may also allocate one or more blocks 440 and associate the allocated blocks 440 with the new node. The graphical or data object generated by language server 340 may then be copied into one or more of the blocks 440 associated with the node. In some embodiments, additional graphical or data objects that may be generated by language server 340 may also be copied into appropriate available associated blocks 440. In some embodiments, if blocks 440 associated with a node are full, then a new node may be allocated, associated with the appropriate band, and blocks 440 available in memory pool 310 may be associated to the new node. The object may then be copied into one or more new blocks 440.

In some embodiments, language server 340 may determine the available space in blocks 440 associated with a node and generate data or graphical objects that fit into the available space. In some embodiments, a large object may be broken up into two or more smaller objects to allow one of the smaller objects to be placed in the unused space available in a block 440 associated with a node. In some embodiments, large objects that do not fit into a single block 440 may be placed in multiple contiguous blocks 440 associated with a node. In some embodiments, unused blocks 440 may be rearranged and joined in memory to create a single larger contiguous block associated with a specific node.
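The splitting of a large object into smaller pieces, so that the first piece fills the unused space of the current block and each remaining piece occupies a whole block, might look like this (the function and its parameters are illustrative, not part of the specification):

```python
def split_object(obj: bytes, free_in_block: int, block_size: int):
    """Split obj so the first piece fits the space left in the current block
    and each remaining piece fits in one block of its own."""
    pieces = []
    if free_in_block > 0:
        pieces.append(obj[:free_in_block])  # fill the partially used block
        obj = obj[free_in_block:]
    while obj:
        pieces.append(obj[:block_size])     # one piece per whole block
        obj = obj[block_size:]
    return pieces
```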

In some embodiments, band data structure 420, which contains information about a band related to a vpage, may include region-independent data. For example, band data may include objects that have geometric boundaries extending beyond the bounds of a particular geometric band. In some instances, band data may include objects that are used multiple times. In some embodiments, when data or graphical objects generated by language server 340 span geometric band boundaries, the objects may be stored in a separate reference band and an offset to the location of the object in the reference band may be stored at an appropriate location in that specific geometric band. In some embodiments, objects that are repeatedly used in a virtual page, or a document, may also be stored in a reference band. The use of reference bands to store repeatedly used objects optimizes memory utilization and allows a repeatedly used object to be placed in a reference band once, but used multiple times across bands.
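The reference-band idea above amounts to interning shared objects once and referring to them by offset. The following sketch uses illustrative names (`ReferenceBand`, `intern`) that are not taken from the patent.

```python
class ReferenceBand:
    """Shared or band-spanning objects are stored once in the
    reference band; geometric bands store only an offset to them."""
    def __init__(self):
        self.data = bytearray()
        self._offsets = {}  # object -> offset, to deduplicate reuse

    def intern(self, obj):
        """Store obj once; return its offset in the reference band."""
        if obj not in self._offsets:
            self._offsets[obj] = len(self.data)
            self.data.extend(obj)
        return self._offsets[obj]

def add_shared_object(geo_band, ref_band, obj):
    """A geometric band records a ('ref', offset) entry instead of a
    second copy of the object itself."""
    geo_band.append(("ref", ref_band.intern(obj)))
```

A logo used in two bands is stored once; both bands carry the same offset, which is the memory saving the text describes.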

In some embodiments, if all available blocks 440 have been allocated, routines in RDL library may be invoked to free some allocated memory blocks 440. In some embodiments, an algorithm to free allocated memory blocks may be terminated if an adequate number of blocks are made available. In some embodiments, the system may simply wait for vpages to be deleted and for memory to be freed. In some embodiments, the system may cycle through all vpages and free blocks 440 that are in both memory pool 310 and in an alternate storage location. For example, blocks 440 that have been written to secondary storage 173 may be freed.
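The "free blocks that also exist elsewhere" pass described above can be sketched as a sweep over all vpages. The block representation (dicts with `in_memory`/`on_disk` flags) is an illustrative stand-in for blocks 440.

```python
def free_duplicated_blocks(vpages):
    """Cycle through all vpages and free blocks that also exist at an
    alternate storage location (e.g., already written to secondary
    storage 173). Returns the number of blocks freed."""
    freed = 0
    for vpage in vpages:
        for block in vpage:
            if block["in_memory"] and block["on_disk"]:
                block["in_memory"] = False  # return block to the pool
                freed += 1
    return freed
```

Only blocks with a safe second copy are freed; blocks that exist solely in the memory pool are left untouched.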

In some embodiments, the system may also execute one or more memory recovery strategies. Memory recovery strategies may include compression of data blocks. In some embodiments, data blocks may be compressed using zlib, gzip, or a host of other well-known lossless data compression algorithms. In some embodiments, buffers 530 may be used as working areas during compression. In some embodiments, for each band in a vpage, all data blocks may be compressed and the compressed data may be placed in a compression band that is associated with the vpage. In some embodiments, node data structure 430 may be referenced by geometric bands, reference bands, compression bands, or any other band-like structure. Compression bands may contain associated node-linked block data in a compressed state. In some embodiments, blocks 440 freed as a result of the compression may be returned to memory pool 310 for subsequent allocation. In some embodiments, after blocks 440 associated with a node have been compressed, a field in the node is updated to indicate the location of compressed data in the compression band. In some embodiments, a node's compressed data may be deleted when the entire vpage has been processed.
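The compression-band strategy above can be sketched with zlib, one of the lossless schemes the text names. The function names and the `(offset, length)` record kept by the node are illustrative assumptions.

```python
import zlib

def compress_band_blocks(node_blocks, compression_band):
    """Compress a node's blocks and append the result to a compression
    band. Returns the (offset, length) the node would record so the
    data can be located later; the freed blocks return to the pool."""
    raw = b"".join(node_blocks)
    compressed = zlib.compress(raw)
    offset = len(compression_band)
    compression_band.extend(compressed)
    node_blocks.clear()  # freed blocks go back to memory pool
    return offset, len(compressed)

def decompress_from_band(compression_band, offset, length):
    """Recover the original block data, e.g., at rasterization time."""
    return zlib.decompress(bytes(compression_band[offset:offset + length]))
```

The round trip is lossless: decompressing the recorded span yields the original concatenated block data byte for byte.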

In some embodiments, images in the data blocks of a vpage may be compressed. In some embodiments, the images may be compressed in-place. In some embodiments, buffers 530 may be used as working areas during compression. Because compressed images occupy less space, objects in the band following the image may be copied into the areas freed by image compression. In some embodiments, image compression followed by the copying of subsequent objects into areas freed by image compression may result in the freeing of blocks 440, which are then returned to memory pool 310 for allocation. In some embodiments, image compression may be accomplished by using well-known image compression algorithms such as JPEG. In some embodiments, if images in a vpage are compressed, all future images may also be compressed before being put into a display list, to maintain uniform image quality and prevent visible differences between images. In some embodiments, a compression factor, such as the Q factor in JPEG, may be specified and/or varied. In some embodiments, at rasterization time, each image may be decompressed. In some embodiments, lossless image compression techniques may be utilized.
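The in-place image compression and compaction above can be sketched as a pass over a band's objects. The `(kind, data)` tuples are an illustrative stand-in for the contents of blocks 440, and the caller-supplied `compress` callable stands in for a real codec such as JPEG.

```python
def compress_images_in_band(objects, compress):
    """Compress image objects and let following objects slide into the
    space freed (the 'in-place' compaction the text describes).
    Returns the compacted object list and the bytes saved; enough
    saved bytes would free whole blocks 440 back to the pool."""
    out, saved = [], 0
    for kind, data in objects:
        if kind == "image":
            small = compress(data)
            saved += len(data) - len(small)
            out.append((kind, small))
        else:
            out.append((kind, data))  # non-image objects pass through
    return out, saved
```

Non-image objects are untouched; only the images shrink, and the accumulated savings determine how many blocks can be returned to the pool.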

In some embodiments, all blocks 440 associated with nodes in a vpage may be stored in secondary storage 173. In some embodiments, storage on computing device 110 or server 130 may be used. In some embodiments, a file system, DMA storage, compressed memory, USB flash drives, memory sticks, hard disks, or some other type of memory may be used. In some embodiments, all blocks 440 associated with a node may be written to a file system in a serial manner. After blocks 440 have been stored, they may be deleted and any freed memory returned to memory pool 310. During rasterization, stored blocks 440 may be copied back to memory pool 310. In some embodiments, copies of stored blocks 440 in secondary storage 173 may be deleted when the vpage is no longer needed.
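The swap-out/swap-in cycle above can be sketched as follows, with a `BytesIO` standing in for secondary storage 173. The function names and the `(offset, sizes)` bookkeeping are illustrative assumptions.

```python
import io

def swap_out(node_blocks, storage):
    """Write a node's blocks to 'secondary storage' serially, record
    where they went, then free them from memory."""
    offset = storage.seek(0, io.SEEK_END)  # append after existing data
    sizes = []
    for block in node_blocks:
        storage.write(block)
        sizes.append(len(block))
    node_blocks.clear()  # freed memory returns to memory pool
    return offset, sizes

def swap_in(storage, offset, sizes):
    """Copy stored blocks back into memory for rasterization."""
    storage.seek(offset)
    return [storage.read(n) for n in sizes]
```

Because the blocks are written serially and their sizes recorded, restoring them is a single sequential read, which suits file systems and flash media alike.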

In some embodiments, product-specific functions may also be implemented to increase memory available to RDL memory pool 310. For example, printer 170 may be able to request that additional memory, such as exemplary memory 310-2, be added to RDL memory pool 310 using memory currently outside RDL memory pool 310. In some embodiments, a vpage being built by language server 340 may be pre-rasterized. “Pre-rasterization” is a process by which the graphical objects in blocks 440 in a vpage are rasterized and stored in frame buffer 350. Blocks 440 that have been pre-rasterized may then be freed. In some embodiments, after completion of a pre-rasterization cycle, additional (non pre-rasterized) blocks 440 in the partially-rasterized vpage may continue to be built. In some embodiments, multiple pre-rasterization cycles may be applied to a vpage during construction of that vpage.
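A pre-rasterization cycle as described above can be sketched as follows. The `rasterize` callable and the list-based stand-ins for blocks 440 and frame buffer 350 are illustrative assumptions.

```python
def pre_rasterize(vpage_blocks, frame_buffer, rasterize):
    """Pre-rasterization cycle: rasterize the graphical objects built
    so far into the frame buffer, then free their blocks so
    construction of the same vpage can continue with fresh memory."""
    for obj in vpage_blocks:
        rasterize(obj, frame_buffer)
    freed = len(vpage_blocks)
    vpage_blocks.clear()  # pre-rasterized blocks return to the pool
    return freed
```

After the cycle, the vpage's remaining (non-pre-rasterized) content keeps accumulating in newly allocated blocks, and further cycles may run as needed.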

In some embodiments, a program for conducting the above process can be recorded on computer-readable media 150 or computer-readable memory. These include, but are not limited to, Read Only Memory (ROM), Programmable Read Only Memory (PROM), Flash Memory, Non-Volatile Random Access Memory (NVRAM), or digital memory cards such as secure digital (SD) memory cards, Compact Flash™, Smart Media™, Memory Stick™, and the like. In some embodiments, one or more types of computer-readable media may be coupled to printer 170. In certain embodiments, portions of a program to implement the systems, methods, and structures disclosed may be delivered over network 140.

Other embodiments of the present invention will be apparent to those skilled in the art from consideration of the specification and practice of one or more embodiments of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims

1-22. (canceled)

23. A method for managing display lists in a memory pool, wherein the memory pool comprises a plurality of uniformly sized segments with at least one control data structure, the method comprising:

adding a display list object to a block that was most recently associated with the control data structure, if sufficient unused memory is available within the block to hold the display list object;
adding the display list object to a new block associated with the control data structure and created by using an additional segment, if the size of the display list object exceeds the size of unused memory within the block that was most recently associated with the control data structure;
adding the display list object to a new block associated with the control data structure and created from two or more unused contiguous segments, if the size of the display list object exceeds the size of a segment; and
adding the display list object to a new block associated with the control data structure and created by applying at least one of a plurality of memory recovery strategies, if the combination of unused contiguous segments is insufficient to hold the display list object.

24. The method of claim 23, wherein the plurality of memory recovery strategies comprises:

waiting for memory blocks to be freed;
adding additional available free memory to the memory pool;
compacting blocks associated with the control data structure for the memory pool;
freeing blocks associated with control data structures in the memory pool that are also stored at an alternate location;
compressing display list objects in blocks associated with control data structures in memory; and
storing previously existing display list objects in secondary storage.

25. The method of claim 23, wherein the size of the segments can be chosen from one of several pre-determined sizes.

26. The method of claim 23, wherein adding the display list object to a new block associated with the control data structure and created by combining two or more unused contiguous segments comprises:

determining the number of contiguous segments needed to hold the display list object;
associating a new block comprising the combined unused contiguous segments with the control data structure; and
adding the display list object to the new block.

27. The method of claim 24, wherein waiting for memory blocks to be freed comprises one of:

waiting until an adequate number of memory blocks sufficient to hold the display list object have been freed; and
terminating the wait if the size of the display list object exceeds currently available memory including any freed memory blocks and it is unlikely that any further memory blocks will be freed.

28. The method of claim 24, wherein waiting for memory blocks to be freed comprises associating a new block comprising one or more memory segments resulting from freed memory blocks with the control data structure, wherein the memory blocks are freed during the waiting period.

29. The method of claim 24, wherein adding additional available free memory to the memory pool comprises:

allocating additional available free memory not currently within the memory pool to increase the size of the memory pool; and
associating a new block comprising one or more memory segments resulting from the additionally available memory with the control data structure.

30. The method of claim 29, further comprising deallocating the additional allocated memory after use.

31. The method of claim 29, wherein the allocation of the additional available free memory to the memory pool is dynamic and occurs at execution time.

32. The method of claim 24, wherein compacting blocks associated with the control data structure in the memory pool comprises:

rearranging memory blocks to form a set of contiguous segments; and
associating a new block comprising one or more of the contiguous memory segments resulting from the rearrangement of memory blocks with the control data structure.

33. The method of claim 24, wherein freeing blocks associated with control data structures in the memory pool that are also stored at an alternate location comprises:

deleting blocks with duplicated display list objects;
freeing blocks that held the display list objects; and
associating a new block comprising one or more memory segments resulting from freed memory blocks with the control data structure.

34. The method of claim 24, wherein compressing display list objects in blocks associated with control data structures in memory comprises one or more of the steps of:

compressing display list objects in blocks associated with the control data structure;
storing each compressed display list object in a compression band;
storing information pertaining to the compression bands at locations in the control data structure associated with the blocks that held display list objects corresponding to the compression bands;
freeing blocks that held the display list objects; and
associating a new block comprising one or more memory segments resulting from freed memory blocks with the control data structure.

35. The method of claim 34, wherein compressing display list objects in blocks associated with the control data structure comprises one or more of the steps of:

compressing data in blocks associated with the control data structure; and
compressing image data in blocks associated with the control data structure.

36. The method of claim 35, wherein data is compressed using one of ZLIB compression, GZIP compression, or any other lossless data compression algorithm.

37. The method of claim 35, wherein image data is compressed using JPEG compression, with a specified Q-factor.

38. The method of claim 23, wherein the memory pool is resident on a printing device.

39. The method of claim 24, wherein storing previously existing display list objects in secondary storage comprises:

storing one or more display list objects at locations on a secondary storage device;
storing address information for the stored display list objects at locations in the control data structure associated with the blocks that held the stored display list objects;
freeing blocks that held the stored display list objects; and
associating a new block comprising one or more memory segments resulting from freed memory blocks with the control data structure.

40. A computer-readable medium that stores instructions, which when executed by a processor perform a method for managing display lists in a memory pool, wherein the memory pool comprises a plurality of uniformly sized segments with at least one control data structure, the method comprising:

adding a display list object to a block that was most recently associated with the control data structure, if sufficient unused memory is available within the block to hold the display list object;
adding the display list object by using an additional segment, if the size of the display list object exceeds the size of unused memory within the block that was most recently associated with the control data structure;
adding the display list object by combining two or more unused contiguous segments, if the size of the display list object exceeds the size of a segment; and
adding the display list object to a new block associated with the control data structure and created by applying at least one of a plurality of memory recovery strategies, if the combination of unused contiguous segments is insufficient to hold the display list object.

41. The computer-readable medium of claim 40, wherein the plurality of memory recovery strategies comprises:

waiting for memory blocks to be freed;
adding additional available free memory to the memory pool;
compacting blocks associated with the control data structure for the memory pool;
freeing blocks associated with control data structures in the memory pool that are also stored at an alternate location;
compressing display list objects in blocks associated with control data structures in memory; and
storing previously existing display list objects in secondary storage.

42. A computer-readable memory containing instructions for controlling a processor to perform a method of managing display lists in a memory pool, wherein the memory pool comprises a plurality of uniformly sized segments with at least one control data structure, the method comprising:

adding a display list object to a block that was most recently associated with the control data structure, if sufficient unused memory is available within the block to hold the display list object;
adding the display list object by using an additional segment, if the size of the display list object exceeds the size of unused memory within the block that was most recently associated with the control data structure;
adding the display list object by combining two or more unused contiguous segments, if the size of the display list object exceeds the size of a segment; and
adding the display list object to a new block associated with the control data structure and created by applying at least one of a plurality of memory recovery strategies, if the combination of unused contiguous segments is insufficient to hold the display list object.

43. The computer-readable memory of claim 42, wherein the plurality of memory recovery strategies comprises:

waiting for memory blocks to be freed;
adding additional available free memory to the memory pool;
compacting blocks associated with the control data structure for the memory pool;
freeing blocks associated with control data structures in the memory pool that are also stored at an alternate location;
compressing display list objects in blocks associated with control data structures in memory; and
storing previously existing display list objects in secondary storage.
Patent History
Publication number: 20070229900
Type: Application
Filed: Aug 31, 2006
Publication Date: Oct 4, 2007
Applicant:
Inventors: Stuart Guarnieri (Laramie, WY), Tim Prebble (Longmont, CO)
Application Number: 11/515,337
Classifications
Current U.S. Class: Page Or Frame Memory (358/1.17); Memory (358/1.16)
International Classification: G06K 15/00 (20060101);