Abstract: A colon containing liquid, other tissues containing liquid, and other tissues containing air are shown in the drawings. When boundary surfaces of liquid are extracted from the image using the CT values of the respective objects and their gradients, the boundary surface of the liquid in the colon and the boundary surfaces of the liquids contained in the other tissues are both extracted. Next, horizontal sections are extracted from these boundary surfaces. As a result, the boundary surfaces of the liquids contained in the other tissues can be eliminated, thereby enabling extraction of only the horizontal plane of the liquid in the colon. Thereafter, only the liquid in the colon and the air in the colon that are in contact with this horizontal plane are identified as regions in the colon.
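As a rough illustration of the horizontal-plane test described above, the following is a minimal sketch, assuming a CT volume held in a NumPy array; the Hounsfield thresholds, gradient-direction criterion, and minimum-extent check are illustrative assumptions and are not taken from the abstract.

import numpy as np

def horizontal_liquid_surfaces(volume, hu_low=-900.0, hu_high=200.0,
                               grad_thresh=200.0, min_voxels=500):
    """Find voxels lying on a roughly horizontal air-liquid boundary.

    volume is a 3D CT array indexed as [z, y, x] with z pointing upward.
    All thresholds are illustrative assumptions, not values from the abstract.
    """
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

    # Candidate boundary voxels: CT value between air and liquid, strong gradient,
    # and a gradient direction that is mostly vertical (an air-over-liquid interface).
    candidates = ((volume > hu_low) & (volume < hu_high) &
                  (grad_mag > grad_thresh) &
                  (np.abs(gz) > 0.9 * grad_mag))

    # Keep only slices where the candidates form a reasonably large flat sheet;
    # liquid pockets in other tissues rarely produce such a plane in a single z-slice.
    horizontal = np.zeros_like(candidates)
    for z in range(volume.shape[0]):
        if candidates[z].sum() >= min_voxels:
            horizontal[z] = candidates[z]
    return horizontal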
Abstract: An image processing system using volume data comprises at least one node connected via a network. The system is operative to monitor whether a task property storing condition for storing a task property of a client terminal is satisfied, collect the task property representing a task state of the client terminal and store the collected task property in a state storing server when the task property storing condition is satisfied, read the task property corresponding to the client terminal from the state storing server, restore the task state of the client terminal by using the read task property in a proxy node, which is either one of the existing nodes or a node newly added to the image processing system, and resume processing from the point at which the task property storing condition was satisfied.
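The checkpoint-and-restore flow can be pictured with the following minimal sketch; the class and function names (StateStoringServer, checkpoint, restore_on_proxy), the dictionary-based task property, and the "next_step" key are assumptions used only for illustration.

class StateStoringServer:
    """Holds the most recently stored task property per client terminal."""
    def __init__(self):
        self._store = {}

    def save(self, client_id, task_property):
        self._store[client_id] = dict(task_property)

    def load(self, client_id):
        return dict(self._store[client_id])


def storing_condition_satisfied(task_property):
    # Illustrative condition: checkpoint whenever a processing step completes.
    return task_property.get("step_completed", False)


def checkpoint(server, client_id, task_property):
    if storing_condition_satisfied(task_property):
        server.save(client_id, task_property)


def restore_on_proxy(server, client_id):
    # A proxy node (an existing node or a newly added one) rebuilds the task
    # state and resumes from the step recorded at checkpoint time.
    task_property = server.load(client_id)
    resume_step = task_property["next_step"]
    return task_property, resume_step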
Abstract: A client terminal which uses image data processed based on an image processing request sent to an image processing server is connected via a network. The client terminal includes a communication state detecting section which detects the communication state with the image processing server, and an image processing section which executes processing for the image processing request when an abnormality of the communication state is detected.
Abstract: When processing is started, a region of interest (ROI) is first determined by the user's specification. A small-calculation-amount image is generated in response to the user's pointing device operation, and the generated image is displayed. Next, a large-calculation-amount image of the ROI is generated and displayed. Next, a large-calculation-amount image of the regions other than the ROI is generated and displayed, and the processing is then terminated. If an image change request is made at any of the above steps, the calculation is performed again each time.
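A minimal sketch of this coarse-to-fine flow follows; the callbacks (render_preview, render_full, display, change_requested) are hypothetical placeholders, and the restart-on-change behaviour is simplified to a single loop.

def progressive_render(volume, roi, render_preview, render_full, display,
                       change_requested):
    """Coarse-to-fine rendering: preview first, then the ROI, then the rest.

    render_preview, render_full and display are caller-supplied callbacks;
    change_requested() returns True when the user has asked for a new image.
    """
    while True:
        # Step 1: small-calculation-amount image for immediate feedback
        # during pointing-device operation.
        display(render_preview(volume))
        if change_requested():
            continue                      # recalculate from the start

        # Step 2: large-calculation-amount image inside the ROI.
        display(render_full(volume, roi, inside=True))
        if change_requested():
            continue

        # Step 3: large-calculation-amount image outside the ROI, then finish.
        display(render_full(volume, roi, inside=False))
        if change_requested():
            continue
        return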
Abstract: A multi-value mask as shown in FIG. 2B2 is applied to a target image shown in FIG. 2B1. The multi-value mask can have a real value corresponding to each voxel; for example, the multi-value mask has real values in the boundary area of the target image such as “1, 1, 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2, 0, 0.” Thus, although jaggies caused by a binary mask are conspicuous in the boundary area of a synthesized image in the related art, as shown in FIG. 2A3, the synthesized voxel values of the synthesized image become “2, 3, 3, 2, 1, 2, 2.4, 2.4, 1.6, 1, 0, 0” as shown in FIG. 2B3, and the jaggies in the boundary area of the target image can be made inconspicuous.
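The numbers quoted above are consistent with a simple voxel-wise product between the target values and the mask; the target boundary values used below (2, 3, 3, 2, 1, 2, 3, 4, 4, 5, ...) are inferred for illustration and are not stated in the abstract.

import numpy as np

# Multi-value mask across the boundary area (values from the abstract).
mask = np.array([1, 1, 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2, 0, 0])

# Hypothetical target voxel values chosen so that the product reproduces
# the synthesized values quoted in the abstract.
target = np.array([2, 3, 3, 2, 1, 2, 3, 4, 4, 5, 7, 6], dtype=float)

synthesized = target * mask
print(synthesized)
# -> [2. 3. 3. 2. 1. 2. 2.4 2.4 1.6 1. 0. 0.]
# A binary mask (0/1 only) would cut off abruptly at the boundary instead,
# which is what produces the conspicuous jaggies in the related art.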
Abstract: A rendering method for generating relation information for image data of three or more dimensions. The rendering method includes preparing an exfoliated picture generated by projecting voxels using a cylindrical projection method around a center line, preparing position data of a cross-section and a cross-section image representing the cross-section of the image data, creating a hypothetical cylinder, together with its position data, from the center line, calculating relation information that associates the hypothetical cylinder with the cross-section using the position data of the hypothetical cylinder and the position data of the cross-section, and synthesizing the relation information with at least one of the exfoliated picture and the cross-section image to generate a synthesized image.
Abstract: A path 22 representing the center line of a curved cylinder 21 is acquired (step S1). The path 22 can be set through a GUI while a volume rendering image displayed on a display device is viewed. Alternatively, the path 22 can be set automatically when the curved cylinder 21 is designated as a subject of observation. Then, a region of the curved cylinder 21 is extracted as the subject of observation with the path 22 used as the center (step S2). Then, sections 23 and 24 are generated as if the extracted region were cut open along the path 22 (step S3). In this case, the sections 23 and 24 are curved along the curvature of the curved cylinder 21. Accordingly, CPR images are synthesized on the curved sections (step S4).
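As a rough illustration of step S4, the sketch below samples the volume on a section that follows the path, producing a simple curved-planar-reformation image; nearest-neighbour sampling, the fixed in-plane direction, and the function name sample_cpr are assumptions made for brevity.

import numpy as np

def sample_cpr(volume, path, in_plane_dir, half_width, spacing=1.0):
    """Build a CPR-style image by sampling the volume along a curved section.

    volume: 3D array indexed [z, y, x]; path: (N, 3) array of points on the
    center line; in_plane_dir: unit vector lying in the cutting section.
    Nearest-neighbour sampling keeps the sketch short.
    """
    n = len(path)
    width = 2 * half_width + 1
    image = np.zeros((n, width), dtype=volume.dtype)
    offsets = (np.arange(width) - half_width) * spacing

    for i, center in enumerate(path):
        # Points of the cut-open section around the current path point.
        points = center[None, :] + offsets[:, None] * in_plane_dir[None, :]
        idx = np.rint(points).astype(int)
        idx = np.clip(idx, 0, np.array(volume.shape) - 1)
        image[i] = volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return image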
Abstract: A back projection method in a CT image reconstruction method in which projection data, obtained by irradiating a subject with an electromagnetic wave and detecting the electromagnetic wave transmitted through the subject, are back-projected onto a CT image reconstruction region virtually set on a region of interest of the subject. The back projection method comprises: generating a projected-image redundant arrangement form in which the projection data are arranged redundantly and gathered into access units; and performing a vector operation on the projection data arranged in the projected-image redundant arrangement form. With this back projection method, the vector operation can be performed efficiently while the real-number arithmetic operations otherwise required for addressing can be omitted. Accordingly, the back projection calculation can be performed quickly.
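The following is a minimal sketch of the underlying idea for a parallel-beam geometry: detector addresses are computed once as integers, the projection samples are gathered into a redundant, image-shaped array, and the accumulation then becomes a pure vector operation. The exact redundant arrangement and access-unit layout of the method are not reproduced here.

import numpy as np

def backproject(sinogram, angles, image_size):
    """Simple parallel-beam back projection with precomputed integer addressing.

    sinogram: (num_angles, num_bins) projection data; angles in radians.
    """
    num_bins = sinogram.shape[1]
    half = image_size // 2
    ys, xs = np.mgrid[-half:image_size - half, -half:image_size - half]
    image = np.zeros((image_size, image_size), dtype=float)

    for a, theta in enumerate(angles):
        # Integer detector address for every pixel, computed once per view.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        bins = np.clip(np.rint(t).astype(int) + num_bins // 2, 0, num_bins - 1)

        # Gather the view's samples into a redundant, image-shaped array so the
        # accumulation below is a single vectorized addition, with no per-pixel
        # real-valued address arithmetic inside the inner loop.
        redundant = sinogram[a][bins]
        image += redundant
    return image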
Abstract: In an image processing system for generating an image of a three-dimensional structure using volume data and having a plurality of nodes coupled via a network, one of the plurality of nodes comprises a control portion including at least a processor, a memory, and a communication control portion. The control portion is operative to segment each of a plurality of image processing requests into a plurality of jobs in an image processing operation that uses the volume data, monitor a calculation resource amount for each of the plurality of nodes on the job-accepting side to obtain calculation resource information, the calculation resource information being calculated from at least one of a current load factor, a past performance record, a node status specification, and a distance to the node on the network, select at least one node on the job-accepting side based on the calculation resource information, and transmit one of the segmented jobs to the selected node.
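A node-selection step of this kind might look like the sketch below; the scoring weights, field names, and the convention that a lower score is better are all illustrative assumptions, not details from the abstract.

from dataclasses import dataclass

@dataclass
class NodeInfo:
    name: str
    load_factor: float          # current load, 0.0 (idle) to 1.0 (saturated)
    past_seconds_per_job: float # performance record from earlier jobs
    relative_speed: float       # from the node specification, 1.0 = baseline
    network_distance: float     # e.g. measured round-trip time in ms

def resource_score(node, w_load=1.0, w_hist=0.1, w_dist=0.01):
    # Lower is better: busy, historically slow, weak, or distant nodes score high.
    return (w_load * node.load_factor
            + w_hist * node.past_seconds_per_job / node.relative_speed
            + w_dist * node.network_distance)

def dispatch(jobs, nodes):
    """Assign each segmented job to the currently best-scoring node."""
    assignments = []
    for job in jobs:
        best = min(nodes, key=resource_score)
        assignments.append((job, best.name))
        best.load_factor = min(1.0, best.load_factor + 0.1)  # crude load update
    return assignments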
Abstract: An image processing apparatus comprises: a predicted information storage section which stores predicted information that indicates an operation content predicted for an image of a target operation; a predicted image generating portion which generates a predicted image corresponding to the image of the target operation based on the predicted information; a control section which detects whether the input operation content matches the operation content in the predicted information; and a display control section which displays the predicted image generated by the predicted image generating portion when the control section detects a match of the operation content.
Abstract: A method for improving the quality of a projection image. The method includes setting a plurality of rays that pass through voxels. Each ray corresponds to a pixel. The method further includes storing a plurality of pixel values, each associated with one of the voxels existing on each of the rays. The method also stores a plurality of coordinates, each corresponding to one of the pixel values, and determines a noise position in the voxels based on the coordinates.
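One way to picture this is the sketch below: a maximum-intensity projection records, for each pixel, both the pixel value and the coordinate of the voxel that produced it, and a pixel whose recorded depth is far from those of its neighbours is flagged as a possible noise position. The depth-outlier criterion and the axis-aligned rays are assumptions made for illustration.

import numpy as np

def mip_with_coordinates(volume):
    """Axis-aligned MIP along z: per pixel, keep the max value and its z index."""
    pixel_values = volume.max(axis=0)
    depths = volume.argmax(axis=0)      # coordinate along the ray per pixel value
    return pixel_values, depths

def noise_positions(depths, window=1, depth_jump=10):
    """Flag pixels whose contributing voxel sits far from its neighbours' voxels."""
    h, w = depths.shape
    noisy = np.zeros((h, w), dtype=bool)
    for y in range(window, h - window):
        for x in range(window, w - window):
            neighborhood = depths[y - window:y + window + 1,
                                  x - window:x + window + 1].astype(float)
            center = neighborhood[window, window]
            others = np.delete(neighborhood.ravel(), neighborhood.size // 2)
            if abs(center - np.median(others)) > depth_jump:
                noisy[y, x] = True
    return noisy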
Abstract: When a decision is made to switch part or all of the data processing carried out by an operative image data processing unit CS0 to an image data processing unit CS1 as the switching destination, the same volume data Volume-1 as that handled by the operative image data processing unit CS0 is first transmitted from a volume data storage unit SS to the destination image data processing unit CS1, and additional information Mask-1 held by the operative image data processing unit CS0 is then copied to the destination image data processing unit CS1, so that the destination image data processing unit CS1 can execute the data processing.