METHOD AND SYSTEM FOR SCALING 3D VIDEO
A method and system are provided in which an integrated circuit (IC) comprises multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be an L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.
This application makes reference to, claims priority to, and claims the benefit of:
U.S. Provisional Patent Application Ser. No. 61/267,729 (Attorney Docket No. 20428US01) filed on Dec. 8, 2009;
U.S. Provisional Patent Application Ser. No. 61/296,851 (Attorney Docket No. 22866US01) filed on Jan. 20, 2010; and
U.S. Provisional Patent Application Ser. No. 61/330,456 (Attorney Docket No. 23028US01) filed on May 3, 2010.
This application also makes reference to:
U.S. Patent Application Ser. No. ______ (Attorney Docket No. 20428US02) filed on Dec. 8, 2010;
U.S. Patent Application Ser. No. ______ (Attorney Docket No. 23438US02) filed on Dec. 8, 2010;
U.S. Patent Application Ser. No. ______ (Attorney Docket No. 23439US02) filed on Dec. 8, 2010; and
U.S. Patent Application Ser. No. ______ (Attorney Docket No. 23440US02) filed on Dec. 8, 2010.
Each of the above referenced applications is hereby incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
Certain embodiments of the invention relate to processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for scaling 3D video.
BACKGROUND OF THE INVENTION
The availability of and access to 3D video content continues to grow. Such growth has brought about challenges regarding the handling of 3D video content from different types of sources and/or the reproduction of 3D video content on different types of displays.
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
BRIEF SUMMARY OF THE INVENTION
A system and/or method is provided for scaling 3D video, as set forth more completely in the claims.
Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
DETAILED DESCRIPTION OF THE INVENTION
Certain embodiments of the invention may be found in a method and system for scaling 3D video. Various embodiments of the invention relate to an integrated circuit (IC) comprising multiple devices that may be selectively interconnected to route and process 3D video data. The IC may be operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory, and selectively interconnect one or more of the devices based on the determination. The selective interconnection may be based on input and output formats of the 3D video data, and on a scaling factor. The input format may be a left-and-right (L/R) format or an over-and-under (O/U) format. Similarly, the output format may be an L/R format or an O/U format. The selective interconnection may be based on input and output pixel rates of the 3D video data. Moreover, the selective interconnection may be determined on a picture-by-picture basis.
The SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage. For example, output signals from the SoC 100 may be provided to display devices such as cathode ray tube (CRT) displays, liquid crystal displays (LCDs), plasma display panels (PDPs), thin-film-transistor LCDs (TFT-LCDs), light-emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technologies. The characteristics of the output signals, such as pixel rate and/or resolution, for example, may be based on the type of output device to which those signals are to be provided.
The host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100. The memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100. For example, the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with 3D video data processing.
The SoC 100 may comprise an interface module 102, a video processor module 104, and a core processor module 106. The SoC 100 may be implemented as a single integrated circuit comprising the components listed above. The interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100.
The video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data associated with one or more signals received by the SoC 100. The video processor module 104 may be operable to support multiple video data formats, including multiple input formats and multiple output formats for 3D video data. The video processor module 104 may be operable to perform various types of operations on 3D video data, including but not limited to format conversion and/or scaling. In some embodiments, when the video content comprises audio data, the video processor module 104, and/or another module in the SoC 100, may be operable to handle the audio data.
The core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content, including but not limited to the processing of 3D video data. In this regard, the core processor 106 may be operable to determine and/or calculate parameters associated with the processing of 3D video data that may be utilized to configure and/or operate the video processor module 104. In some embodiments of the invention, the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100. For example, the core processor module 106 may comprise memory that may be utilized during 3D video data processing by the video processor module 104.
In operation, the SoC 100 may receive one or more signals comprising 3D video data through the interface module 102. When the 3D video data received in those signals is to be scaled, the video processor module 104 and/or the core processor module 106 may be utilized to determine whether to scale 3D video data in the video processor module 104 before the 3D video data is captured to memory through the video processor module 104 or after the captured 3D video data is retrieved from the memory through the video processor module 104. The memory into which the 3D video data is to be stored and from which it is to be subsequently retrieved may be a dynamic random access memory (DRAM) that may be part of the memory module 130 and/or of the core processor module 106, for example.
At least a portion of the video processor module 104 may be configured by the host processor module 120 and/or the core processor module 106 according to the determined order in which to scale the 3D video data. Such order may be based on an input format of the 3D video data, an output format of the 3D video data, and on a scaling factor. Moreover, the order in which to scale the 3D video data may be determined on a picture-by-picture basis. That is, the order in which to scale the 3D video data and the corresponding configuration of the video processor module 104 may be carried out for each picture in a video sequence that is received in the SoC. Once processed, the 3D video data may be communicated to one or more output devices by the SoC 100.
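The per-picture decision described above may be sketched as follows. This is an illustrative model only: the `Picture` structure, the function names, and the downscale heuristic standing in for the bandwidth comparison developed later in this description are all assumptions, not part of the SoC 100 interface.

```python
from dataclasses import dataclass

@dataclass
class Picture:
    in_format: str   # e.g. "L/R" or "O/U"
    out_format: str
    sx: float        # horizontal scaling factor
    sy: float        # vertical scaling factor

def scale_before_capture(pic: Picture) -> bool:
    """Illustrative stand-in for the bandwidth-based decision
    (lambda = BW1/BW2 < 1) developed later in the description:
    when downscaling, shrink the data before writing it to memory."""
    return pic.sx * pic.sy < 1.0

def configure_for_sequence(pictures):
    """Choose the scaling order independently for each picture."""
    return ["scale-then-capture" if scale_before_capture(p)
            else "capture-then-scale" for p in pictures]
```

Because the decision is recomputed for each picture, a sequence that mixes downscaled and upscaled pictures can alternate between the two orderings from one picture to the next.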
As indicated above, the SoC 100 may be operable to handle 3D video data in multiple input formats and multiple output formats. The complexity of the SoC 100, however, may increase significantly as the number of supported input and output formats grows. An approach that may simplify the SoC 100 while still enabling support for a large number of formats is to convert an input format into one of a small subset of formats supported by the SoC 100 for processing, and to have the SoC 100 process the 3D video data in that format. Once the processing is completed, the processed 3D video data may be converted to the appropriate output format if such conversion is necessary.
Both the first format 200 and the second format 210 may be utilized by the SoC 100 described above to process 3D video data and may be referred to as native formats of the SoC 100. When 3D video data is received in one of the multiple input formats supported by the SoC 100, the SoC 100 may convert that input format to one of the first format 200 and the second format 210, if such conversion is necessary. The SoC 100 may then process the 3D video data in a native format. Once the 3D video data is processed, the SoC 100 may convert the processed 3D video data into one of the multiple output formats supported by the SoC 100, if such conversion is necessary. The SoC 100 may also be operable to process 3D video data in the sequential format, which is typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210.
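The convert-process-convert flow described above may be sketched as follows. The dictionary-based frame representation, the `convert` placeholder, and the choice of "L/R" as the default native format are illustrative assumptions, not details taken from the SoC 100.

```python
NATIVE_FORMATS = {"L/R", "O/U"}  # the two native formats (200 and 210)

def convert(frame, dst_format):
    """Placeholder for the SoC's format-conversion operation."""
    return {**frame, "format": dst_format}

def process_3d(frame, out_format, process):
    """Convert into a native format only if needed, process in that
    format, then convert to the output format only if needed."""
    if frame["format"] not in NATIVE_FORMATS:
        frame = convert(frame, "L/R")   # assumed choice of native format
    frame = process(frame)
    if frame["format"] != out_format:
        frame = convert(frame, out_format)
    return frame
```

The two `if` guards mirror the repeated qualifier "if such conversion is necessary": data already in a native format, destined for that same format, passes through with no conversion at all.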
The conversion operations supported by the SoC 100 may also comprise converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200. In this manner, 3D video data may be received in any one of multiple input formats, such as the input formats 202a, 204a, 206a, 212a, 214a, and 216a.
Each of the crossbar modules 310a and 310b may comprise multiple input ports and multiple output ports. The crossbar modules 310a and 310b may be configured such that any one of the input ports may be connected to one or more of the output ports. The crossbar modules 310a and 310b may enable pass-through connections 316 between one or more output ports of the crossbar module 310a and corresponding input ports of the crossbar module 310b. Moreover, the crossbar modules 310a and 310b may enable feedback connections 318 between one or more output ports of the crossbar module 310b and corresponding input ports of the crossbar module 310a. The configuration of the crossbar modules 310a and/or 310b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed.
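The crossbar behavior described above, in which any input port may be connected to one or more output ports, may be modeled with a simple routing table. The class below is an illustrative data structure, not the hardware interconnect itself.

```python
class Crossbar:
    """Minimal model of a crossbar module: any input port may drive
    one or more output ports."""

    def __init__(self, n_in, n_out):
        self.n_in = n_in
        self.n_out = n_out
        self.routes = {}  # input port -> set of connected output ports

    def connect(self, in_port, out_port):
        assert 0 <= in_port < self.n_in and 0 <= out_port < self.n_out
        self.routes.setdefault(in_port, set()).add(out_port)

    def outputs_of(self, in_port):
        return sorted(self.routes.get(in_port, ()))
```

A processing path through the processing network 300 is then a pair of such connections: one on the crossbar module 310a routing a feeder to a processing module, and one on the crossbar module 310b routing that module's output onward (or back toward the crossbar module 310a, via a feedback connection 318).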
The MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the MFD module 302 may have been stored in memory after being generated by an MPEG decoder (not shown). Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300. The HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310a. The HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310a at another data rate.
Each SCL module 308 may be operable to scale video data received from the crossbar module 310a and provide the scaled video data to the crossbar module 310b. The MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310a, including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310b. The DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310a, including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310b. In some embodiments of the invention, the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308.
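The ordering constraint noted above, with the DNR module 314 operating before the MAD module 312 and/or the SCL module 308, may be sketched as a small path-assembly helper. The module-name strings are shorthand for the modules of the processing network 300, and the fixed DNR-first policy is an assumption made for illustration.

```python
def build_path(interlaced: bool, scale: bool):
    """Assemble an ordered processing path through the network:
    noise reduction first, then deinterlacing and/or scaling as
    needed, ending with capture to memory."""
    path = ["DNR"]          # artifact reduction before other stages
    if interlaced:
        path.append("MAD")  # motion-adaptive deinterlacing
    if scale:
        path.append("SCL")  # scaling
    path.append("CAP")      # capture the result to memory
    return path
```

In the modeled network, each adjacent pair in the returned list corresponds to one crossbar connection (through the pass-through connections 316 or the feedback connections 318).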
Each CAP module 320 may be operable to capture video data from the crossbar module 310b and store the captured video data in memory. Each CMP module 322 may be operable to blend or combine video data received from the crossbar module 310b with graphics data.
By configuring the processing network 300 and/or one or more of the SCL modules 308, the processing network 300 may be utilized to scale and/or process 3D video data received by the SoC 100 in any one of the multiple input formats supported by the SoC 100, such as those described above with respect to
When the picture 400 or the picture 410 is associated with an output format, such as after the 3D video data is scaled and/or processed by the processing network 300, the variables may be described as follows: xtot=oxtot is the total width of the picture, ytot=oytot is the total height of the picture, xact=oxact is the active width of the picture, yact=oyact is the active height of the picture, x=ox is the width of the area of the display in which the input content is to be displayed, and y=oy is the height of the area of the display in which the input content is to be displayed.
When 3D video data received by the SoC 100 is scaled utilizing the processing network 300, the order in which the scaling of the 3D video data occurs with respect to the operations provided by the CAP module 320 and the VFD module 304 may depend on the characteristics of the input format of the 3D video data, the output format of the 3D video data, and the scaling that is to take place. In this regard, there may be bandwidth considerations when determining the appropriate order in which to carry out the scaling of the 3D video data, and consequently, the appropriate configuration of the processing network 300. Below are provided various scenarios that describe the selection of the order or positioning of the scaling operation in a sequence of operations that may be performed on 3D video data by the processing network 300.
In the first configuration 500, the pixel rate at node “A”, p_rateA, is the same as the input pixel rate of the SCL module 308, SCLin. The output pixel rate of the SCL module 308 is SCLout=SCLin·sx·sy=p_rateA·sx·sy. Moreover, the pixel rate at node “C”, p_rateC, is associated with the output characteristics of the 3D video data.
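The relationship SCLout = p_rateA·sx·sy may be checked numerically. The 148.5 MHz figure below is only an illustrative input pixel rate (a common 1080p60 pixel clock), not a rate mandated by this description.

```python
def scl_output_rate(p_rate_a, sx, sy):
    """In the first configuration the scaler input rate equals the
    pixel rate at node A, so SCLout = SCLin * sx * sy = p_rateA * sx * sy."""
    return p_rate_a * sx * sy

# Illustrative numbers: halving both dimensions quarters the pixel rate.
rate_out = scl_output_rate(148_500_000, 0.5, 0.5)
```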
With respect to the CAP module 320 in the first configuration 500, the real time scheduling, cap_rts1, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
where ox is the width of the area of the display in which the input content is to be displayed, as indicated above.
With respect to the VFD module 304 in the first configuration 500, the real time scheduling, vfd_rts1, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
where NV is the burst size of the VFD module 304 in number of pixels.
In the second configuration 510, the pixel rate at node “C”, p_rateC, may be the same as the output pixel rate of the SCL module 308, SCLout. The input pixel rate of the SCL module 308 may be SCLin=SCLout/(sx·sy)=p_rateC/(sx·sy). Moreover, the pixel rate at node “A”, p_rateA, may be associated with the input characteristics of the 3D video data.
With respect to the CAP module 320 in the second configuration 510, the real time scheduling, cap_rts2, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
where ix is the width of the area of the picture that is to be cropped and displayed, as indicated above.
With respect to the VFD module 304 in the second configuration 510, the real time scheduling, vfd_rts2, is based on the number of requests for a line of data, n_req, and a time available for all requests, t_n_reqs. With L and R captured separately, these variables may be determined as follows:
A decision or selection as to whether to perform the scaling operation before capture, as in the first configuration 500, or after the captured data is retrieved from memory, as in the second configuration 510, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., NC=NV=N), the bandwidth calculations may be determined as follows:
where BW1 is the bandwidth associated with the first configuration 500, BW2 is the bandwidth associated with the second configuration 510, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the first configuration 500, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the second configuration 510.
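The selection rule above may be expressed directly. The function below assumes BW1 and BW2 have already been computed per the bandwidth equations, and the handling of λ = 1, which the description leaves open, is an arbitrary tie-break added for illustration.

```python
def choose_scaler_position(bw1, bw2):
    """lambda = BW1/BW2: lambda < 1 -> scale before capture (first
    configuration 500); lambda > 1 -> scale after the VFD module
    reads the captured data back (second configuration 510)."""
    lam = bw1 / bw2
    if lam < 1:
        return "before CAP (first configuration)"
    if lam > 1:
        return "after VFD (second configuration)"
    return "either (equal bandwidth)"
```

The same comparison governs the later configuration pairs (the third and fourth, fifth and sixth, and seventh and eighth configurations), each with its own BW1 and BW2.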
With respect to the CAP module 320 in the third configuration 600, the real time scheduling, cap_rts3, may be determined as follows:
With respect to the VFD module 304 in the third configuration 600, the real time scheduling, vfd_rts3, may be determined as follows:
where the pixel rate at node “D”, p_rateD, may be associated with the output characteristics of the 3D video data.
With respect to the CAP module 320 in the fourth configuration 610, the real time scheduling, cap_rts4, may be determined as follows:
With respect to the VFD module 304 in the fourth configuration 610, the real time scheduling, vfd_rts4, may be determined as follows:
where the pixel rate at node “D”, p_rateD, may be the same as the output pixel rate of the SCL module 308, SCLout.
A decision or selection as to whether to perform the scaling operation before capture, as in the third configuration 600, or after the captured data is retrieved from memory, as in the fourth configuration 610, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., NC=NV=N), the following ratio may be determined:
where BW1 is the bandwidth associated with the third configuration 600, BW2 is the bandwidth associated with the fourth configuration 610, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the third configuration 600, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the fourth configuration 610.
With respect to the CAP module 320 in the fifth configuration 700, the real time scheduling, cap_rts5, may be determined as follows:
where the pixel rate at node “B”, p_rateB, may be the same as the input pixel rate of the SCL module 308, SCLin.
With respect to the VFD module 304 in the fifth configuration 700, the real time scheduling, vfd_rts5, may be determined as follows:
With respect to the CAP module 320 in the sixth configuration 710, the real time scheduling, cap_rts6, may be determined as follows:
where the pixel rate at node “B”, p_rateB, may be associated with the input characteristics of the 3D video data.
With respect to the VFD module 304 in the sixth configuration 710, the real time scheduling, vfd_rts6, may be determined as follows:
A decision or selection as to whether to perform the scaling operation before capture, as in the fifth configuration 700, or after the captured data is retrieved from memory, as in the sixth configuration 710, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., NC=NV=N), the following ratio may be determined:
where BW1 is the bandwidth associated with the fifth configuration 700, BW2 is the bandwidth associated with the sixth configuration 710, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the fifth configuration 700, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the sixth configuration 710.
With respect to the CAP module 320 in the seventh configuration 800, the real time scheduling, cap_rts7, may be determined as follows:
With respect to the VFD module 304 in the seventh configuration 800, the real time scheduling, vfd_rts7, may be determined as follows:
With respect to the CAP module 320 in the eighth configuration 810, the real time scheduling, cap_rts8, may be determined as follows:
With respect to the VFD module 304 in the eighth configuration 810, the real time scheduling, vfd_rts8, may be determined as follows:
A decision or selection as to whether to perform the scaling operation before capture, as in the seventh configuration 800, or after the captured data is retrieved from memory, as in the eighth configuration 810, may be based on the bandwidths associated with both scenarios. For the case when the burst size of the CAP module 320 and the burst size of the VFD module 304 are the same (i.e., NC=NV=N), the following ratio may be determined:
where BW1 is the bandwidth associated with the seventh configuration 800, BW2 is the bandwidth associated with the eighth configuration 810, and λ is the ratio of the two bandwidths. When λ<1, the SCL module 308 is to be positioned before the CAP module 320, as in the seventh configuration 800, and when λ>1, the SCL module 308 is to be positioned after the VFD module 304, as in the eighth configuration 810.
At step 1130, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 described above.
At step 1230, the video processor module 104 and/or the host processor module 120 may configure a portion of the video processor module 104 comprising a processing network, such as the processing network 300 described above.
Various embodiments of the invention relate to an integrated circuit, such as the SoC 100 described above.
The integrated circuit may be operable to determine the selective interconnection of the one or more of devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor. The input format of the 3D video data may be a L/R input format or an O/U input format and the output format of the 3D video data may be a L/R output format or an O/U output format. The integrated circuit may be operable to determine the selective interconnection of the one or more devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data. The integrated circuit may be operable to determine the selective interconnection of the one or more devices on a picture-by-picture basis.
The selectively interconnected devices in the integrated circuit may be operable to horizontally scale the 3D video data and to vertically scale the 3D video data. Moreover, the selectively interconnected devices in the integrated circuit may be operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
In another embodiment of the invention, a non-transitory machine-readable and/or computer-readable storage medium may be provided, having stored thereon machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for scaling 3D video.
Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
Claims
1. A method, comprising:
- in an integrated circuit operable to selectively route and process 3D video data, the integrated circuit comprising a plurality of devices that are operable to be selectively interconnected to enable the routing and the processing: determining whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory; and selectively interconnecting one or more of the plurality of devices based on the determination.
2. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.
3. The method of claim 2, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is a left-and-right output format.
4. The method of claim 2, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is an over-and-under output format.
5. The method of claim 2, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is a left-and-right output format.
6. The method of claim 2, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is an over-and-under output format.
7. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.
8. The method of claim 1, wherein the selective interconnection of the one or more of the plurality of devices in the integrated circuit is determined on a picture-by-picture basis.
9. The method of claim 1, comprising scaling the 3D video data in the selectively interconnected one or more of the plurality of devices in the integrated circuit, the scaling comprising a horizontal scaling and a vertical scaling.
10. The method of claim 1, comprising performing one or more operations in the selectively interconnected one or more of the plurality of devices in the integrated circuit, the one or more operations being performed before the 3D video data is scaled, after the 3D video data is scaled, or both.
11. A system, comprising:
- an integrated circuit operable to selectively route and process 3D video data, the integrated circuit comprising a plurality of devices that are operable to be selectively interconnected to enable the routing and the processing;
- the integrated circuit being operable to determine whether to scale the 3D video data before the 3D video data is captured to memory or after the captured 3D video data is retrieved from the memory; and
- the integrated circuit being operable to selectively interconnect one or more of the plurality of devices based on the determination.
12. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices based on an input format of the 3D video data, an output format of the 3D video data, and a scaling factor.
13. The system of claim 12, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is a left-and-right output format.
14. The system of claim 12, wherein the input format of the 3D video data is a left-and-right input format and the output format of the 3D video data is an over-and-under output format.
15. The system of claim 12, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is a left-and-right output format.
16. The system of claim 12, wherein the input format of the 3D video data is an over-and-under input format and the output format of the 3D video data is an over-and-under output format.
17. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices based on an input pixel rate of the 3D video data and on an output pixel rate of the 3D video data.
18. The system of claim 11, wherein the integrated circuit is operable to determine the selective interconnection of the one or more of the plurality of devices on a picture-by-picture basis.
19. The system of claim 11, wherein the selectively interconnected one or more of the plurality of devices are operable to horizontally scale the 3D video data and to vertically scale the 3D video data.
20. The system of claim 19, wherein the selectively interconnected one or more of the plurality of devices are operable to perform one or more operations on the 3D video data before the 3D video data is scaled, after the 3D video data is scaled, or both.
Type: Application
Filed: Dec 8, 2010
Publication Date: Jun 9, 2011
Inventors: Darren Neuman (Palo Alto, CA), Jason Herrick (Pleasanton, CA), Qinghua Zhao (Cupertino, CA), Christopher Payson (Bolton, MA)
Application Number: 12/963,014
International Classification: H04N 13/00 (20060101);