Program, data processing method, and system of same
An MXF parser thread 43 parses data MXF_D storing, mixed together, a plurality of video data PIC, a plurality of audio data SOU, and system data SYS. Then, it generates video file attribute data VFPD concerning the video based on the parsed system data and metadata META and generates video file data VF including the video file attribute data VFPD and the parsed plurality of video data PIC (VIDEO). Audio file data AF is generated in the same way.
1. Field of the Invention
The present invention relates to a program for converting formats of video data and audio data, a data processing method, and a system of the same.
2. Description of the Related Art
One of the file exchange formats of video data and audio data is the “Material Exchange Format (MXF)”. The MXF is composed of metadata included in header data, a plurality of frame data, etc. Each frame data includes one frame's worth of video data, the corresponding audio data, system data indicating attributes of the video data and audio data, etc.
Namely, in MXF data, the video data, the audio data, and the system data are stored interleaved together. In the MXF, by describing in the metadata and system data the attributes such as the coding scheme, compression method, data structure, time code, and edit content of the video data and audio data in each frame data, file exchange is enabled with a format not depending upon the attributes of the video data and audio data.
In a PC or other computer, when performing processing to reproduce video data and audio data, the video data and the audio data must exist as separate video file data and audio file data. Due to this, there is the problem that the computer cannot reproduce MXF data as it is. Further, when performing processing for reproducing video data and audio data input from a conversion process in synchronization, the processing load sometimes prevents the video data from being reproduced at normal speed or more. In this case, there is the problem that the reproduced image and sound cannot be synchronized. Further, there is a demand to automatically generate and transfer MXF data based on video file data and audio file data stored by the computer etc.
Further, the computer must perform conversion processing for converting, for example, MXF data received by FTP to video data and audio data. When the conversion processing is carried out only after the processing for reception by FTP is finished, however, there is the problem that the overall processing time becomes longer. The same problem exists when converting received data other than MXF data.
SUMMARY OF THE INVENTION
A first object of the present invention is to provide a program capable of individually generating video file data and audio file data from data storing video data, audio data, and attribute data interleaved, a method of the same, and a system of the same.
A second object of the present invention is to provide a program capable of generating data storing, mixed together, video data, audio data, and attribute data interleaved from the video file data and the audio file data, a method of the same, and a system of the same.
A third object of the present invention is to provide a program capable of shortening a processing time when converting the reception processed data, a method of the same, and a system of the same.
In order to solve the above problems of the related art, according to a first aspect of the invention, there is provided a program comprising a first routine for parsing data to be processed, the data being mixed together, a plurality of video data, a plurality of audio data, and a plurality of first attribute data indicating attributes of the video data and audio data, a second routine for generating a second attribute data concerning the video data based on the first attribute data parsed by the first routine and generating a video file data including the second attribute data and the plurality of video data parsed by the first routine, and a third routine for generating a third attribute data concerning the audio data based on the first attribute data parsed by the first routine and generating audio file data including the third attribute data and the plurality of audio data parsed by the first routine.
According to a second aspect of the invention, there is provided a data processing method comprising a first step of parsing data to be processed, the data being mixed together, a plurality of video data, a plurality of audio data, and a plurality of first attribute data indicating attributes of the video data and audio data, a second step of generating a second attribute data concerning the video data based on the first attribute data parsed in the first step and generating a video file data including the second attribute data and the plurality of video data parsed in the first step, and a third step of generating a third attribute data concerning the audio data based on the first attribute data parsed in the first step and generating audio file data including the third attribute data and the plurality of audio data parsed in the first step.
The mode of operation of the data processing method of the second aspect of the invention is as follows. First, in the first step, the processed data storing, mixed together, a plurality of video data, a plurality of audio data, and a plurality of first attribute data indicating attributes of the video data and the audio data is parsed. Next, in the second step, the second attribute data concerning the video is generated based on the first attribute data parsed in the first step, and the video file data including the second attribute data and the plurality of video data parsed in the first step is generated. Further, in the third step, the third attribute data concerning the audio is generated based on the first attribute data parsed in the first step, and the audio file data including the third attribute data and the plurality of audio data parsed in the first step is generated.
According to a third aspect of the invention, there is provided a data processing system comprising a first means for parsing data to be processed, the data being mixed together, a plurality of video data, a plurality of audio data, and a plurality of first attribute data indicating attributes of the video data and audio data, a second means for generating a second attribute data concerning the video data based on the first attribute data parsed by the first means and generating video file data including the second attribute data and the plurality of video data parsed by the first means, and a third means for generating third attribute data concerning the audio data based on the first attribute data parsed by the first means and generating audio file data including the third attribute data and the plurality of audio data parsed by the first means.
The mode of operation of the data processing system of the third aspect of the invention is as follows. First, in the first means, the processed data storing, mixed together, a plurality of video data, a plurality of audio data, and a plurality of first attribute data indicating attributes of the video data and the audio data is parsed. Next, in the second means, the second attribute data concerning the video is generated based on the first attribute data parsed by the first means, and the video file data including the second attribute data and the plurality of video data parsed by the first means is generated. Further, in the third means, the third attribute data concerning the audio is generated based on the first attribute data parsed by the first means, and the audio file data including the third attribute data and the plurality of audio data parsed by the first means is generated.
According to a fourth aspect of the invention, there is provided a program for making a data processing system execute a first routine for specifying a format based on video attribute data included in the video file data and a second routine for generating data composed of a plurality of module data each including module attribute data defined corresponding to each of a plurality of video data included in the video file data and indicating the format specified by the first routine, a single unit of the video data, and a single unit of audio data corresponding to the video data among the plurality of audio data included in the audio file data.
According to a fifth aspect of the invention, there is provided a data processing method comprising a first step of specifying a format based on video attribute data included in the video file data and a second step of generating data composed of a plurality of module data each including module attribute data defined corresponding to each of a plurality of video data included in the video file data and indicating the format specified in the first step, a single unit of the video data, and a single unit of audio data corresponding to the video data among the plurality of audio data included in the audio file data.
The mode of operation of the data processing method of the fifth aspect of the invention is as follows. In the first step, the format is specified based on the video attribute data included in the video file data. Next, in the second step, the data composed of a plurality of module data each including the module attribute data indicating the format defined corresponding to each of the plurality of video data included in the video file data and specified in the first step, a single unit of the video data, and a single unit of the audio data corresponding to the video data among the plurality of audio data included in the audio file data is generated.
According to a sixth aspect of the invention, there is provided a data processing system comprising a first means for specifying a format based on video attribute data included in the video file data and a second means for generating data composed of a plurality of module data each including module attribute data defined corresponding to each of a plurality of video data included in the video file data and indicating the format specified by the first means, a single unit of the video data, and a single unit of audio data corresponding to the video data among the plurality of audio data included in the audio file data.
The mode of operation of the data processing system of the sixth aspect of the invention is as follows. In the first means, the format is specified based on the video attribute data included in the video file data. Next, in the second means, the data composed of a plurality of module data each including module attribute data indicating the format defined corresponding to each of the plurality of video data included in the video file data and specified in the first means, a single unit of the video data, and a single unit of the audio data corresponding to the video data among the plurality of audio data included in the audio file data is generated.
According to a seventh aspect of the invention, there is provided a program for making a data processing system execute a communication process for performing processing for receiving first data and a conversion process for converting the first data passing the reception processing of the communication process to second data in parallel with the processing for reception by the communication process.
The mode of operation of the program of the seventh aspect of the invention is as follows. The program of the seventh aspect of the invention is executed by the data processing system and makes the data processing system execute the communication process. In the communication process, the processing for receiving the first data is carried out. Further, the program of the seventh aspect of the invention makes the data processing system execute the conversion process. The conversion process converts the first data passing the processing for reception of the communication process to second data in parallel with the processing for reception by the communication process.
According to an eighth aspect of the invention, there is provided a data processing method comprising a first step of performing processing for receiving the first data and a second step performed in parallel with the processing for reception by the first step and converting the first data passing the processing for reception in the first step to the second data.
The mode of operation of the data processing method of the eighth aspect of the invention is as follows. In the first step, the processing for receiving the first data is carried out. Further, in the second step, the processing for converting the first data passing the processing for reception in the first step to the second data is carried out in parallel with the processing for reception by the first step.
According to a ninth aspect of the invention, there is provided a data processing system comprising a first means for performing processing for receiving the first data and a second means operating in parallel with the processing for reception by the first means and converting the first data passing the processing for reception of the first means to the second data.
The mode of operation of the data processing system of the ninth aspect of the invention is as follows. The first means performs the processing for receiving the first data. Further, the second means performs the processing for converting the first data passing the processing for reception by the first means to the second data in parallel with the processing for reception by the first means.
These and other objects and features of the present invention will become clearer from the following description of the preferred embodiments given with reference to the attached drawings, wherein:
Below, an explanation will be given of embodiments of the present invention. First, an explanation will be given of the correspondence between the configuration of the present invention and the configuration of the present embodiment.
Correspondence with First to Sixth Aspects of Invention
The data MXF_D shown in
An MXF processing program PRG1 of the present embodiment corresponds to the programs of the first aspect of the invention and the fourth aspect of the invention. Here, an MXF parser thread 43 activated by the execution of the MXF processing program PRG1 is the portion corresponding to the first aspect of the invention and the fourth aspect of the invention. The computer 4 corresponds to the data processing systems of the third aspect of the invention and the sixth aspect of the invention. The first routine of the first aspect of the invention, the first step of the second aspect of the invention, and the first means of the third aspect of the invention are realized by step ST1 shown in
The third routine of the first aspect of the invention, the third step of the second aspect of the invention, and the third means of the third aspect of the invention are realized by steps ST5, ST7, ST13, and ST14 shown in
Correspondence with Seventh to Ninth Aspects of Invention
The data MXF_D shown in
The MXF processing program PRG1 of the present embodiment corresponds to the program of the seventh aspect of the invention. The computer 4 corresponds to the data processing system of the present invention. The communication process of the seventh aspect of the invention and the first means of the ninth aspect of the invention correspond to an FTP thread 42 of
Next, a brief explanation will be given of the editing system 1 shown in
The MXF-MUX thread 44 shown in
Next, a brief explanation will be given of the editing system 1 shown in
Below, a detailed explanation will be given of the editing system 1 based on the drawings.
[FTP Server 3]
The FTP server 3 transmits the received MXF data to the computer 4 and the computer 5 based on the FTP via the network 2.
[Computer 4]
At the computer 4, for example, as shown in
[Computer 5]
At the computer 5, for example as shown in
The RAID 6 is for recording the MXF data, video file data VF, audio file data AF, and attribute file data PF. Here, the MXF data is defined by the MXF mentioned later. The video file data VF and the audio file data AF have formats able to be utilized (reproduced) by the edit processes 9a and 9b. The attribute file data PF indicates the attributes of the video file data VF and the audio file data AF.
Below, an explanation will be given of the MXF data, the video file data VF, and the audio file data AF.
[MXF_D]
Below, an explanation will be given of the data MXF_D of the MXF format.
The metadata META indicates attributes such as a coding method of the frame data (video data and audio data) stored in the body data BODY, a keyword concerning the content of the frame data, a title, identification data, edit data, preparation time data, and edit time data. Further, the metadata META includes, other than the above, for example a time code concerning the frame data, data for specifying dropped frame data, the above frame number (Duration), etc. The index table INDEXT indicates the data used for accessing the frame data in the body data BODY at a high speed when utilizing the data MXF_D.
The body data BODY includes a plurality of frame data FLD_1 to FLD_n. Here, n is any integer of 1 or more.
Each of the frame data FLD_1 to FLD_n, as shown in
The system data SYS indicates for example the format and type of the video data PIC and the audio data SOU. The system data SYS indicates for example the format of MXF_D (for example D10 standardized by SMPTE) and type in the format (for example IMX50-625, IMX40-625, IMX30-625, IMX50-525, IMX40-525, and IMX30-525 standardized by SMPTE). The system data SYS indicates, other than the above description, for example a coding system, a time code, identification data of the data MXF_D constituted by a “unique material identifier” (UMID), etc.
The video data PIC is the video data encoded by MPEG (Moving Picture Experts Group) or the like. The audio data SOU is the audio data encoded by AES (Audio Engineering Society) 3 or the like. In this way, the data MXF_D is stored in a state with the video data PIC and the audio data SOU interleaved. The footer data FOOTER includes the identification data indicating the terminal end of the data MXF_D.
Each of the above header partition pack HPP, the metadata META, the index data INDEXT, the frame data FLD_1 to FLD_n, and the footer data FOOTER is composed of one or more pack data PACK. Each pack data is composed of one or more KLV data.
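The KLV data mentioned above follows the Key-Length-Value convention used throughout MXF: a 16-byte key, a BER-encoded length, then the value. The following is a minimal sketch of reading one such triplet; the buffer contents are illustrative, and a real parser would also interpret the 16-byte SMPTE universal label.

```python
def parse_klv(buf: bytes, pos: int = 0):
    """Return (key, value, next_pos) for one KLV triplet starting at pos."""
    key = buf[pos:pos + 16]                  # 16-byte key (SMPTE universal label)
    pos += 16
    first = buf[pos]
    pos += 1
    if first < 0x80:                         # short-form BER length
        length = first
    else:                                    # long-form: low 7 bits give byte count
        nbytes = first & 0x7F
        length = int.from_bytes(buf[pos:pos + nbytes], "big")
        pos += nbytes
    value = buf[pos:pos + length]
    return key, value, pos + length

# Example: one triplet whose value is the 3 bytes b"abc"
packet = bytes(range(16)) + b"\x03" + b"abc"
key, value, nxt = parse_klv(packet)
```

Iterating `parse_klv` from position 0 until the end of the buffer walks every pack data in sequence, which is how the parse routines below visit each key (K).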
[VF]
Below, an explanation will be given of the video file data VF of the present embodiment.
The video common property data VCPD indicates the specific information common to all video formats. The video common property data VCPD indicates, for example, as shown in
The video unique property data VUPD indicates the property information unique to the format designated by the data Videoformat. The video unique property data VUPD indicates for example the type of the noncompression data, the type of the DV format, the type of the MPEG format, and the data type of MPEG. The video owner data VOD indicates the information concerning the application which owns the video file data VF at present. The dummy data DUMY is the data prescribed so that the size of the video file attribute data VFPD becomes 4096 bytes. The data V_SIZE indicates the data size of the video file data VIDEO. The video data VIDEO is the video data of a plurality of frames prescribed so that one frame becomes a whole multiple of 4096 bytes. By this, it becomes possible to access the video data VIDEO using 4096 bytes as the minimum unit. In the data MXF_D, the coding method of the video data VIDEO, the format of the compression method, etc. may be any method, format, etc.
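Since the paragraph above prescribes that one frame of the video data VIDEO becomes a whole multiple of 4096 bytes (the same size as the video file attribute data VFPD), frames can be accessed on 4096-byte boundaries. A sketch of the padding arithmetic, with hypothetical helper names, could look like this:

```python
BLOCK = 4096  # minimum access unit of the video file data VF

def padded_size(frame_bytes: int, block: int = BLOCK) -> int:
    """Round a frame's size up to a whole multiple of the 4096-byte unit."""
    return ((frame_bytes + block - 1) // block) * block

def pad_frame(frame: bytes, block: int = BLOCK) -> bytes:
    """Pad one frame with zero bytes so it ends on a 4096-byte boundary."""
    return frame + b"\x00" * (padded_size(len(frame), block) - len(frame))

padded = pad_frame(b"x" * 5000)  # a 5000-byte frame occupies two blocks
```

With every frame so aligned, the byte offset of frame i is simply i times the padded frame size, which is what makes 4096-byte random access possible.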
[AF]
Below, an explanation will be given of the audio file data AF of the present embodiment.
The audio property data APRD indicates the data length of the audio data AUDIO, the version of the audio file data AF, etc. The audio owner data AOD indicates the information concerning the application which has the audio file data AF at present. The channel status data CSD indicates the information concerning the channel of the audio data AUDIO. The dummy data DUMY is the data prescribed so that the size of the audio file attribute data AFPD becomes 4096 bytes. The data A_SIZE indicates the data length of the audio data AUDIO. The audio data AUDIO is audio data of a format such as for example the AES (Audio Engineering Society) 3.
[MXF Process 8]
[Thread Manager 41]
The thread manager 41 activates the MXF parser thread 43 and the MXF_MUX thread 44 in response to commands from for example the edit processes 9a and 9b or a request such as the operation signal from the operation unit 12 shown in
[FTP Thread 42]
The FTP thread 42 transfers data MXF_D by FTP with the FTP server 3. The FTP thread 42 outputs the data MXF_D received from the FTP server 3 to the MXF parser thread 43 by FTP. The FTP thread 42 transmits the data MXF_D input from the MXF_MUX thread 44 to the FTP server 3 by FTP.
[MXF Parser Thread 43]
The MXF parser thread 43 converts the data MXF_D received via the FTP thread 42 or the data MXF_D read out from the RAID 6 to the video file data VF and the audio file data AF and writes the same to the RAID 6. Further, the MXF parser thread 43 outputs the video data VIDEO and the audio data AUDIO extracted by parsing the data MXF_D input from the FTP thread 42 to the edit processes 9a and 9b in a format that the edit processes 9a and 9b can reproduce. In the present embodiment, the MXF parser thread 43 is not activated in a state where the MXF parse processing is not carried out. When the MXF parse processing is carried out, the thread manager 41 activates the MXF parser thread 43 in response to the commands from the edit processes 9a and 9b or a request such as the operation signal from the operation unit 12 shown in
Further, in the present embodiment, the FTP thread 42 and the MXF parser thread 43 are realized not by different programs, but by the same MXF process 8. Therefore, when the data MXF_D received by the MXF parser thread 43 via the FTP thread 42 is subjected to conversion processing, the MXF parse processing of the already received data MXF_D can be carried out in parallel with the reception processing of the data MXF_D by the FTP thread 42. Due to this, the processing time can be shortened compared with a case where the program for performing the FTP and the program for performing the MXF parse processing are separately prescribed and the MXF parse processing is carried out after the FTP processing with respect to the whole data MXF_D is ended.
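Because the FTP thread 42 and the MXF parser thread 43 run in the same process, reception and parsing can overlap as a producer-consumer pipeline. The sketch below illustrates that overlap with a queue between two threads; the chunk source and the "parse" step are stand-ins, since real code would read from an FTP connection and parse KLV data.

```python
import queue
import threading

def receive(chunks, q):
    for chunk in chunks:          # stands in for FTP reception of MXF_D
        q.put(chunk)
    q.put(None)                   # end-of-stream marker

def parse(q, parsed):
    while True:
        chunk = q.get()
        if chunk is None:
            break
        parsed.append(chunk.upper())   # stands in for MXF parse processing

q = queue.Queue()
parsed = []
rx = threading.Thread(target=receive, args=(["sys", "pic", "sou"], q))
px = threading.Thread(target=parse, args=(q, parsed))
rx.start(); px.start()
rx.join(); px.join()
```

The parser consumes already-received data while reception of the remainder continues, so the total time approaches the longer of the two stages rather than their sum.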
The SYS parse routine 61, the PIC parse routine 62, and the SOU parse routine 63 parse the data MXF_D shown in
The SYS parse routine 61 parses the data MXF_D shown in
Step ST1:
The SYS parse routine 61 parses the data MXF_D input from the FTP thread 42 or the data MXF_D read out from the RAID 6.
Step ST2:
The SYS parse routine 61 decides whether or not the key (K) of the KLV data forming part of the data MXF_D was detected by the parse of step ST1. Where it was detected, the processing routine proceeds to step ST3, while where it was not detected, the processing routine returns to step ST1.
Step ST3:
The SYS parse routine 61 decides whether or not the key (K) detected at step ST1 concerns the 14-th byte (at the predetermined position) of the system data SYS. When deciding that it concerns the 14-th byte, the processing routine proceeds to step ST4, while when it does not, the processing routine proceeds to step ST6.
Step ST4:
The SYS parse routine 61 decides whether or not the data MXF_D has a “D10” format (predetermined format) based on the 14-th byte. Where it is the “D10” format, the processing routine proceeds to step ST5, while where it is not, the processing is terminated or the processing concerning another format is carried out.
Step ST5:
When the type of the data MXF_D is IMX50-625, IMX40-625, or IMX30-625, the SYS parse routine 61, based on the 15-th byte of the system data SYS, sets for example the values shown in
Step ST6:
The SYS parse routine 61 decides whether or not the key (K) detected at step ST1 concerns the metadata META or the system data SYS. Where it does, the processing routine proceeds to step ST7, while where it does not, the processing routine proceeds to step ST9.
Step ST7:
The SYS parse routine 61 generates or updates the video file attribute data VFPD shown in
Step ST8:
The SYS parse routine 61 generates or updates the attribute file data PF by using XML etc. based on the data (V) corresponding to the key (K) detected at step ST1. Namely, the SYS parse routine 61 generates the attribute file data PF indicating the attributes of the video file data VF and the audio file data AF based on the metadata META in the data MXF_D or the attribute data described in the system data SYS.
Step ST9:
The SYS parse routine 61 decides whether or not the key (K) detected at step ST1 concerns the video data PIC. Where it does, the processing routine proceeds to step ST10, while where it does not, the processing routine proceeds to step ST12.
Step ST10:
The SYS parse routine 61 adds the data length (L) corresponding to the key (K) detected at step ST1 to the data V_SIZE to update the data V_SIZE.
Step ST11:
The SYS parse routine 61 updates (increments) the frame number data FN.
Step ST12:
The SYS parse routine 61 decides whether or not the key (K) detected at step ST1 concerns the audio data SOU. Where it does, the processing routine proceeds to step ST13, while where it does not, the processing is terminated or other processing is carried out.
Step ST13:
The SYS parse routine 61 sets for example the values shown in
Step ST14:
The SYS parse routine 61 adds the data length (L) corresponding to the key (K) detected at step ST1 to the data A_SIZE to update the A_SIZE.
Step ST15:
The SYS parse routine 61 decides whether or not the whole data MXF_D was parsed. When deciding that it was parsed, the processing routine is terminated, while where deciding it was not, the processing routine returns to step ST1.
By the processings of
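Steps ST1 to ST15 above amount to one pass over the KLV triplets that routes each key to the appropriate action. A condensed sketch of that decision flow follows; the `kind` tags and the (kind, length) input shape are illustrative stand-ins for the real SMPTE keys and KLV data.

```python
def sys_parse(triplets):
    """Walk (kind, data_length) pairs, one per KLV triplet, and accumulate
    the attribute state built up by steps ST1 to ST15."""
    state = {"V_SIZE": 0, "A_SIZE": 0, "FN": 0, "attrs": []}
    for kind, length in triplets:
        if kind in ("META", "SYS"):     # ST6 to ST8: attribute data
            state["attrs"].append(kind)
        elif kind == "PIC":             # ST9 to ST11: video data
            state["V_SIZE"] += length   # update data V_SIZE
            state["FN"] += 1            # increment frame number data FN
        elif kind == "SOU":             # ST12 to ST14: audio data
            state["A_SIZE"] += length   # update data A_SIZE
    return state                        # ST15: whole data MXF_D parsed

s = sys_parse([("SYS", 0), ("PIC", 100), ("SOU", 40), ("PIC", 120)])
```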
The PIC parse routine 62 parses the data MXF_D shown in
Step ST21:
The PIC parse routine 62 parses the data MXF_D.
Step ST22:
The PIC parse routine 62 decides whether or not the key (K) of the KLV data composing the data MXF_D was detected by the parse of step ST21. Where it was detected, the processing routine proceeds to step ST23, while where it was not, the processing routine returns to step ST21.
Step ST23:
The PIC parse routine 62 decides whether or not the key (K) detected at step ST21 concerns the video data PIC. When deciding so, the processing routine proceeds to step ST24, while where not, the processing routine returns to step ST21.
Step ST24:
The PIC parse routine 62 decodes the video data PIC by the key (K) detected at step ST21 by the decoding method corresponding to the coding method described in for example the system data SYS or metadata META corresponding to that.
Step ST25:
The PIC parse routine 62 uses the video data PIC obtained by the decoding of step ST24 as the video data VIDEO of the video file data VF shown in
Step ST26:
The PIC parse routine 62 decides whether or not all of the data MXF_D was parsed. When deciding all data MXF_D was parsed, the processing routine is terminated, while where not, the processing routine returns to step ST21.
The SOU parse routine 63 parses the data MXF_D shown in
Step ST31:
The SOU parse routine 63 parses the data MXF_D.
Step ST32:
The SOU parse routine 63 decides whether or not the key (K) of the KLV data composing the data MXF_D was detected by the parse of step ST31. Where it was detected, the processing routine proceeds to step ST33, while where it was not detected, the processing routine returns to step ST31.
Step ST33:
The SOU parse routine 63 decides whether or not the key (K) detected at step ST31 concerns the audio data SOU. When deciding it does, the processing routine proceeds to step ST34, while where deciding it does not, the processing routine returns to step ST31.
Step ST34:
The SOU parse routine 63 decodes the audio data SOU by the key (K) detected at step ST31 by the decoding method corresponding to the coding method described in for example the system data SYS corresponding to that.
Step ST35:
The SOU parse routine 63 uses the audio data SOU obtained by the decoding of step ST34 as the audio data AUDIO of the audio file data AF shown in
Step ST36:
The SOU parse routine 63 decides whether or not the data MXF_D was all parsed. When deciding that the data MXF_D was all parsed, the processing routine is terminated, while when not, the processing routine returns to step ST31.
[MXF_MUX Thread 44]
The MXF_MUX thread 44 generates the data MXF_D based on the attribute file data PF, the video file data VF, and the audio file data AF. In the present embodiment, the MXF_MUX thread 44 is not activated in the state where the MXF_MUX processing is not carried out. Where the MXF_MUX processing is carried out, the thread manager 41 activates the MXF_MUX thread 44 in response to commands from the edit processes 9a and 9b or the request of the operation signal etc. from the operation unit 12 shown in
Step ST41:
The MXF_MUX thread 44 reads out the video file data VF shown in
Step ST42:
The MXF_MUX thread 44 decides whether or not the format detected at step ST41 is “D10” (predetermined format). When deciding that the format is “D10”, the processing routine proceeds to step ST43, while when not, it performs the processing corresponding to a format other than “D10”.
Step ST43:
The MXF_MUX thread 44 sets the data indicating “D10” as the 14-th byte (data of the predetermined position) of the system data SYS of the frame data FLD_1 to FLD_n of the data MXF_D shown in
Step ST44:
The MXF_MUX thread 44 generates the header data HEADER shown in
Step ST45:
The MXF_MUX thread 44 encodes the video data VIDEO in the video file data VF by for example the coding method indicated by the video file attribute data VFPD (for example MPEG) to generate the video data PIC.
Step ST46:
The MXF_MUX thread 44 sets the “Channel Status”, “SamplingRate”, “AuxSampleBits”, and “WordLength” of the audio data SOU of the AES3 in the frame data FLD_1 to FLD_n shown in
Step ST47:
The MXF_MUX thread 44 generates the audio data SOU based on the data set at step ST46 and the audio data AUDIO in the audio file data AF.
Step ST48:
The MXF_MUX thread 44 generates the frame data FLD_1 to FLD_n based on the system data SYS, the video data PIC, the audio data SOU generated at the steps ST43 to ST47, and newly generated data AUX. Further, the MXF_MUX thread 44 generates the data MXF_D comprised of the header data HEADER generated at step ST44, the generated frame data FLD_1 to FLD_n, and newly generated footer data FOOTER and writes it to the RAID 6.
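Steps ST41 to ST48 above interleave one encoded video frame, its audio frame, and system data into each frame data, then wrap the sequence in header and footer data. A simplified sketch of that multiplexing, with all structures reduced to illustrative dictionaries, could look like this:

```python
def mux(video_frames, audio_frames, fmt="D10"):
    """Pair each video frame PIC with its audio frame SOU and system data
    SYS (ST43, ST45 to ST48), under header data generated per ST41 to ST44."""
    header = {"format": fmt}                           # header data HEADER
    frames = [
        {"SYS": {"format": fmt}, "PIC": v, "SOU": a}   # frame data FLD_i
        for v, a in zip(video_frames, audio_frames)
    ]
    footer = {"end": True}                             # footer data FOOTER
    return {"HEADER": header, "BODY": frames, "FOOTER": footer}

mxf = mux([b"v0", b"v1"], [b"a0", b"a1"])
```

The essential point is the per-frame pairing: each frame data carries its own system data plus exactly one unit of video and one of audio, which is what keeps the output self-describing frame by frame.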
In the present embodiment, the MXF_MUX thread 44 directly receives the video file data VF and the audio file data AF and performs the processing as shown in
[Edit Processes 9a and 9b]
Step ST51:
The command thread 51 outputs for example the parse command PARSE COMD in accordance with the operation signals from the operation units 12 and 13 shown in
Step ST52:
The thread manager 41 outputs an acknowledgement ACK including an interface name (for example pipe name) used for the data transfer with the command thread 51 to the command thread 51. Thereafter, the transfer of the data and request between the thread manager 41 and the command thread 51 is carried out by designating the pipe name.
Step ST53:
The status thread 52 outputs designation information (for example a URL address etc.) of the data MXF_D as the object of the parse, the identification data ID, the password PASS, etc. of the user of the edit processes 9a and 9b to the MXF parser thread 43.
Step ST54:
The MXF parser thread 43 performs the parse processing mentioned before based on
Step ST55:
The MXF parser thread 43 outputs the end code to the status thread 52 when completing the parse processing for all frame data FLD_1 to FLD_n in the data MXF_D. For example, the edit process 9a on the computer 4 shown in
On the other hand, the edit process 9b on the computer 5 shown in
[Reproduction Process 80]
The computer 4 or the computer 5 executes for example the predetermined reproduction program to activate the reproduction process 80 shown in
The video render routine 72 performs the processing for reproduction of the video data VIDEO input from the PIC parse routine 62. By this, an image in accordance with the video data VIDEO is displayed on the display units 13 and 23. In this case, the video render routine 72 performs synchronization processing with the audio render routine 74 so that, for example, the reproduced image and sound are synchronized. The video render routine 72 decides whether or not, for example, the reproduction processing time of the video data VIDEO corresponding to the previous frame data was longer than the time determined in advance for reproducing one frame data. When deciding it is longer, as shown in
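The drop decision described above can be sketched as follows; the frame period, the class name, and the convention of recording issued requests in a list are assumptions made only for this illustration:

```python
class VideoRender:
    """Issues a drop request DROP_REQ when the previous frame took longer
    to reproduce than one frame period (the advance-determined time)."""

    def __init__(self, frame_period: float = 1.0 / 30.0):
        # 1/30 s is an assumed frame period, not taken from the source.
        self.frame_period = frame_period
        self.drop_requests = []

    def after_frame(self, elapsed: float) -> bool:
        # Called after each frame's reproduction with the measured time.
        if elapsed > self.frame_period:
            self.drop_requests.append("DROP_REQ")
            return True
        return False
```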
The UI audio channel selection routine 73 performs processing for selecting channels not to be reproduced in the reproduction thread 54 when audio data of a plurality of channels prescribed by the AES3 is included in the audio data SOU of, for example, the frame data FLD_1 to FLD_n of the data MXF_D. The UI audio channel selection routine 73 outputs an unrequired channel designation request CH_REQ specifying the channels designated by the user as not to be reproduced to the SOU parse routine 63.
The audio render routine 74 performs the processing for reproduction of the audio data AUDIO input from the SOU parse routine 63. By this, sound is output in accordance with the audio data AUDIO.
Below, an explanation will be given of the processing of the MXF parser thread 43 of the MXF process 8 for outputting the video data VIDEO and the audio data AUDIO to the reproduction thread 83 in response to commands from the edit processes 9a and 9b or requests based on the operation signals etc. from the operation unit 12 shown in
Step ST71:
The SYS parse routine 61 of the MXF parser thread 43 shown in
Step ST72:
The SYS parse routine 61 moves, for example, a seek pointer SP indicating the reading position (address) of the data MXF_D recorded in the RAID 6 to the address corresponding to the frame number indicated by the seek request SEEK_REQ received at step ST71. The seek pointer SP is updated to the address of the frame data to be read out next whenever frame data of the data MXF_D is read out. In this way, by moving the seek pointer SP ahead of the processing of the SYS parse routine 61, the PIC parse routine 62, and the SOU parse routine 63, the seek operation becomes possible without exerting an influence upon the processing of the PIC parse routine 62 and the SOU parse routine 63, and the configuration of the MXF parser thread 43 can be simplified.
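Under the assumption of a fixed-length header and fixed-length frame data (an assumption made only for this sketch; real MXF frame data need not be fixed-size), the seek pointer arithmetic could look like:

```python
class SeekPointer:
    """Byte offset into the data MXF_D: a header of known length followed
    by equally sized frame data FLD_1..FLD_n (assumed layout)."""

    def __init__(self, header_len: int, frame_len: int):
        self.header_len = header_len
        self.frame_len = frame_len
        self.offset = header_len  # start of the first frame data

    def seek(self, frame_number: int) -> int:
        # Move to the start of the requested frame (0-based numbering
        # is an assumption of this sketch).
        self.offset = self.header_len + frame_number * self.frame_len
        return self.offset

    def advance(self) -> int:
        # Called whenever one frame data has been read out, so the
        # pointer always names the frame data to be read next.
        self.offset += self.frame_len
        return self.offset
```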
Step ST73:
The SYS parse routine 61 reads out the system data SYS of the frame data FLD recorded at the address on the RAID 6 indicated by the seek pointer SP and performs the parse processing.
Step ST74:
The PIC parse routine 62 reads out the video data PIC continuing from the system data read out at step ST73 from the RAID 6 and performs the parse processing.
Step ST75:
The PIC parse routine 62 decides whether or not a drop request DROP_REQ was input from the video render routine 72. When deciding that the drop request was not input, the processing routine proceeds to step ST76, while when it was input, the processing routine proceeds to step ST77.
Step ST76:
The PIC parse routine 62 decodes the video data PIC read out at step ST74 by the decoding method corresponding to the coding method indicated by the system data SYS read out at step ST73 to generate the video data VIDEO and outputs this to the video render routine 72. Namely, when receiving the drop request DROP_REQ, the PIC parse routine 62 skips step ST76 by the decision at step ST75 and thereby suspends the output of 1 frame's worth of the video data VIDEO to the video render routine 72. Note that it is also possible to suspend the output of 2 frames' worth or more of the video data VIDEO.
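Steps ST75 to ST76 on the video side can be sketched as a loop that skips decoding for frames covered by a pending drop request; representing the pending requests as a set of frame indices, and `bytes.decode` as a stand-in for the PIC-to-VIDEO decoding, are assumptions of this sketch:

```python
def parse_frames(frames: list[bytes], drop_requested: set[int]) -> list[str]:
    """For each frame data, decode the video data PIC and emit VIDEO
    unless a drop request is pending for that frame, in which case the
    output of that frame's VIDEO is suspended."""
    emitted = []
    for i, pic in enumerate(frames):
        if i in drop_requested:
            continue  # suspend output of 1 frame's worth of VIDEO
        emitted.append(pic.decode())  # stand-in for real PIC decoding
    return emitted
```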
Step ST77:
The SOU parse routine 63 reads out the audio data SOU continuing from the video data PIC read out at step ST74 and performs the parse processing.
Step ST78:
The SOU parse routine 63 decides whether or not the unrequired channel designation request CH_REQ was input from the UI audio channel selection routine 73. When deciding that it was input, the processing routine proceeds to step ST79, while when not, the processing routine proceeds to step ST80.
Step ST79:
The SOU parse routine 63 separates the audio data SOU read out at step ST77 into a plurality of channels of audio data AUDIO, selects and decodes the audio data AUDIO of the channels which are not designated by the unrequired channel designation request CH_REQ among them, and outputs the same to the audio render routine 74. In this case, the SOU parse routine 63 decodes the audio data AUDIO by the decoding method corresponding to the coding method indicated by the system data SYS read out at step ST73.
Step ST80:
The SOU parse routine 63 outputs the audio data AUDIO of all of the plurality of channels composing the audio data SOU read out at step ST77 to the audio render routine 74. Note that, in the processing shown in
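Steps ST78 to ST80 can be sketched as deinterleaving the audio data SOU into per-channel AUDIO and filtering out the unrequired channels; the one-byte-per-channel interleaving assumed here is for illustration only and is not the actual AES3 layout:

```python
def deinterleave(sou: bytes, n_channels: int) -> dict[int, bytes]:
    # Assumed layout: one byte per channel per sample, interleaved.
    return {ch: sou[ch::n_channels] for ch in range(n_channels)}

def select_channels(channels: dict[int, bytes],
                    unrequired: set[int]) -> dict[int, bytes]:
    """Keep only the channels not named in the unrequired channel
    designation request CH_REQ; with no request, all channels pass."""
    return {ch: data for ch, data in channels.items()
            if ch not in unrequired}
```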
Step ST91:
The command thread 81 outputs, for example, the play command PLAY_COMD in response to the operation signals from the operation units 12 and 13 shown in
Step ST92:
The thread manager 41 outputs an acknowledgement ACK including the name of the pipe as the interface used for the communication to the command thread 81 in response to the play command PLAY_COMD.
Step ST93:
The status thread 82 outputs the designation information (for example the URL, address, etc.) of the data MXF_D to be reproduced, together with the identification data ID, password PASS, etc. of the user of the reproduction process 80, to the MXF parser thread 43.
Step ST94:
The status thread 82 outputs the reproduction request R_REQ including the frame numbers of the frame data FLD_1 to FLD_n to be reproduced next to the MXF parser thread 43 in accordance with the progress of the reproduction processing.
Step ST95:
The MXF parser thread 43 performs the parse processing explained by using
Step ST96:
The status thread 82 outputs a play end request PLAY_END to the MXF parser thread 43.
Step ST97:
The MXF parser thread 43 terminates the parse processing in response to the play end request PLAY_END. Then, the MXF parser thread 43 outputs the end code to the status thread 82.
In this way, since the MXF parser thread 43 has the function of receiving the play end request PLAY_END from the status thread 82, when the MXF process 8 and the edit process 9a are operating on the same computer, like the computer 4, it can give priority to the edit processing by the edit process 9a and efficiently perform the editing work.
Below, an explanation will be given of an example of the main operation of the editing system 1 according to the first to sixth aspects of the invention.
[Case where Video File Data VF etc. is Generated from the Data MXF_D]
The MXF parser thread 43 of the MXF process 8 operating on the computer 4 generates the video file data VF comprised of the video file attribute data VFPD generated by the SYS parse routine 61 as mentioned above based on
[Case where the Data MXF_D is Generated from the Video File Data VF etc.]
The MXF_MUX thread 44 of the MXF process 8 operating on the computer 4 generates the data MXF_D from the video file data VF etc. by the processing shown in
Next, an explanation will be given of an example of the main operation of the editing system 1 according to the seventh to ninth aspects of the invention. The MXF process 8 operating on the computer 4, as shown in
As explained above, according to the computer 4, the video file data VF shown in
Further, according to the computer 4, by converting the data MXF_D to the data which can be reproduced by the reproduction process 80 by the MXF parser thread 43, the reproduction processing by the reproduction process 80 based on the data MXF_D becomes possible. Further, according to the computer 4, as shown in
Further, according to the computer 4, by the MXF parser thread 43 performing the seek processing by receiving the seek request SEEK_REQ as shown in
The present invention is not limited to the above embodiments. For example, in the above embodiments, the video file data VF and the audio file data AF were exemplified as the formats which could be processed by the edit processes 9a and 9b, but the format is not particularly limited so far as it is a format which can be processed on a general computer; other than the above, a format such as the RGB format or YUV format can be used for the video data too. Further, in the above embodiments, the data MXF_D was exemplified as the processed data of the present invention, but the present invention can use data other than the data MXF_D as the processed data too so far as it is data storing, mixed together, a plurality of video data, a plurality of audio data, and a plurality of first attribute data indicating attributes of the video data and the audio data.
According to the first to third aspects of the invention, a program capable of individually generating video file data and audio file data from data storing, mixed together, the video data, the audio data, and the attribute data, a method of the same, and a system of the same can be provided. According to the fourth to sixth aspects of the invention, a program capable of generating data storing, mixed together, the video data, the audio data, and the attribute data from the video file data and the audio file data, a method of the same, and a system of the same can be provided. Further, according to the seventh to ninth aspects of the invention, a program capable of shortening the processing time when received data to be processed is subjected to conversion processing, a method of the same, and a system of the same can be provided.
INDUSTRIAL APPLICABILITY
The present invention can be applied to a system for converting the format of data concerning video data and audio data.
Claims
1-10. (canceled)
11. A program for making a data processing system execute:
- a first routine for specifying a format based on video attribute data included in the video file data and
- a second routine for generating data composed of a plurality of module data each including module attribute data defined corresponding to each of a plurality of video data included in the video file data and indicating the format specified by the first routine, a single unit of the video data, and a single unit of audio data corresponding to the video data among the plurality of audio data included in the audio file data.
12. A program as set forth in claim 11, wherein:
- said video data is one frame's worth of video data,
- said program further includes a third routine for specifying a number of frames defined by said plurality of video data based on the total data length of said plurality of video data included in said video file data and the data length included in said video file data, and
- said second routine generates said data comprised of attribute data showing the number of frames specified by said third routine and said plurality of module data.
13. A program as set forth in claim 12, wherein said second routine generates said attribute data further indicating a time code of said video data based on attribute file data indicating attributes of said video file data and said audio file data.
14. A program as set forth in claim 11, wherein said second routine generates said data including at least one attribute data of a compression method of said video data and audio data, a key word, title, identification data, edit content, preparation time, and edit time and said plurality of module data.
15. A program as set forth in claim 11, wherein said second routine generates each of said module attribute data, said video data, and said audio data included in each of said plurality of module data by said plurality of unit data including identification data for identifying said unit data and the data proper.
16. A data processing method comprising:
- a first step of specifying a format based on video attribute data included in the video file data and
- a second step of generating data composed of a plurality of module data each including module attribute data defined corresponding to each of a plurality of video data included in the video file data and indicating the format specified in the first step, a single unit of the video data, and a single unit of audio data corresponding to the video data among the plurality of audio data included in the audio file data.
17. A data processing system comprising:
- a first means for specifying a format based on video attribute data included in the video file data and
- a second means for generating data composed of a plurality of module data each including module attribute data defined corresponding to each of a plurality of video data included in the video file data and indicating the format specified by the first means, a single unit of the video data, and a single unit of audio data corresponding to the video data among the plurality of audio data included in the audio file data.
18-28. (canceled)
Type: Application
Filed: Aug 3, 2007
Publication Date: Jan 1, 2009
Inventor: Shin Kimura (Kanagawa)
Application Number: 11/890,158
International Classification: H04N 7/173 (20060101);