Method and apparatus for video communication over a limited bandwidth medium

A method and system for constructing at least one intermediate frame of an image between first and second frames in a system such as a wired or wireless telephone network. The system identifies a plurality of points having at least one related characteristic in at least one of the first and second frames. The system determines if at least one of the plurality of points has changed its position between the first frame and the second frame. The system associates the at least one of the plurality of points that has changed its position with at least a first pixel in the first frame and a second pixel in the second frame. The system determines a relationship between a position of the first pixel and a position of the second pixel.

Description
CROSS REFERENCE TO RELATED APPLICATION

[0001] This application is a continuation-in-part of co-pending U.S. patent application Ser. No. 09/927,132 entitled “METHOD AND APPARATUS FOR VIDEO COMMUNICATION OVER A LIMITED BANDWIDTH MEDIUM” and filed on Aug. 10, 2001, which claims priority to U.S. Provisional Application No. 60/224,254 entitled “SYSTEM, APPARATUS AND METHOD FOR TRANSMISSION OF VIDEO ACROSS LIMITED BANDWIDTH TRANSMISSION MEDIA” and filed on Aug. 10, 2000. The entire disclosure of the foregoing filed applications is hereby incorporated by reference.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] The invention relates generally to a communications system that communicates data over a narrow or limited bandwidth medium. More particularly, the invention relates to a method and apparatus for transmission of video over narrow-band transmission channels, such as plain old telephone service (POTS) lines.

[0004] 2. Description of the Related Art

[0005] Video telephones have existed within the marketplace for several years with limited commercial success. The lack of success of videophones is attributable in part to the fact that they do not work very well. It has been problematic to transmit sound and video of acceptable quality across existing telephone lines.

[0006] Some available video conferencing systems produce acceptable video and audio quality, and have met with some commercial success. These video conferencing systems depend on wide bandwidth communication connections such as leased lines, ISDN (Integrated Services Digital Network), DSL (Digital Subscriber Lines) and the like. The high bandwidth is necessary to produce acceptable audio and video quality.

[0007] The available bandwidth on standard telephone lines has been too low to support the industry standard of 30 frames per second video. Currently, using compression, the best performance obtainable on standard U.S. telephone lines is approximately 15 video frames per second in one direction. Because 15 frames per second is below the persistence-of-vision threshold of the human eye, which is generally about 24 frames per second, the result is jerky, unacceptable video. Even with expensive compression hardware, the quality of the resultant video may be unacceptable.

[0008] There is therefore a need for video communications systems that do not depend on expensive compression hardware yet yield an acceptable video display when video is transmitted bi-directionally across standard analog telephone lines.

SUMMARY OF THE INVENTION

[0009] In one embodiment, the invention provides a method of constructing at least one intermediate frame of an image between first and second frames. The method comprises identifying a plurality of points having at least one related characteristic in at least one of the first and second frames. The method further comprises determining if at least one of the plurality of points has changed its position between the first frame and the second frame. The method further comprises associating the at least one of the plurality of points that has changed its position with at least a first pixel in the first frame and a second pixel in the second frame. The method further comprises determining a relationship between a position of the first pixel and a position of the second pixel.

[0010] In another embodiment, the invention provides a system for constructing at least one intermediate frame of an image between first and second frames. The system comprises an identifier circuit configured to identify a plurality of points having at least one related characteristic in at least one of the first and second frames. The system further comprises a compare circuit configured to determine if at least one of the plurality of points has changed its position between the first frame and the second frame. The system further comprises a processing circuit configured to associate the at least one of the plurality of points that has changed its position with at least a first pixel in the first frame and a second pixel in the second frame. The processing circuit is further configured to determine a relationship between a position of the first pixel and a position of the second pixel.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a block diagram of one embodiment of a system for video communication over a limited bandwidth medium.

[0012] FIG. 2 is an illustration of a video stream generated at one end of and for transmission over the limited bandwidth video communication system of FIG. 1.

[0013] FIG. 3 is an illustration of the frames selected from the video stream illustrated in FIG. 2 for transmission over the video communication system of FIG. 1.

[0014] FIG. 4 is an illustration of the video frames displayed at a destination, including the transmitted frames from FIG. 3 and reconstructed frames.

[0015] FIG. 5 is a flow diagram illustrating one embodiment of a method of reconstructing intermediate frames at a destination transceiver based on a limited set of received frames.

[0016] FIG. 6 is a flow diagram illustrating one embodiment of a method of identifying elements of a changing object in a video stream.

[0017] FIGS. 7A-B are a flow diagram illustrating one embodiment of a method of identifying elements as border elements of an object, and outlining and positioning the object in a video frame.

[0018] FIG. 8 is a flow diagram illustrating one embodiment of a method of outlining an object in accordance with the method illustrated in FIG. 7.

[0019] FIG. 9A is an illustration of four video frames for an example of a ball moving across a solid background.

[0020] FIG. 9B is an illustration of the frame representations corresponding to the difference matrices for the video frames of FIG. 9A.

[0021] FIG. 10A is a more detailed illustration of the four video frames of FIG. 9A identifying points for use in determining the motion equation for the ball. FIG. 10B is a more detailed illustration of the frame representations of FIG. 9B identifying points for use in determining the motion equation for the ball.

[0022] FIGS. 11A-11D illustrate one embodiment of a method of reconstructing video frames for video stream output at the video communication destination.

[0023] FIG. 12 is a block diagram of one embodiment of a source transceiver circuit.

[0024] FIG. 13 is a block diagram of one embodiment of a destination transceiver circuit.

DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

[0025] Embodiments of the invention will now be described with reference to the accompanying Figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.

[0026] FIG. 1 is a block diagram of one embodiment of a video communication system 20 configured to communicate video over a limited bandwidth medium. The system 20 comprises a video source 22 coupled to a source transmitter/receiver (transceiver) 24. The video source 22 may be, for example, an image capture device such as a camera, a video signal generator such as a digital video disc (DVD) player, video cassette recorder (VCR), or any device (e.g., a computer) having video output, or a communication station receiving a video signal from a remote source. The source transceiver 24 is configured to communicate with a destination transceiver 26 via a plain old telephone service (POTS) 28, also referred to as a public switched telephone network (PSTN). As will be appreciated by those skilled in the art, the transceivers 24, 26 may also be configured for wireless communication. For the convenience of description, some elements of the video communication system 20 are referred to herein with the labels “source” and “destination” to illustrate the direction of video transmission in this example only; these labels, however, in no way limit the scope of the invention.

[0027] The destination transceiver 26 is coupled to a video display 34, which is configured to display video received at the destination transceiver 26. The system may also include an additional video source 36 coupled to the destination transceiver 26 and the display 34, and an additional display 38, coupled to the video source 22 and the source transceiver 24.

[0028] In one embodiment, the video source 22 comprises an image capture device capturing images at a rate of, for example, at least 30 frames per second. The source transceiver 24 communicates with the destination transceiver 26 to test the transmission line for the highest acceptable frame transmission rate, and subsequently transmits images to the destination transceiver 26 at a rate accepted by the transmission line 28, such as 1 frame per second. In one embodiment, the frame transmission rate is maximized while guaranteeing an acceptable data transmission accuracy, in which case the frame transmission rate may be greater than one frame per second. The source transceiver 24 also communicates with the destination transceiver 26 when verification or validation is requested by the destination transceiver.

[0029] As stated above, the destination transceiver 26 communicates with the source transceiver 24 to test the transmission line 28 to determine the highest acceptable frame transmission rate. If the quality of the data received by the destination transceiver 26 becomes unacceptable, then the destination transceiver 26 coordinates with the source transceiver 24 to dynamically change the frame transmission rate. As described in further detail below, the destination transceiver 26 reconstructs a video stream from the images or frames received from the source transceiver 24, validating and verifying any part of the video stream reconstruction process with the source transceiver 24 as needed. In one embodiment, the reconstructed video stream includes up to 30 frames per second for display on the display 34.

[0030] In reference to FIG. 2, the source transceiver 24 receives a video stream 100 comprising a plurality of frames from the video source 22. In this illustrative embodiment, the source transceiver 24 selects the first of every 30 frames (unshaded Frame 1, Frame 2, Frame 3, and Frame 4) from the video stream for transmission. The frames interspersed between the transmitted frames, referred to herein as intermediate frames, are stored at the source transceiver for a predefined time period, such as 30 seconds. Thus, as used herein, an “intermediate” frame refers not only to a frame created in the middle of two transmitted frames, but also to any frame created between the two transmitted frames. In one embodiment, the predefined storage period for the intermediate frames is at least the length of time the destination transceiver 26 needs to query the source transceiver 24 for information regarding the intermediate frames.

[0031] As illustrated in FIG. 3, the source transceiver 24 transmits the selected frames 105, comprising Frame 1, Frame 2, Frame 3, and Frame 4, to the destination transceiver 26. The source transceiver 24 may continue to transmit frames in this format for some length of time. When the destination transceiver 26 identifies a problem or difficulty in reconstruction, the destination transceiver 26 requests information from the source transceiver 24. The evenly dispersed frames 105 of video information are received at the destination transceiver 26 and used to reconstruct 29 frames between each received frame as described in detail below. FIG. 4 illustrates the combination of the reconstructed frames with the received frames (Frame 1, Frame 2, Frame 3, Frame 4), which form a continuous video stream 110 for display on the video display 34. It is desirable to have the reconstructed video stream 110 resemble the original input video stream 100 (see FIG. 2).

[0032] I. Adjustment of Frame Transmission Rate

[0033] The channel bandwidth used by the video communication system 20 can be reduced by minimizing the amount of information transmitted between the transceivers 24, 26. For example, a reduction in the frame transmission rate over the transmission line 28 corresponds to a reduced use of bandwidth.

[0034] In one embodiment of frame transmission rate adjustment, the frames of information sent from the source transceiver 24 are compared to determine the overall percentage of changing pixels from one frame to the next. The destination transceiver 26 compares the individual pixels of one frame to the individual pixels of another frame to determine whether the information for any pixel changes between frames, or whether the pixel information changes by more than a predefined threshold. In one embodiment, the threshold is a percentage of total pixels having different pixel information from one frame to the next, and may be in the range from 40% to 80%. For example, the video frames sent from the source transceiver 24 (Frame 1, Frame 2, Frame 3, Frame 4) are identified as being in over 60% change when over 60% of the pixels in a frame have changing pixel information from one frame to the next (Frame 1 to Frame 2, for example). In the event the pixel change threshold for transmitting one frame per second is 60%, then the source transceiver 24 will continue to send the first frame of every 30 to the destination transceiver 26 in this example. However, once the total change rate drops below 60%, the destination transceiver 26 increasingly uses the information in the destination buffers, and the source transceiver 24 may send video frames less frequently than one frame per second. For example, only a small portion of a series of video frames may be changing over time, such as a blinking light making up less than 60% of the total pixels in a video frame. In this case, the source transceiver 24 may reduce the frame transmission rate to below one frame per second and send only information regarding the pixels with changing information from one frame to the next. Thus, the source transceiver 24 may send full frames only when necessary or desired, and may otherwise send only object and background information, keeping this information separate and linked to its associated frames and objects. As used herein, the term “object” is not necessarily limited to a “physical” object as viewed by the eye of an observer, but refers to two or more (i.e., a group of) pixels that have common or related characteristics, e.g., two or more pixels that undergo the same position change or rate of motion, the same rate of rotation, and/or the same rate of content change. If desired, a complete frame, background, or object may be sent intact; the video stream reconstruction process is substantially the same regardless of the manner in which the frames are generated at the destination transceiver 26.
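
For illustration only, the following is a minimal Python sketch of the pixel-change test described above. The numpy array representation and the helper names fraction_changed and choose_frame_interval are assumptions made for this example, not part of the system as claimed.

```python
import numpy as np

def fraction_changed(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Fraction of pixels whose 8-bit value differs between two frames."""
    return float(np.mean(frame_a != frame_b))

def choose_frame_interval(frames, threshold=0.60, base_interval=30):
    """Keep sending one full frame per second (every 30th captured frame)
    while the change rate exceeds the threshold; below the threshold the
    source may send full frames less often, illustrated here by simply
    doubling the interval between transmitted full frames."""
    change = fraction_changed(frames[0], frames[base_interval])
    if change > threshold:
        return base_interval        # e.g., one full frame per second
    return 2 * base_interval        # below threshold: reduce full-frame rate
```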

[0035] II. Categorization of Frame Contents

[0036] In one embodiment, the contents of received frames are categorized according to a variety of criteria. In one embodiment, all contents of a frame (i.e., individual pixels) are first categorized as either part of the background or part of an object. Second, an object is categorized as either a stationary object or a moving object. Third, a moving object is categorized by its type of motion, where the motion includes (1) translation, (2) rotation, or (3) spinning, or any combination of these three types of motion. For a moving object, the system determines motion equations to determine the location of the object in intermediate frames. For a stationary object, the system determines the pixel value of each changing element in the object in intermediate frames. Each object will have either a motion equation or a stationary equation.

[0037] According to one embodiment of a method of categorizing the contents of a series of frames, the destination transceiver 26 first locates stationary elements in the series of frames, followed by identification of the pixels in motion. For illustration, a four (4) frame comparison is used to find the stationary elements and pixels in motion. Each frame comprising a plurality of pixels is mapped mathematically as a corresponding matrix XN, wherein the subscript designation “N” for each matrix X corresponds to the frame number. The physical location of elements xij in the matrix correspond to the physical location of a pixel in a video frame. The subscripts “i” and “j” correspond to the row and column location, respectively, of the element x in the matrix X. The numerical value of an element in the matrix is the value of the pixel in the corresponding frame, and may include, for example, color information such as values corresponding to the level of red, green, and/or blue. The following matrices X1 through X4 are defined for a four frame comparison, wherein the designated frame (e.g., Frame 1) is the first frame in a series of 30 frames:

Frame 1=X1 (matrix of pixels)  (1)

Frame 2=X2 (matrix of pixels)  (2)

Frame 3=X3 (matrix of pixels)  (3)

Frame 4=X4 (matrix of pixels) (4)
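
As a concrete illustration of the frame-to-matrix mapping in equations (1)-(4), the sketch below builds numpy matrices X1 through X4 from raw pixel bytes. The QCIF 144 by 176 grid is borrowed from the example of Section VI, and frame_to_matrix is a hypothetical helper name; random data stands in for real captured frames.

```python
import numpy as np

ROWS, COLS = 144, 176   # QCIF grid used in the example of Section VI

def frame_to_matrix(pixels) -> np.ndarray:
    """Map a frame's pixels (row-major byte values) to a matrix X_N whose
    element x_ij is the pixel value at row i, column j."""
    return np.asarray(pixels, dtype=np.uint8).reshape(ROWS, COLS)

# Stand-in data for Frames 1-4; real frames would come from the video source.
X1, X2, X3, X4 = (frame_to_matrix(np.random.randint(0, 256, ROWS * COLS))
                  for _ in range(4))
```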

[0038] III. Overview of Method of Reconstructing Video Frames Using Limited Set of Frames

[0039] One embodiment of a method 50 of reconstructing video frames using a limited set of frames is illustrated in the flow diagram of FIG. 5. In one embodiment, the method 50 is performed by the destination transceiver 26, but could be performed by the source transceiver 24, or a combination of the source and destination transceivers 24, 26. The method 50 begins in a step 55 and proceeds to a step 60, wherein the destination transceiver identifies an object or plurality of objects in a set of frames (e.g., Frame 1, Frame 2, Frame 3, and Frame 4). In this embodiment, the source transceiver sends the identified frames to the destination transceiver. By identifying one or more objects in step 60, the remaining pixels or elements in the frame not identified as part of an object are categorized as part of the stationary background. Following step 60, the method 50 proceeds to a step 65, wherein the destination transceiver 26 determines whether an identified object is in motion. If the answer to the determination in step 65 is no, i.e., the object is not in motion, then the method 50 proceeds to step 70 where the destination transceiver 26 categorizes the object as a stationary object. If the answer to the determination in step 65 is yes, i.e., the object is in motion, then the method 50 proceeds to step 75 where the destination transceiver 26 categorizes the object as an object in motion and determines a motion equation for the object. Steps 65 and 70 or 75 are repeated for all objects identified in step 60.

[0040] For objects identified as stationary objects as determined in step 70, the destination transceiver 26 determines the pixel values (e.g., color component information) for the stationary objects in intermediate frames. The destination transceiver may use one of several principles for determining a pixel value in an intermediate frame for the stationary object. In one embodiment, the destination transceiver may use the same value of the pixels of the stationary object as found (a) in Frame 1, (b) in Frame 2, and/or (c) by deriving an average of the pixel information for each pixel of the stationary object in Frame 1 and Frame 2, as sketched below. In another embodiment, the destination transceiver may request pixel information for one or more pixels of the stationary object from the source transceiver. For pixels in the background, the pixel values are substantially the same in Frame 1, the intermediate frame, and Frame 2.
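
As a small sketch of option (c), pixel averaging for a stationary object might look like the following, assuming pixel information is available as separate red, green, and blue component values (averaging the packed byte of Section VI directly would mix components). The helper name is hypothetical.

```python
def average_stationary_pixel(p_frame1, p_frame2):
    """Average per-component pixel values, e.g., (r, g, b) tuples, to derive
    an intermediate-frame value for a stationary-object pixel."""
    return tuple((a + b) // 2 for a, b in zip(p_frame1, p_frame2))

# Example: a pixel halfway between two nearby shades.
assert average_stationary_pixel((4, 2, 0), (6, 2, 2)) == (5, 2, 1)
```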

[0041] For objects identified as objects in motion and having a corresponding motion equation as determined in step 75, the method 50 proceeds to a step 85. In step 85, the destination transceiver executes the motion equations, thereby determining the locations of the objects and the pixel values for the objects in intermediate frames. Following step 85, the destination transceiver uses the determinations from steps 60, 75, and 85 to reconstruct intermediate frames in a step 90. In step 90, pixels or elements in a set of frames not identified as objects in step 60 are mapped across the intermediate frame as part of a stationary background. Also in step 90, the pixel values for stationary objects determined in step 70, along with the object location and pixel value information for objects in motion, are used to reconstruct intermediate frames. Following reconstruction of intermediate frames in step 90, the method 50 proceeds to end in a step 95. Each of the steps in the method 50 is described in further detail below.

[0042] IV. Background Elements

[0043] All pixels in a frame that are not part of an object are defined as being on a background plane, and are therefore not objects to be dealt with mathematically. Background planes may have color and/or shade changes, so color and/or shade changes are not always reliable parameters for defining objects. Nevertheless, this information is used to develop basic information about the video frames being communicated, and such information may therefore be communicated to the destination transceiver.

[0044] In order to find the stationary or background elements in the stream of frames from Frame 1 to Frame 4, the destination transceiver 26 compares (e.g., subtracts from one another) the frame matrices (X1, X2, X3, and X4) corresponding to each of the four frames to obtain a plurality of difference matrices ΔXNN. The comparison of Frame 1 to Frame 2 is represented by the difference matrix ΔX12, for example. In the present embodiment, the source transceiver 24 transmits one frame every second, such that the time between transmission of Frame 1 and Frame 2 is one second, and the time between transmission of Frame 1 and Frame 4 is three seconds. Each difference matrix therefore also has a corresponding time differential ΔTNN, wherein the time differential ΔT12 for the difference matrix ΔX12, for example, is one second. Thus, the following difference matrices are defined using the frame matrices X1, X2, X3, and X4, and each difference matrix is made up of difference matrix elements Δxij:

ΔX12=X1−X2, where ΔT12=1 sec  (5)

ΔX13=X1−X3, where ΔT13=2 sec  (6)

ΔX14=X1−X4, where ΔT14=3 sec  (7)

ΔX23=X2−X3, where ΔT23=1 sec  (8)

ΔX34=X3−X4, where ΔT34=1 sec  (9)

[0045] According to the above relationships, each zero valued element Δxij in a difference matrix indicates an element that is stationary between the original matrix frame and the corresponding final matrix frame in the difference matrix. A stationary element indicated by zero values in the difference matrices is categorized as either part of the stationary background or inside a non-moving object. Stationary or background elements, once identified by the destination transceiver 26 using the difference matrices, are mapped into memory. Any non-zero values in the difference matrices define elements that are in motion or changing. Only the non-zero or non-stationary elements from the difference matrices are evaluated, first for motion and second for stationary changes.

[0046] In one embodiment, non-zero elements in the difference matrices are determined using a threshold greater than but close to zero. For example, elements that are changing more than 5% are determined to be non-zero elements, and elements that are changing less than 5% are determined to be zero elements in the difference matrix. In one embodiment, the percentage change of an element refers to the percentage change in a pixel value, where a pixel is defined as an 8 bit binary number. The threshold may be 1%, 2%, 3%, 4%, 5%, 6%, 7%, 8%, 9%, 10%, 11%, 12%, 13%, 14%, or 15%, for example, and may change depending on the image capture device used and the environment of the subject filmed. In addition, the threshold may be adjusted dynamically.
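
A minimal sketch of the difference-matrix computation of equations (5)-(9) with the near-zero threshold described above. The signed int16 representation is a simplification (the example in Section VI instead keeps an 8-bit magnitude plus a parity flag), and difference_matrix is a hypothetical name.

```python
import numpy as np

def difference_matrix(x_a: np.ndarray, x_b: np.ndarray,
                      threshold_pct: float = 5.0) -> np.ndarray:
    """Return X_a - X_b with elements changing less than the threshold
    zeroed out, i.e., treated as stationary."""
    diff = x_a.astype(np.int16) - x_b.astype(np.int16)  # keep sign, no overflow
    limit = 255 * threshold_pct / 100.0                 # % of an 8-bit value
    diff[np.abs(diff) <= limit] = 0
    return diff

# dX12 = difference_matrix(X1, X2) with dT = 1 s; dX13 with dT = 2 s; etc.
```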

[0047] V. Defining Objects

[0048] As discussed above, an object is identified as either an object in motion or a stationary object. A moving object has a definable border that moves on some foreground plane, wherein moving objects are defined by united (e.g., adjacent) pixel motion using a base equation with the same or different coefficients. A stationary changing object generally refers to an object with a definable border that has little or no motion associated with it, yet it may contain variations in color and intensities across the stream of frames. This description first provides characteristics of an object in motion.

[0049] As noted above, a moving object's video motion is its physical movement inside the stream of frames or matrices, from X1 to X2 to X3 to X4 . . . to XN. A moving object's border may be semi-continuous and is normally different in color than the background or other objects in the sequence of frames. The elements that make up an object may contain variations in color and intensity, and the number of elements that make up an object may increase or decrease as the object gets larger or smaller, moves closer to or farther away from the image capture device, or changes shape. The elements that make up a moving object, including the border, may also move at different rates, due to deformation or rotation of the object, for example.

[0050] An element on a moving object's border is an element that is in motion change with respect to the background. A moving object's border, or outline, comprises single elements of like motion change that have at least two elements of like motion change on adjacent sides, and at least one element of unlike motion change adjacent to it, thus forming a connecting border outlining the moving object. This concept will be discussed in further detail hereinafter below.

[0051] A stationary changing object is an object with substantially little or no motion associated with it, yet it may contain variations in color and intensities across the changing stream of frames or matrices, from X1 to X2 to X3 to X4 . . . to XN. An example of a stationary changing object would be a flashing light, where the object is not moving from frame to frame, but is changing in color and intensity.

[0052] One embodiment of a method of identifying an object is illustrated in the flow chart of FIG. 6. The method begins at block 500. In a step 505, the destination transceiver 26 locates the first non-zero element Δxij in the first difference matrix ΔX12 from Frame 1 to Frame 2 by searching each row of matrix elements left to right, top to bottom.

[0053] To verify that the non-zero element is not just a bad or erroneous element (such as ‘snow’), the destination transceiver 26 determines, in a step 510, whether any of the adjacent or related elements (pixels) Δx(i−1)(j−1), Δx(i−1)j, Δx(i−1)(j+1), Δxi(j−1), Δxi(j+1), Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) in the difference matrix ΔX12 are non-zero. If all of the adjacent elements are zero, then element xij is identified as a bad element and not an element in change, where element xij is actually part of the stationary background or is an inside part of a major object. If one or more of the adjacent elements have a non-zero value, then the original element is an element in change and is either on the border of a moving object, a changing part of the stationary background, or an inside part of a rotating object. The difference matrix ΔX12 is then updated in a step 515 with the true zero value of the element if none of the adjacent elements are found to have a non-zero value. In a step 520, steps 505 through 515 are repeated for all elements in ΔX12, ΔX23, and ΔX34. The method illustrated in FIG. 6 is described in further detail below with reference to FIG. 7.
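
A sketch of the bad-element test of steps 505 through 515, assuming the difference matrix is a numpy array and ignoring the frame border for brevity; remove_snow is a hypothetical name.

```python
import numpy as np

def remove_snow(diff: np.ndarray) -> np.ndarray:
    """Zero out non-zero elements whose eight neighbors are all zero; such
    isolated elements are treated as bad elements ('snow'), not change."""
    cleaned = diff.copy()
    rows, cols = diff.shape
    for i in range(1, rows - 1):          # interior elements only, for brevity
        for j in range(1, cols - 1):
            if diff[i, j] != 0:
                block = diff[i - 1:i + 2, j - 1:j + 2]
                if np.count_nonzero(block) == 1:   # only the center is set
                    cleaned[i, j] = 0              # update with true zero value
    return cleaned
```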

[0054] After step 520, the elements in ΔX12, ΔX23, and ΔX34 that are not equal to zero are defined as elements in change and are part of an object in motion or a stationary changing object. Using the updated difference matrices ΔX12, ΔX23, and ΔX34, an object's border elements are identified and the object is outlined and positioned in the appropriate frame. The motion equations associated with each potential object are then determined, along with the other elements that make up the object, thus defining substantially all elements of the moving objects.

[0055] A. Locating an Object's Border

[0056] FIGS. 7A-B are a flow diagram illustrating a method of identifying elements as border elements of an object, and outlining and positioning the object in a video frame. In reference to FIG. 7A, the method 600 begins in a step 601 and proceeds to a step 605. In step 605, the destination transceiver 26 locates the first non-zero element in a difference matrix ΔX12, ΔX23, or ΔX34, thus recognizing part of a moving or changing object's border or a bad element. For example, the first non-zero element Δxij in the difference matrix ΔX12 is located in step 605 by searching rows, e.g., left to right, starting from the first row of the matrix and working down.

ΔX12 = [ Δx11      Δx12      Δx13      …  Δx1_176
         Δx21      Δx22      Δx23      …  Δx2_176
         Δx31      Δx32      Δx33      …  Δx3_176
         …         …         …         …  …
         Δx144_1   Δx144_2   Δx144_3   …  Δx144_176 ]  (10)

[0057] To verify that the first non-zero element Δxij is not just a bad element, but is an element in motion or change, the destination transceiver 26 analyzes the adjacent or related elements (pixels) Δx(i−1)(j−1), Δx(i−1)j, Δx(i−1)(j+1), Δxi(j−1), Δxi(j+1), Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) in ΔX12 in a step 610 to determine whether they are all equal or very close to zero. For the analysis in step 610, the difference matrix ΔX12 is reduced for the non-zero element Δxij to the following matrix [Δxij].

[Δxij] = [ Δx(i−1)(j−1)  Δx(i−1)j  Δx(i−1)(j+1)
           Δxi(j−1)      Δxij      Δxi(j+1)
           Δx(i+1)(j−1)  Δx(i+1)j  Δx(i+1)(j+1) ]  (11)

[0058] If the destination transceiver 26 determines in step 610 that no, the adjacent elements in the reduced matrix [Δxij] are not also non-zero, then element xij in frame X1 is identified as a bad element in a step 615 and not an element in change, but as truly part of the stationary background or an inside part of an object. The value for Δxij in ΔX12 is also updated accordingly in step 615, and the method continues to search the difference matrix for the next non-zero element by returning to step 605.

[0059] If the destination transceiver 26 determines in step 610 that yes, the adjacent elements are also non-zero, then the element xij in frame X1 is identified as an element in change. This element in change may be located on or close to the border of a moving object, a changing part of the stationary background, or an inside part of a rotating object. Upon locating a first true element in change (Δxij in ΔX12), the element is analyzed in a step 620 to determine whether it is on the border of, or a part of, a moving or rotating object, or a changing part of a stationary object.

[0060] If the element is on the border of a changing, moving and/or rotating object, then there must be at least two adjacent elements of like change, and at least one adjacent element of unlike change, thus forming part of a connecting border outlining the object. If the object is only spinning or stationary and changing, it will be found in the same position in ΔX12, ΔX13, and ΔX14. If the object is moving, then it will move predictively between frames and its position in a subsequent frame can be determined with reasonable accuracy. Thus, the destination transceiver 26 determines in a step 620 whether there are at least two adjacent elements of like motion change and at least one adjacent element of unlike motion change for the non-zero element. For example, reduced difference matrix elements Δxi(j+1) and Δx(i+1)(j−1) may have the same value as the non-zero element Δxij, and reduced difference matrix element Δxi(j−1) may have a different value than the non-zero element Δxij. In one embodiment, if the destination transceiver 26 determines in step 620 that no, there are not at least two adjacent elements of like change and at least one adjacent element of unlike change for the non-zero element Δxij, then the element is identified as inside an object in a step 625 and the method returns to step 605. Alternatively, if the destination transceiver 26 determines in step 620 that yes, there are at least two adjacent elements of like change and at least one adjacent element of unlike change for the non-zero element Δxij, then the element is assumed to be a border element of an object in a step 630.

[0061] In one embodiment, the non-zero element Δxij is not assumed to be on the border of an object in step 630, and the destination transceiver 26 performs an additional analysis of the reduced matrix [Δxij] to determine whether the element is a changing element inside the border of an object. For example, a solid colored moving or rotating object may not have any non-zero difference matrix values (Δx) except on the border of the object. The non-zero element Δxij is identified as a changing element inside the border of a changing object if it has at least six adjacent elements of like change. This determination may be made in step 625.
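
A sketch of the neighbor tests of steps 620 through 630, reading "like change" as equality of difference-matrix values and assuming interior coordinates. The thresholds (two like and one unlike neighbor for a border element, six like neighbors for an element inside a changing object) come from the text above; classify_element is a hypothetical name.

```python
import numpy as np

def classify_element(diff: np.ndarray, i: int, j: int) -> str:
    """Classify a difference-matrix element at interior (i, j);
    frame-border handling is omitted for brevity."""
    center = diff[i, j]
    if center == 0:
        return "stationary"
    neighbors = [diff[i + di, j + dj]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if (di, dj) != (0, 0)]
    like = sum(1 for n in neighbors if n == center)      # like change
    unlike = len(neighbors) - like                        # incl. zero neighbors
    if like >= 6:
        return "inside changing object"   # step 625 determination
    if like >= 2 and unlike >= 1:
        return "border element"           # step 630 assumption
    return "inside object"                # step 625 default
```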

[0062] Still referring to FIG. 7A, following steps 625 and 630 wherein a non-zero element is identified as either inside a changing or rotating object or as a border element of an object, the method 600 proceeds to a step 635 where the destination transceiver determines whether non-zero elements remain in the difference matrix that have not been analyzed. If unchecked non-zero elements remain in the difference matrix, then the method 600 returns to step 605. If no non-zero elements remain in the difference matrix, then the method proceeds to a step 645, illustrated in FIG. 7B.

[0063] In step 645, the destination transceiver 26 determines whether a border element is on the border of the object as it appears in the first frame or on the border of the object as it appears in the second frame of the difference matrix comparison. For example, if the border element Δxij is from the difference matrix ΔX12, the method determines whether the corresponding border element xij is on the border of the object as the object appears in Frame 1, or on the border of the object as the object appears in Frame 2. If the destination transceiver determines in step 645 that yes, the border element xij is on the border of the object in the first frame of the difference matrix comparison (Frame 1 in the current example), then the element is stored as a border element of a changing, moving, and/or rotating object in a step 650. If the destination transceiver determines in step 645 that no, the border element xij is not on the border of the object in the first frame, but on the border of the object in the second frame of the difference matrix comparison (Frame 2 in the current example), then it is discarded in a step 655. Following steps 650 and 655, the method 600 proceeds to a step 660, wherein the destination transceiver 26 determines whether border elements remain in the difference matrix that have not been checked in step 645. If the destination transceiver 26 determines in step 660 that yes, unchecked border elements remain in the difference matrix, then the method 600 returns to step 645. If the destination transceiver 26 determines in step 660 that no, there are no unchecked border elements remaining in the difference matrix, then the method 600 proceeds to a step 665.

[0064] Where the determination in step 645 is no, the border element xij is not on the border of the object in the first frame, then the destination transceiver 26 assumes the object is moving from left to right between the frames in the difference matrix comparison. For example, where an object is moving from left to right between Frame 1 and Frame 2, and where the destination transceiver 26 scans the difference matrix from left to right, the first border element for the object identified in the difference matrix ΔX12 will not be on the border of the object as the object appears in Frame 1, but on the object as it appears in Frame 2. Thus, in one embodiment, where the destination transceiver 26 determines that no, the border element xij is not on the border of the object in the first frame, the destination transceiver 26 repeats steps 645 through 660 by scanning the difference matrix from the bottom up, right to left. Thereby, the destination transceiver 26 can identify the location of the border element of the object in the first frame of the difference matrix comparison.

[0065] B. Outlining an Object

[0066] Once all of the border elements of a single object in the first frame of the difference matrix comparison are located by repeating steps 645 through 660, the object is outlined in step 665. In step 665, the object in the first frame of the difference matrix comparison is outlined by linking together adjacent related elements, identified by like changes in color on the border of the object. In one embodiment, elements xij in each frame matrix XN are expressed in the form of a digital number corresponding to the levels of red, green, and blue for a given pixel. A difference matrix element Δxij therefore provides information as to whether a pixel is changing color between frames by providing a mathematical comparison of a matrix element value in a first frame and a second frame. Thus, difference matrix elements Δxij having similar values indicate like changes in color, and adjacent difference matrix elements indicating like changes in color designate border elements on a single object.

[0067] An exemplary method 665 of outlining an object is illustrated in FIG. 8, where the method begins at a step 700. In a step 705, the destination transceiver 26 mathematically compares the value of a difference matrix element Δxij, identified as a border element, to an adjacent difference matrix element also identified as a border element. In a step 710, the destination transceiver 26 determines whether the comparison yields a difference of 5% or less, for example. If the destination transceiver 26 determines in step 710 that yes, the comparison yields a difference of 5% or less, then the adjacent element is identified as a related element and as part of the outline of a single object in a step 715. Following step 715, the destination transceiver 26 determines whether the outline of the object is complete in a step 720. If the destination transceiver determines in step 720 that yes, the outline of the object is complete, then the method 665 proceeds to an end step 725. If the destination transceiver determines in step 720 that no, the outline of the object is not complete, the method returns to step 705. If the destination transceiver 26 determines in step 710 that no, the comparison does not yield a difference of 5% or less, then the adjacent element is identified as not a related element on the outline of a single object in a step 730. Following step 730, the method returns to step 705. As will be appreciated by one skilled in the art, the difference threshold for identifying related elements may be greater than or less than 5%, such as 1%, 2%, 3%, 4%, 6%, 7%, 8%, 9%, or 10%, and the 5% threshold is used for illustrative purposes only.
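
A sketch of the linking test of method 665, assuming the difference matrix holds 8-bit magnitudes so that "a difference of 5% or less" is measured against the full 255-level range. The depth-first walk and the name outline_object are choices made for this example, since the text leaves the traversal order open; it also assumes the matrix has already been reduced so that non-zero elements are candidate border elements.

```python
def outline_object(diff, start, tolerance=0.05):
    """Link adjacent non-zero elements whose values agree within the
    tolerance (like changes in color), starting from a known border
    element, to form the outline of a single object."""
    rows, cols = len(diff), len(diff[0])
    outline, stack = {start}, [start]
    while stack:
        i, j = stack.pop()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (ni, nj) in outline or not (0 <= ni < rows and 0 <= nj < cols):
                    continue
                a, b = int(diff[i][j]), int(diff[ni][nj])
                if b != 0 and abs(a - b) <= tolerance * 255:   # related element
                    outline.add((ni, nj))
                    stack.append((ni, nj))
    return outline
```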

[0068] C. Identifying and Positioning the Outlined Object in Transmitted Frames

[0069] Referring again to FIG. 7B, following the outlining of the object in step 665, elements on the outlined object are used to identify the position of the object in each of the transmitted video frames X1, X2, X3 and X4. Specifically, following step 665 the method 600 proceeds to a step 670, wherein four elements along the border and on opposite sides of the object are selected and used to identify and position the object in each frame X1, X2, X3 and X4 according to color and position information. In one embodiment, the first element is the first border element of the object located when scanning the difference matrices. In one embodiment, these chosen elements are on opposite sides of the object, wherein lines drawn connecting these elements around the perimeter of the object form a rectangle, and more preferably a square. The chosen elements define the pixel locations of where the object is placed in respective frames when positioning the objects in the output frames.

[0070] A k value is also assigned to the object in step 645 for storage of the object in memory. The k value assigned to an object corresponds to the order in which the object is identified (1 to 100,000, for example), and an object retains the same k value in each frame. Once an object is outlined, identified and positioned in the video frames, steps 605 through 645 are repeated for all remaining elements in the difference matrices ΔX12, ΔX23, and ΔX34 in a step 650.

[0071] VI. Example of Moving Ball

[0072] One embodiment of a method of identifying an object and its motion will now be discussed by way of an example. The example comprises a moving ball 70, where the ball may be in a different position in each frame of a series of frames. FIG. 9A illustrates four frames (Frame 1, Frame 2, Frame 3, and Frame 4) of a series of captured frames, where the ball 70 is in a different position in each of the four frames. Each illustrated frame is the first frame of a series of 30 frames captured in a 30 frame per second video capture sequence. Thus, the time between the frames illustrated in sequence is one second.

[0073] In the present example, the videoconferencing standard Quarter Common Intermediate Format (QCIF) is used, where each frame comprises a 144 by 176 pixel grid, comprising 144 rows (i=1, 2, 3, . . . 144) and 176 columns (j=1, 2, 3, . . . 176) of pixels. The corresponding frame matrix XN for each frame (Frame 1, Frame 2, Frame 3, and Frame 4) is defined as follows in equation (12). Each element xij of the matrix has corresponding location information identified by a subscript, where “i” corresponds to the row in which the element is located in the matrix, and “j” corresponds to the column in which the element is located in the matrix.

XN = [ x11      x12      x13      …  x1_176
       x21      x22      x23      …  x2_176
       x31      x32      x33      …  x3_176
       …        …        …        …  …
       x144_1   x144_2   x144_3   …  x144_176 ]  (12)

[0074] Furthermore, each pixel has corresponding color information expressed as an eight bit binary number, one byte long. The color information comprises red, green, and blue components, where the first two bits of the eight bit number correspond to the blue component, the next three bits correspond to the green component, and the last three bits correspond to the red component, as follows:

bit              1  2  3  4  5  6  7  8
color component  b  b  g  g  g  r  r  r

[0075] The value of each element xij in the frame matrix XN is the eight bit binary number representing the color information for the pixel corresponding to the element xij in the frame matrix XN. In this example, the ball 70 is undergoing two types of motion: translation and spinning.

[0076] As illustrated in FIG. 9A, the ball 70 moves across a light blue background and the ball is a single solid color. The background and ball colors are defined as follows in eight-bit color form for each pixel:

Background Color
bit number       1  2  3  4  5  6  7  8
color component  b  b  g  g  g  r  r  r
bit value        0  1  0  0  1  0  0  1

Ball Color
bit number       1  2  3  4  5  6  7  8
color component  b  b  g  g  g  r  r  r
bit value        1  1  0  1  1  0  1  1
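
The 2-3-3 bit packing above can be made concrete with a short sketch; pack_color and unpack_color are hypothetical helpers, and the two constants reproduce the background and ball values from the tables.

```python
def pack_color(blue: int, green: int, red: int) -> int:
    """Pack blue (0-3), green (0-7), and red (0-7) into one bbgggrrr byte."""
    return (blue << 6) | (green << 3) | red

def unpack_color(pixel: int):
    """Return the (blue, green, red) components of a bbgggrrr byte."""
    return (pixel >> 6) & 0b11, (pixel >> 3) & 0b111, pixel & 0b111

BACKGROUND = 0b01001001   # light blue background, per the table above
BALL = 0b11011011         # ball color, per the table above
assert pack_color(0b01, 0b001, 0b001) == BACKGROUND
assert pack_color(0b11, 0b011, 0b011) == BALL
```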

[0077] FIG. 9B shows frame representations of the following difference matrices: ΔX12, ΔX23, ΔX34, ΔX13, and ΔX14. For the moving ball 70, the difference matrices (ΔX's) are color differences between frames. If any points are substantially the same between frames (have substantially the same color information), they will be zeroed out, but if the background is subtracted from an object, or the object is subtracted from the background, differences in color result and are identified in the difference matrices.

[0078] A. Stationary Elements

[0079] The destination transceiver 26 identifies elements that are not changing between the four frames as stationary elements by calculating the difference matrices ΔX12, ΔX23, and ΔX34, where all elements not on the ball zero out. For example, a frame matrix element xaa, corresponding to pixel a in the background in Frame 1, has a frame matrix X1 value of 01001001 (pixel value for the background color). The same frame matrix element xaa corresponding to pixel a in Frame 2 has the same pixel value of 01001001 in frame matrix X2. Therefore, the difference matrix element Δxaa in the difference matrix ΔX12 is zero, where the frame matrix element xaa has the same pixel value in both frame matrix X1 and frame matrix X2.

[0080] Similar to elements in the background, all elements on the ball in both Frame 1 and Frame 2 also zero out in the difference matrix ΔX12. An element xbb, corresponding to pixel b on the ball in Frame 1, has a frame matrix X1 value of 11011011 (pixel value for the ball color), and the same frame matrix element xbb corresponding to pixel b in Frame 2 has a frame matrix X2 value of 11011011. Therefore, the difference matrix value Δxbb in the difference matrix ΔX12 is zero, where the frame matrix element xbb has the same pixel value in both frame matrix X1 and frame matrix X2.

[0081] B. Changing Elements

[0082] All elements on the ball in Frame 1 and on the background in Frame 2 will have a frame matrix value of 11011011 (ball color) in frame matrix X1 and a frame matrix value of 01001001 (background color) in frame matrix X2. For example, a frame matrix element xcc, corresponding to a pixel c in Frame 1 and Frame 2, will have a frame matrix value of 11011011 in frame matrix X1 and a frame matrix value of 01001001 in frame matrix X2. The difference matrix value Δxcc in the difference matrix ΔX12 (X1−X2) is the difference between the frame matrix value for element xcc in frame matrix X1 and the frame matrix value for element xcc in frame matrix X2. Thus, the difference matrix value Δxcc in the difference matrix ΔX12 is 11011011−01001001=10010010.

[0083] In contrast, all elements on the ball in Frame 2 and on the background in Frame 1 will have a frame matrix value of 01001001 in frame matrix X1, and a frame matrix value of 11011011 in frame matrix X2. For example, a frame matrix element xdd, corresponding to a pixel d in Frame 1 and Frame 2, will have a frame matrix value of 01001001 in frame matrix X1 and a frame matrix value of 11011011 in frame matrix X2. The difference matrix value Δxdd in the difference matrix ΔX12 (X1−X2) is the difference between the frame matrix value for element xdd in frame matrix X1 and the frame matrix value for element xdd in frame matrix X2. Therefore, the difference matrix value Δxdd in difference matrix ΔX12 is 01001001−11011011=11111111111111111111111101101110, which is a negative number expressed in binary form in more than eight bits. Since a difference matrix value in the current example may only be eight bits, and an overflow condition may not be used, the opposite difference matrix value ΔX21 (X2−X1) is calculated to obtain the value 10010010 for difference matrix element Δxdd, and a flag is added in the associated memory for these bits. Specifically, a parity bit is used as a flag, wherein if the difference matrix value for an element is negative, the parity bit is set to one (1), and if the difference matrix value is positive, the parity bit is set to zero.

[0084] In one embodiment, the parity bit information is used to determine the direction of movement of an object. For example, where the destination transceiver 26 scans a difference matrix from the top down and from left to right, if the parity bit is set to one (1) for the border elements at the top of the object's outline, the border of the object is in motion to the left and/or up. Conversely, if the parity bit is not set (zero) for the border elements at the top of the object's outline, the border of the object is in motion to the right and/or down.
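
For illustration, a minimal Python sketch of the eight-bit subtraction with parity flag described in the preceding paragraphs; the function name diff_with_parity and the tuple return format are assumptions introduced for this example.

```python
def diff_with_parity(a: int, b: int):
    """Compute a - b for 8-bit pixel values; when the result would be
    negative, compute b - a instead and set the parity (sign) flag to 1."""
    if a >= b:
        return a - b, 0
    return b - a, 1

# Reproduces the example above: background minus ball color is negative,
# so the magnitude 10010010 is stored with the parity bit set.
magnitude, parity = diff_with_parity(0b01001001, 0b11011011)
assert magnitude == 0b10010010 and parity == 1
```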

[0085] All the matrix value subtractions for the remaining difference matrices ΔX23 and ΔX34 are performed in accordance with the above method. The object is then located according to the method 600 illustrated in FIGS. 7A-B.

[0086] C. Object Identification

[0087] According to step 605, the first difference matrix ΔX12 is scanned for the first non-zero element Δxij by searching the rows of the matrix left to right, starting from the top row of the matrix.

[Δxij] = [ Δx(i−1)(j−1)  Δx(i−1)j  Δx(i−1)(j+1)
           Δxi(j−1)      Δxij      Δxi(j+1)
           Δx(i+1)(j−1)  Δx(i+1)j  Δx(i+1)(j+1) ]

       = [ 0         0         0
           0         10010010  0
           10010010  10010010  10010010 ]  (13)

[0088] To verify that the first non-zero element Δxij is not just a bad element or noise, but is an element in motion or change, the method determines in step 610 whether the adjacent or related elements Δx(i−1)(j−1), Δx(i−1)j, Δx(i−1)(j+1), Δxi(j−1), Δxi(j+1), Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) in ΔX12 are not all equal to zero. Specifically, the method will identify that three adjacent elements Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) are not equal to zero (Δx(i+1)(j−1)=Δx(i+1)j=Δx(i+1)(j+1)=10010010). Since not all of the adjacent elements are zero, element xij is an element in change and may be on the border of a moving object, a changing part of the stationary background, or an inside part of a rotating object. However, if all of the adjacent elements Δx(i−1)(j−1), Δx(i−1)j, Δx(i−1)(j+1), Δxi(j−1), Δxi(j+1), Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) in ΔX12 were substantially equal to zero, then the first non-zero element Δxij would be identified as a bad element in step 615 and the next non-zero element in the difference matrix ΔX12 would be located in step 605.

[0089] If the ball 70 is only rotating or changing in color, it will be found in the same position in ΔX12, ΔX13, and ΔX14. However, as shown in the frame representations of the difference matrices ΔX12, ΔX13, and ΔX14 in FIG. 9B, the ball 70 is in translation and not just rotation or changing in color. In a step 620, the destination transceiver 26 determines whether the first non-zero element Δxij in the difference matrix ΔX12 is on the border of a moving object. In reference to the reduced matrix for the first non-zero element Δxij, at least two adjacent elements have like motion change and at least one adjacent element has unlike motion change. Specifically, three adjacent elements Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) have like motion change (10010010) and five adjacent elements Δx(i−1)(j−1), Δx(i−1)j, Δx(i−1)(j+1), Δxi(j−1), and Δxi(j+1) have unlike motion change. Thus, the destination transceiver 26 will determine in step 620 that yes, the first non-zero element is on the border of a moving object, and element Δxij is identified as a border element of an object in a step 630.

[0090] In the event the destination transceiver determines in step 620 that no, the non-zero element is not on the border of a moving object, then the destination transceiver 26 may determine that the non-zero element Δxij is inside an object. More particularly, if the destination transceiver 26 determines that there are at least six adjacent elements of like motion change, then the non-zero element Δxij is identified as an element inside a stationary changing or rotating object. For example, the non-zero element may be inside an object that is changing color or a flashing light. The non-zero element identified as inside a stationary changing or rotating object is stored in memory according to this identification, and the destination transceiver continues to identify objects and their border elements according to the method 600 illustrated in FIGS. 7A-B.

[0091] Following the identification of a non-zero element as either inside a changing object in step 625 or as a border element in step 630, the destination transceiver 26 determines in step 635 whether elements remain in the difference matrices ΔX12, ΔX13, and ΔX14 that have not been analyzed in steps 605 through 630. If the determination in step 635 is yes, the method 600 returns to step 605 and the destination transceiver 26 locates the next non-zero element in the difference matrix.

[0092] D. Outlining an Object

[0093] The border elements identified in step 650 of the method 600 illustrated in FIG. 7B are used as starting elements to outline an associated object in the frame matrices X1, X2, X3 and X4. An outline of an object is defined by linking together adjacent related elements on the border of the object using the difference matrices ΔX12, ΔX23 and ΔX34 and frame matrices X1, X2, X3 and X4.

[0094] We previously identified non-zero element Δxij=10010010 as a border element of an object, where adjacent elements Δx(i+1)(j−1), Δx(i+1)j, and Δx(i+1)(j+1) had like change (10010010). To verify that non-zero element Δxij is the beginning of an outline of an object, the difference matrix ΔX12 is subsequently analyzed to verify the identified pattern of like change (Δx(i+1)(j−1)=Δx(i+1)j=Δx(i+1)(j+1)=10010010). The adjacent elements having like change and identified in this manner are linked together to establish the outline of an object. Four elements on opposite sides of the outline of an object are chosen, where the first element is the frame element xij corresponding to the first non-zero element Δxij. The other three elements are chosen such that lines drawn connecting the elements form an “X” or a box. The four chosen elements are used to define the pixel location of where the object is to be placed in the reconstructed intermediate frames.

[0095] After outlining each object using difference matrices ΔX12, ΔX23, and ΔX34 and identifying four elements on the outlines of the objects, the four elements are used to position their corresponding object in the frame matrices X1, X2, X3 and X4. In one embodiment, the adjacent elements surrounding one of the four elements form a signature pattern in conjunction with the chosen element. This pattern may also be used to locate and position an object in a frame. Once an outlined object is positioned in the frame matrices, the object information is saved in memory by assigning a k value according to the order in which the object was found. Since this example comprises only one object, its k value is 1.

[0096] VII. Motion Equations for Object in Motion

[0097] As briefly described in Section II, after identifying and storing the outline of an object in memory, the object's motion equations are determined to reconstruct the intermediate frames (frames between each of the transmitted frames Frame 1, Frame 2, Frame 3, and Frame 4). The motion equations thereby enable reconstruction so as to supply a 30 frame per second output video to the display 34. In order to determine an object's motion equations, the type of motion an object is experiencing is first determined.

[0098] A. Determining Type of Motion

[0099] In order to determine an object's motion equations, the system first determines the type of motion the object is undergoing. An object's motion may be categorized into one or a combination of three types of motion: translation, rotation, and spinning. An element or pixel xij in frame matrix XN is defined in space by a three dimensional vector xij=p(x y k), where x=i, y=j, and k is the element's relative spatial plane. If an object is in translation, it is moving through space from one location a(x y k) to another location b(x y k). If an object is in rotation, it is rotating about a location c(x y k) external to the object. Finally, if an object is spinning, it is rotating about a location c(x y k) found within the object. Typically, a rotating or spinning object has at least one fixed element in the frame or matrix about which the object is rotating, whereas translation has none. Pure translation moves all elements that make up an object an equal distance as the object moves from frame to frame.

[0100] By saving and examining the object outlined with a given k plane value and derived from frame matrices X1, X2, X3 and X4, the object may be evaluated to determine its type of motion. To determine whether an object is in translation, rotation, or both, at least two points on the outline of the object, defined in ΔX12, ΔX23, and ΔX34, are followed through the frames X1, X2, X3 and X4. In one embodiment, the two elements are on opposite sides of the object.

[0101] Using the two elements, p1(x y k) and p2(x y k), on opposite sides of the chosen object, the object's motion through space may be determined. In the first frame X1, the two elements have locations p1(x y k) and p2(x y k), and in the second frame X2, the two elements of the same object are at the locations p′1(x′ y′ k) and p′2(x′ y′ k). The length of a line drawn between p1(x y k) and p2(x y k) is the same as the length of a line drawn between p′1(x′ y′ k) and p′2(x′ y′ k).

[0102] If the distance between p1 and p′1, calculated as √((Δx)²+(Δy)²) counting pixels, is equal to the distance between p2 and p′2, then the object is in translation. If the distance between p1 and p′1 is not equal to the distance between p2 and p′2, then the object is in rotation, possibly with some translation.
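The distance comparison may be expressed compactly. The following sketch assumes a small tolerance `tol` (not specified in the text) to absorb pixel-quantization error; the specification compares the distances exactly.

```python
import math

# Sketch of the translation test: compare how far each of the two
# tracked points moved between consecutive frames.

def motion_type(p1, p1_next, p2, p2_next, tol=0.5):
    d1 = math.dist(p1, p1_next)   # distance point 1 moved, in pixels
    d2 = math.dist(p2, p2_next)   # distance point 2 moved, in pixels
    if abs(d1 - d2) <= tol:
        return "translation"
    return "rotation, possibly with translation"
```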

[0103] To determine whether the object is in both translation and rotation, the same process used to determine whether the object is in rotation is performed for the object in X2 and X3 to find the center c′(xc yc k) of the object's rotation in a subsequent frame. If c(xc yc k) is the same element as c′(xc yc k) then the object is in rotation only. If c(xc yc k) is not the same element as c′(xc yc k) then the object is in rotation and translation, and the translation is described by the line or vector from c(xc yc k) to c′(xc yc k).

[0104] As discussed above, the four points or elements on each object's outline are used to position the object in the reconstructed frames. The values for the rotation vector (r) and the translation vector (t) are used to move the object through the reconstructed frames at the destination transceiver.

[0105] To determine whether an object is spinning, at least two points, p1(x y k) and p2(x y k), are used to represent a color pattern on the object, and the points are followed through the frames X1, X2, X3, and X4. In one embodiment, these two points are on opposite sides of the pattern. To determine whether an object is spinning, the object saved from the first frame X1 is scanned for color patterns that may be followed to analyze the object's movements. An object may move as much as 17 pixels in any direction between frames, using the QCIF (176×144) frame size. The object is scanned for dramatic changes in color, such as places where the red, green, or blue values change more than 15% across the object, and for the size of the area of change with respect to the size of the object (area/object size). If this ratio, calculated using pixels, is greater than ⅓, the object is scanned further to find a smaller area of change (⅕ or smaller, for example).
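One possible reading of this scan is sketched below; the per-channel deviation-from-mean test and the function name are assumptions, while the 15% and ⅓ thresholds come from the text.

```python
# Illustrative pattern scan. pixels is a list of (r, g, b) tuples
# belonging to a single object, with components in the range 0-255.

def find_pattern_area(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    changed = [p for p in pixels
               if any(abs(p[c] - means[c]) > 0.15 * 255 for c in range(3))]
    ratio = len(changed) / n      # area of change / object size
    return changed, ratio         # if ratio > 1/3, look for a smaller area
```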

[0106] Once a desirable area of change is located, the color pattern area, and its orientation on the object's k plane, are located on the other frames X2, X3 and X4. Four elements on the pattern area are then used to define the object's motion. Using two of the four elements (p1(x y k) and p2(x y k)) on opposite sides of the chosen pattern, the pattern's motion in the object's outline may be defined. The two points on the object's pattern in frame X1 are identified as p1(x y k) and p2(x y k), and the same two points on the object's pattern in frame X2 are identified as p′1(x′ y′ k) and p′2(x′ y′ k). Similar to the translation determination of an object, the length of a line drawn between p1(x y k) and p2(x y k) is by definition the same as the length of a line drawn between p′1(x′ y′ k) and p′2(x′ y′ k).

[0107] If there is no rotation between frames X1 and X2, the distance between p1 and p′1, calculated as √((Δx)²+(Δy)²) counting pixels, is equal to the distance between p2 and p′2, and the pattern is in the same position on the object in all frames X1, X2, X3, and X4. Thus, the object is not spinning. If rotation of the pattern is apparent between frames X1 and X2, the distance between p1 and p′1 is not equal to the distance between p2 and p′2, and the object is spinning.

[0108] B. Determination of One or More Motion Equations

[0109] An element or pixel in matrix or frame XN identified as being on an object may be defined in space as xij=p(x y k), a three dimensional vector where x=i, y=j, and k is the object identifier. According to the element definition, a motion equation p′ for an element or pixel may be defined as a function of a rotation vector (r), a beginning element vector (p), and a translation vector (t) as follows:

p′=r×p+t  (14)

[0110] Where:

$$p' = \text{end element vector} = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} r_x \times x + \Delta x \\ r_y \times y + \Delta y \end{bmatrix} = p'(x'\ y'\ k) \tag{15}$$

$$r = \text{rotation vector} = \begin{bmatrix} r_x \\ r_y \end{bmatrix} \tag{16}$$

$$p = \text{beginning element vector} = \begin{bmatrix} x \\ y \end{bmatrix} = p(x\ y\ k) \tag{17}$$

$$t = \text{translation vector} = \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} \tag{18}$$

[0111] The rotation vector may be further defined as follows, where the vector operations are element-wise:

$$r = \begin{bmatrix} r_x \\ r_y \end{bmatrix} = \begin{bmatrix} (x' - \Delta x)/x \\ (y' - \Delta y)/y \end{bmatrix} = \frac{p' - t}{p} \tag{19}$$

[0112] To find the center of rotation c(xc yc k), a straight line is drawn in the object's plane between p1 and p′1, wherein the length of the line is determined as √((Δx)²+(Δy)²), and the slope of the line is Δy/Δx=tan θ. A line is also drawn between p2 and p′2 in the same manner. Perpendicular bisecting lines are then drawn from the middle (length/2) of these lines, wherein the junction of the two perpendicular bisecting lines is the three dimensional center c(xc yc k) of the object's rotation. A line drawn from this rotation center c(xc yc k) to p1 is equal in length to a line drawn from c(xc yc k) to p′1, and a line drawn from c(xc yc k) to p2 is equal in length to a line drawn from c(xc yc k) to p′2.
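The perpendicular-bisector construction reduces to a 2×2 linear system, since the center is equidistant from p and p′ for both tracked points. A sketch follows (the function name and the closed-form solution are assumptions; the text describes the construction geometrically):

```python
# Each equidistance condition |c - p|^2 = |c - p'|^2 expands to the
# linear equation 2*(p' - p) . c = |p'|^2 - |p|^2; two tracked points
# give two equations, solved here by Cramer's rule.

def rotation_center(p1, p1_next, p2, p2_next):
    (x1, y1), (u1, v1) = p1, p1_next
    (x2, y2), (u2, v2) = p2, p2_next
    a11, a12 = 2 * (u1 - x1), 2 * (v1 - y1)
    a21, a22 = 2 * (u2 - x2), 2 * (v2 - y2)
    d1 = u1**2 + v1**2 - x1**2 - y1**2
    d2 = u2**2 + v2**2 - x2**2 - y2**2
    det = a11 * a22 - a12 * a21
    if det == 0:                 # parallel displacements: no finite center
        return None
    xc = (d1 * a22 - a12 * d2) / det
    yc = (a11 * d2 - a21 * d1) / det
    return (xc, yc)
```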

[0113] The rate of spin for the object may be determined using the rotation vector. The rate of spin provides information to place the object in the proper position in each intermediate frame of the video output. The rate of spin (R) for the object is defined as the magnitude of the rotation vector divided by the change in time:

$$R = \frac{|r|}{\Delta T} = \frac{\sqrt{(r_x)^2 + (r_y)^2}}{\Delta T} \tag{20}$$

[0114] If the rate of spin for an object is a constant, then the magnitudes of the rotation vectors |r12|, |r23|, and |r34| calculated using the difference matrices ΔX12, ΔX23, and ΔX34 will provide approximately the same values. In the present embodiment, the change in time between frames ΔT12=ΔT23=ΔT34 is one second, and therefore |r|/ΔT=|r|/sec, and the distance to rotate, defined by x′=rx×x and y′=ry×y, may be divided evenly by 30 to obtain each outputted video frame position.

[0115] If the rate of spin (|r|/ΔT) for an object is not a constant, then the magnitudes of the rotation vectors |r12|, |r23|, and |r34| calculated using the difference matrices ΔX12, ΔX23, and ΔX34 will not provide the same values (|r12|≠|r23|≠|r34|), and the object is either speeding up or slowing down in its spinning. In the present embodiment, the change in time between frames of the difference matrices ΔT12=ΔT23=ΔT34 is one second, and therefore |r12|/ΔT12=|r12|/sec, |r23|/ΔT23=|r23|/sec, and |r34|/ΔT34=|r34|/sec. The distance to rotate, as defined by x′=rx×x and y′=ry×y, cannot be divided evenly by 30 to obtain each frame position. To determine the movement of an object that does not have a constant rate of spin, the system determines the acceleration or deceleration of the spinning object. The determination of the rotation vector for the accelerating spinning object is discussed in further detail below with respect to the rotation element of the acceleration vector for a moving object.

[0116] The motion equations for an object include an acceleration or deceleration component to account for the speeding up or slowing down of the object's motion. If an object's acceleration is linear, then the object is speeding up or slowing down at a definable rate as it goes through the frames. An object's linear acceleration may be calculated using the object's image that has been stored from X1, X2, X3 and X4.

[0117] An acceleration vector a has both a translation and a rotation component. To determine the translation component, the linear distance |t| an element moves from p to p′, as discussed above, is calculated as √((Δx)²+(Δy)²). If the object is accelerating or decelerating linearly, then the following relationship is true:

|t12−t23|=|t23−t34|≠0  (21)

[0118] If the object is accelerating non linearly, then the following relationship is true:

|t12−t23|≠|t23−t34|≠0  (22)

[0119] In the present example, non-linear acceleration or deceleration is not considered. The translation component of the linear acceleration vector at14 is defined as follows for linear acceleration, where the acceleration is taken over a two second time period:

at14=(t12−t23)+(t23−t34)=at13+at24  (23)

[0120] If the object is accelerating or decelerating linearly then at13=at24 and a new vector tc is defined as follows and is a constant for all frames:

tc=t12−t23=t23−t34  (24)

[0121] Thus, the translation vector t may be redefined as follows using the acceleration vector:

$$t = a_{t13} \times t_c = \begin{bmatrix} a_{tx} \\ a_{ty} \end{bmatrix} \times \begin{bmatrix} t_{cx} \\ t_{cy} \end{bmatrix} = \begin{bmatrix} a_{tx} \times t_{cx} \\ a_{ty} \times t_{cy} \end{bmatrix} \tag{25}$$

[0122] In the present example, one frame is sent every second, and the acceleration calculations above using the distance components calculate acceleration over a period of two seconds; the rate of acceleration per frame is therefore as follows:

$$\frac{|a_{13}|}{60\ \mathrm{sec}} = \frac{|a_{24}|}{60\ \mathrm{sec}} \tag{26}$$

[0123] The (x, y) components for the acceleration are determined as follows:

$$a_{t13} = t_{12} - t_{23} = \begin{bmatrix} a_{tx} \\ a_{ty} \end{bmatrix} = \begin{bmatrix} \Delta x_{12} - \Delta x_{23} \\ \Delta y_{12} - \Delta y_{23} \end{bmatrix} \tag{27}$$

[0124] The acceleration multiplier at for the object in each new frame is thus:

$$a_t = \begin{bmatrix} |a_{tx}|/60 \\ |a_{ty}|/60 \end{bmatrix} \tag{28}$$

[0125] Furthermore, the translation vector t for each newly created frame is determined as follows:

$$t = \begin{bmatrix} (a_{tx} \times t_{cx})/60 \\ (a_{ty} \times t_{cy})/60 \end{bmatrix} \tag{29}$$

[0126] The next determination is for the rotation component of the acceleration vector. If the object is accelerating or decelerating linearly then the following relationship is true:

|r12−r23|=|r23−r34|≠0  (30)

[0127] If the object is accelerating non linearly, then the following relationship is true:

|r12−r23|≠|r23−r34|≠0  (31)

[0128] In the present example, non-linear acceleration or deceleration is not considered. The rotational component of the linear acceleration vector ar14 is defined as follows for linear acceleration, where the acceleration is taken over a two second time period:

ar14=(r12−r23)+(r23−r34)=ar13+ar24  (32)

[0129] If the object is accelerating or decelerating linearly then ar13=ar24 and a new vector rc is defined as follows and is a constant for all frames:

rc=r12−r23=r23−r34  (33)

[0130] Thus, the rotation vector r may be redefined as follows using the acceleration vector:

$$r = a_{r13} \times r_c = \begin{bmatrix} a_{rx} \\ a_{ry} \end{bmatrix} \times \begin{bmatrix} r_{cx} \\ r_{cy} \end{bmatrix} = \begin{bmatrix} a_{rx} \times r_{cx} \\ a_{ry} \times r_{cy} \end{bmatrix} \tag{34}$$

[0131] In the present example, one frame is sent every second, and the acceleration calculations above using the distance components calculate acceleration over a period of two seconds; the rate of acceleration per frame is therefore as follows:

$$\frac{|a_{13}|}{60\ \mathrm{sec}} = \frac{|a_{24}|}{60\ \mathrm{sec}} \tag{35}$$

[0132] The (x, y) components for the acceleration are determined as follows:

$$a_{r13} = r_{12} - r_{23} = \begin{bmatrix} a_{rx} \\ a_{ry} \end{bmatrix} = \begin{bmatrix} \dfrac{x_2 - \Delta x_{12}}{x_1} - \dfrac{x_3 - \Delta x_{23}}{x_2} \\[4pt] \dfrac{y_2 - \Delta y_{12}}{y_1} - \dfrac{y_3 - \Delta y_{23}}{y_2} \end{bmatrix} \tag{36}$$

[0133] The acceleration multiplier ar for the object in each new frame is thus:

$$a_r = \begin{bmatrix} |a_{rx}|/60 \\ |a_{ry}|/60 \end{bmatrix} \tag{37}$$

[0134] Furthermore, the rotation vector r for each newly created frame is determined as follows:

$$r = \begin{bmatrix} (a_{rx} \times r_{cx})/60 \\ (a_{ry} \times r_{cy})/60 \end{bmatrix} \tag{38}$$

[0135] Thus, having determined the translation and rotation vectors incorporating acceleration of the object, the end element vector p′ for each newly created frame is defined as follows:

$$p'(x'\ y'\ k) = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \dfrac{a_{rx} \times r_{cx} \times x}{60} + \dfrac{a_{tx} \times t_{cx}}{60} \\[4pt] \dfrac{a_{ry} \times r_{cy} \times y}{60} + \dfrac{a_{ty} \times t_{cy}}{60} \end{bmatrix} \tag{39}$$

[0136] Thus, if an object is moving through frames X1, X2, X3 and X4, four points on the outline of the object, located by comparing the difference frames ΔX12, ΔX23, and ΔX34 with the object in X1, X2, X3 and X4, are used to move the image of the object across the newly created background frames for video output. This movement is defined by the motion equation and the four points defined on the outline of the object.

[0137] For an object undergoing only linear movement, the position of a point pn′(xn′ yn′ k) on an object in a reconstructed intermediate frame number n is determined according to the following equation, where x and y are the starting coordinates of the point on the object in a first transmitted frame, and n is an integer from 1 to 29 corresponding to the intermediate frame being created (e.g., n=1 for the 1st intermediate frame):

$$p_n'(x_n'\ y_n'\ k) = \begin{bmatrix} x_n' \\ y_n' \end{bmatrix} = \begin{bmatrix} \dfrac{a_{rx} \times r_{cx} \times x}{60} + \dfrac{a_{tx} \times t_{cx}}{60} \times n \\[4pt] \dfrac{a_{ry} \times r_{cy} \times y}{60} + \dfrac{a_{ty} \times t_{cy}}{60} \times n \end{bmatrix} \tag{40}$$

[0138] If an object is spinning inside its defined outline in a frame X, four points on a pattern located on the object are used to rotate the image of the object to be placed on the newly created background frames for video output, wherein the spinning or rotation of the object is defined by the rotation vector r. The denominator value “60” reflects the number of frames to be created between Frame 1 and Frame 3. Thus, the denominator value “60” may be changed depending on the number of frames to be created between transmitted frames, e.g., Frame 1 and Frame 3.
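Equation (40) translates directly into code. In the minimal sketch below, the argument names mirror the symbols above, the function name is an assumption, and the divisor of 60 is kept as a constant so it can be changed with the number of frames reconstructed between transmitted frames, as just noted.

```python
FRAMES = 60  # denominator from equation (40): frames created between Frame 1 and Frame 3

def intermediate_position(x, y, a_r, r_c, a_t, t_c, n):
    """Position of a point in intermediate frame n (n = 1..29), per eq. (40)."""
    xn = (a_r[0] * r_c[0] * x / FRAMES) + (a_t[0] * t_c[0] / FRAMES) * n
    yn = (a_r[1] * r_c[1] * y / FRAMES) + (a_t[1] * t_c[1] / FRAMES) * n
    return xn, yn
```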

[0139] VIII. Example of Motion Equation Determination for Moving Ball

[0140] FIG. 10A is a more detailed illustration of the four video frames of FIG. 9A, and FIG. 10B is a more detailed illustration of the frame representations of FIG. 9B, wherein FIGS. 10A-B illustrate identifying points x and x′ for use in determining the motion equation for the ball 70.

[0141] The motion equation p′ for point x on the ball 70 is defined as a function of a rotation vector (r), a beginning element vector (p), and a translation vector (t) as follows:

p′=r×p+t  (14)

[0142] In the frame matrix for Frame 1 as illustrated in FIG. 10A, point x is located at element x52,64, and in the frame matrix for Frame 2, point x is located at element x46,52. As previously discussed in Section VI, there is only one object in this example and therefore the k value for the object is 1. Using the element information for point x in the frame matrices for Frame 1 and Frame 2, the motion equation and corresponding vectors are determined as follows:

$$p = \text{beginning element vector} = \begin{bmatrix} 52 \\ 64 \end{bmatrix} = p(52, 64, 1) \tag{17}$$

$$p' = \text{end element vector} = \begin{bmatrix} 46 \\ 52 \end{bmatrix} = \begin{bmatrix} 46 = r_x \times 52 + (52 - 46) \\ 52 = r_y \times 64 + (64 - 52) \end{bmatrix} = p'(46, 52, 1) \tag{15}$$

$$t = \text{translation vector} = \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} 52 - 46 \\ 64 - 52 \end{bmatrix} = \begin{bmatrix} 6 \\ 12 \end{bmatrix} \tag{18}$$

$$r = \text{rotation vector} = \begin{bmatrix} r_x \\ r_y \end{bmatrix} = \begin{bmatrix} (x' - \Delta x)/x \\ (y' - \Delta y)/y \end{bmatrix} = \begin{bmatrix} (46 - 6)/52 \\ (52 - 12)/64 \end{bmatrix} = \begin{bmatrix} 40/52 \\ 40/64 \end{bmatrix} \tag{16}$$

[0143] The motion equations for an object include an acceleration or deceleration component to account for the speeding up or slowing down of the object's motion. If an object's acceleration is linear, then the object is speeding up or slowing down at a definable rate as it goes through the frames. An object's linear acceleration may be calculated using the object's image that has been stored for frame matrices X1, X2, X3 and X4.

[0144] For point x on the ball 70, the linear distance |t| the element moves between p and p′, as discussed above, is calculated as √((Δx)²+(Δy)²). Thus, the following linear distances can be calculated, where point x is located at element x84,38 in the frame matrix for Frame 3, and at element x98,64 in the frame matrix for Frame 4:

$$|t_{12}| = \sqrt{(52-46)^2 + (64-52)^2} = \sqrt{6^2 + 12^2} \approx 13.4$$

$$|t_{23}| = \sqrt{(46-84)^2 + (52-38)^2} = \sqrt{(-38)^2 + 14^2} \approx 40.5$$

$$|t_{34}| = \sqrt{(84-98)^2 + (38-64)^2} = \sqrt{(-14)^2 + (-26)^2} \approx 29.5$$
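These distances can be checked with a few lines of Python; `math.hypot` computes √(Δx²+Δy²) directly.

```python
import math

# Verifying the three translation distances for point x on the ball:
t12 = math.hypot(52 - 46, 64 - 52)   # 13.416..., i.e. about 13.4
t23 = math.hypot(46 - 84, 52 - 38)   # 40.497..., i.e. about 40.5
t34 = math.hypot(84 - 98, 38 - 64)   # 29.530..., i.e. about 29.5
```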

[0145] The translation component of the linear acceleration vector at14 is defined as follows for linear acceleration, where the acceleration is taken over a two second time period:

at14=(t12−t23)+(t23−t34)=at13+at24  (23)

[0146] Where

$$t_{12} = \begin{bmatrix} 6 \\ 12 \end{bmatrix}, \quad t_{23} = \begin{bmatrix} -38 \\ 14 \end{bmatrix}, \quad t_{34} = \begin{bmatrix} -14 \\ -26 \end{bmatrix}$$

[0147] and the acceleration vectors at13 and at24 are determined as follows:

$$a_{t13} = t_{12} - t_{23} = \begin{bmatrix} 44 \\ -2 \end{bmatrix} \quad \text{and} \quad a_{t24} = t_{23} - t_{34} = \begin{bmatrix} -24 \\ 40 \end{bmatrix} \tag{27}$$

[0148] In this example, the destination transceiver 26 is only considering the linear movement of the ball 70. Therefore, the position pn′(xn′ yn′ k) of the point x on the ball as it appears in a reconstructed intermediate frame is determined according to the following equation, where, to find the position of the point x in the fifteenth frame, for example, the variable n is replaced with 15:

$$p_n'(x_n'\ y_n'\ k) = \begin{bmatrix} x_n' \\ y_n' \end{bmatrix} = \begin{bmatrix} \dfrac{a_{rx} \times r_{cx} \times x}{60} + \dfrac{a_{tx} \times t_{cx}}{60} \times n \\[4pt] \dfrac{a_{ry} \times r_{cy} \times y}{60} + \dfrac{a_{ty} \times t_{cy}}{60} \times n \end{bmatrix} \tag{40}$$

[0149] IX. Stationary Changing Objects

[0150] If an object has no motion associated with it (that is, the object is in the same position in the difference matrices ΔX12, ΔX13, and ΔX14), the object is identified as a stationary changing object. The rate of change of the object may be defined as ΔX/ΔT. The object's border elements (outline) are used to find the physical object in the frames X1, X2, X3 and X4. The object's rate of change may be determined as follows using the difference matrices and associated changes in time:

$$\frac{\Delta X}{\Delta T} = \frac{1}{5}\left(\frac{\Delta X_{12}}{\Delta T_{12}} + \frac{\Delta X_{23}}{\Delta T_{23}} + \frac{\Delta X_{34}}{\Delta T_{34}} + \frac{\Delta X_{13}}{\Delta T_{13}} + \frac{\Delta X_{14}}{\Delta T_{14}}\right) \tag{41}$$
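A sketch of this average using NumPy follows. The one-second spacing given earlier implies ΔT13 = 2 s and ΔT14 = 3 s; those interval values, and the function name, are inferences rather than statements from the text.

```python
import numpy as np

def rate_of_change(dX12, dX23, dX34, dX13, dX14,
                   dT12=1.0, dT23=1.0, dT34=1.0, dT13=2.0, dT14=3.0):
    """Element-wise average rate of change for a stationary changing object."""
    return (dX12 / dT12 + dX23 / dT23 + dX34 / dT34
            + dX13 / dT13 + dX14 / dT14) / 5.0

# Example with QCIF-sized difference matrices (176x144 pixels):
deltas = [np.zeros((144, 176)) for _ in range(5)]
rate = rate_of_change(*deltas)
```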

[0151] A stationary changing object's border, or outline, is made up of non-moving elements of like rate of change (ΔX/ΔT), wherein an element on the border of a stationary object has at least two adjacent elements of unlike rate of change. By identifying the elements having adjacent elements of unlike rate of change, the destination transceiver 26 forms a connecting border outlining the object using the identified border elements.

[0152] In one embodiment, as discussed above, an eight-bit pixel (xij) is used in a QCIF frame having a size of 176×144 pixels. Each pixel or element of an image or frame represents a color, and has a canonical minimum value and a canonical maximum value. In the defined space of each frame, anything that is not dynamically changing is ignored by the system. The system evaluates and defines colorization changes within the defined objects, keeping the information for each object separate and linked to its associated frames.

[0153] The variables defined for use in the system are pixel color, pixel frame positioning, and frame differences over time. The goal of the system is to understand what the image of an object in frames X1, X2, X3 and X4 is doing in terms of color as it goes through these frames.

[0154] As will be appreciated by one skilled in the art, chroma and luma may be used instead of the red, green, and blue (RGB) values to characterize the pixel values. In one embodiment, YCrCb or YUV represents each color by a luma component “Y” and two components of chroma Cr (or V) and Cb (or U). The luma component is related to “brightness” or “luminance,” and the chroma components make up a quantity related to “hue.” These components are defined rigorously in ITU-R BT.601-4 (also known as Rec. 601 and formerly CCIR 601). When referring to the chroma components, it may be advantageous to use Cr and Cb rather than V and U because the analog NTSC video specification ANSI/SMPTE 170M uses V and U with a slightly different meaning.

[0155] In one example, the RGB components of a pixel have values in the range of 0 to 255, and the conversion equations for the chroma and luma components YCrCb or YUV are as follows:

Y = 0.257r + 0.504g + 0.098b + 16

Cr or V = 0.439r − 0.368g − 0.071b + 128

Cb or U = −0.148r − 0.291g + 0.439b + 128
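The conversion is a direct transcription of the equations above; a minimal sketch (the function name is an assumption):

```python
def rgb_to_ycrcb(r, g, b):
    """Convert RGB components in the range 0-255 to (Y, Cr, Cb)."""
    y  =  0.257 * r + 0.504 * g + 0.098 * b + 16
    cr =  0.439 * r - 0.368 * g - 0.071 * b + 128
    cb = -0.148 * r - 0.291 * g + 0.439 * b + 128
    return y, cr, cb

# For example, pure white (255, 255, 255) maps to roughly (235, 128, 128).
```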

[0156] As an object moves through a series of video frames it may go through a pattern of color and/or shade changes. To best understand the behavior of an object, all 30 frames per second would need to be captured and evaluated. However, the system only transmits one out of every thirty frames, and the destination transceiver 26 may query the source transceiver 24 regarding information or behavior of an object that is not clear from the received frames.

[0157] X. Frame Reconstruction

[0158] FIGS. 12A-D illustrate one embodiment of a method of reconstructing a video stream at the destination transceiver 26 using transmitted frames Frame 1, Frame 2, Frame 3, and Frame 4 as illustrated in FIG. 3, and the background and object information determined above. First, all stationary background information received from the source transceiver 24 is shifted evenly across a frame buffer between transmitted frames Frame 1, Frame 2, Frame 3, and Frame 4, thus creating the background for the intermediate frames from Frame 1 through Frame 4. Following creation of the background, development of the full intermediate frames is initiated, wherein object information determined at the destination transceiver 26 is used to position objects onto the background frames in a working buffer at the destination transceiver 26. The frame buffer is an area in memory where the image frames are reconstructed and is the buffer that will be shifted out (FIFO) to the video display 34 for display. The frame buffer is part of the working buffer, and in one embodiment, the working buffer is the memory where calculations, objects, object outlines, location information, and the like are stored and used.

[0159] As previously discussed, a four-frame comparison method may be used to define and locate objects in intermediate frames. As illustrated in FIG. 12A, information from Frame 1, Frame 2, Frame 3, and Frame 4 is used to identify and position objects in the intermediate frame between Frame 1 and Frame 2, which will be referred to hereinafter as Intermediate Frame 1. Similarly, as illustrated in FIG. 12B, information from Frame 1, Intermediate Frame 1, Frame 2, and Frame 3 is used to identify and position objects in the intermediate frame between Frame 1 and Intermediate Frame 1, which will be referred to hereinafter as Intermediate Frame 2. Next, as illustrated in FIG. 12C, information from Frame 1, Intermediate Frame 1, Frame 2, and Frame 3 is used to identify and position objects in the intermediate frame between Intermediate Frame 1 and Frame 2, which will be referred to hereinafter as Intermediate Frame 3.

[0160] More particularly, the motion equations determined according to the description in Section VII are used in conjunction with time information to determine the location of the chosen four points on an object as they are located in an intermediate frame. For example, if an object is determined to be moving linearly between Frame 1 and Frame 2, and the time between Frame 1 and Frame 2 is 30 seconds, then the position of the object in Intermediate Frame 1 is determined by dividing the distance traveled between Frame 1 and Frame 2 in half, or by using the knowledge that for Intermediate Frame 1, fifteen (15) seconds have passed since the object was located as identified in Frame 1.
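The time-proportional placement can be sketched as a simple linear interpolation. The coordinates below reuse the ball example purely for illustration; the function name is an assumption.

```python
# For a linearly moving object, the position in an intermediate frame
# is the starting position plus the elapsed fraction of the travel.

def interpolate(p_start, p_end, t_elapsed, t_total):
    fx = p_start[0] + (p_end[0] - p_start[0]) * t_elapsed / t_total
    fy = p_start[1] + (p_end[1] - p_start[1]) * t_elapsed / t_total
    return fx, fy

# Example from the text: 15 of 30 seconds elapsed -> halfway point.
midpoint = interpolate((52, 64), (46, 52), 15, 30)   # (49.0, 58.0)
```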

[0161] The system continues to identify and position objects on the intermediate frames between Frame 1, Intermediate Frame 1, Intermediate Frame 2, Intermediate Frame 3, and Frame 2 until all 29 frames between Frame 1 and Frame 2 are completed as illustrated in FIG. 12D. Once the background and objects have been positioned on a sequence of 29 frames following one of the transmitted frames, the frame sequence, including transmitted frames Frame 1 and Frame 2, is transmitted to the frame buffer for display on the display 34.

[0162] Following completion of the sequence of intermediate frames between Frame 1 and Frame 2, a new transmitted frame is used as Frame 4 (frame matrix X4) to reconstruct the intermediate frames between old Frame 2 and Frame 3 in the same manner the intermediate frames between old Frame 1 and Frame 2 were constructed. Thus, old Frame 2 is now Frame 1, old Frame 3 is now used as Frame 2, and old Frame 4 is now used as Frame 3 to reconstruct the next sequence of frames for display on the display 34. The existing background frames created are verified and validated, wherein if something has changed dramatically, the destination transceiver 26 queries the source transceiver 24 to verify the change. If the change is not validated by the source transceiver 24, then the destination transceiver 26 either requests a new frame or, if there is a transmission problem, assumes the new frame is bad and uses the background information already determined. Following verification and validation of the background of the intermediate frames, the objects are positioned on the intermediate frames between new Frame 1 and new Frame 2 in the working buffer. Once all of the intermediate frames have been reconstructed in the working buffer, new Frame 1, the reconstructed intermediate frames, and new Frame 2 are transmitted to the display 34. This process of reconstruction of intermediate frames and subsequent transmission to the display 34 is repeated for the remainder of the sequence of transmitted frames received at the destination transceiver 26, such that the frames displayed at the display 34 form the continuous video stream 110 illustrated in FIG. 4 and consist of 30 frames per second.

[0163] An exemplary source transceiver circuit 1200 is illustrated in the block diagram of FIG. 12. The source transceiver circuit 1200 comprises a video buffer 1202 configured to receive a video stream from the video source 22. The video buffer is coupled to programmable math and distribution logic circuitry 1204, which is coupled to a working memory 1206. The programmable math and distribution logic circuitry 1204 is coupled to a dynamic output buffer 1208, which outputs one frame per second to a data multiplexer and compression circuit 1210. The data multiplexer and compression circuit 1210 is coupled to transceiver circuitry, which can include both transmit circuitry and receive circuitry. The data multiplexer and compression circuitry 1210 is also coupled to the programmable math and distribution logic circuitry 1204 such that the source transceiver 24 can retrieve frame information in response to a request from the destination transceiver 26.

[0164] Similar to the source transceiver circuit 1200, an exemplary destination transceiver circuit 1300 is illustrated in FIG. 13. The destination transceiver circuit 1300 comprises a data multiplexer and compression circuit 1302 configured to receive the one frame per second video input from the source transceiver 24. A dynamic input buffer 1304 is coupled to the data multiplexer and compression circuit 1302 and is configured to shift the one frame per second into programmable math and distribution logic circuitry 1306. The programmable math and distribution logic circuitry 1306 is coupled to a working memory 1308 and configured to reconstruct or build intermediate frames between the frames at the dynamic input buffer 1304. The math and distribution logic circuitry 1306 reconstructs or builds the intermediate frames in a frame building buffer 1310, which is coupled to a video buffer 1312 configured to shift out 30 frames per second to the display 34.

[0165] The programmable math and distribution logic circuitry 1306 may be embedded in a processor that is configured to identify a plurality of points having at least one related characteristic in at least one of the first and second frames. The programmable math and distribution logic circuitry 1306 may also be configured to determine if at least one of the plurality of points has changed its position between the first frame and the second frame. The programmable math and distribution logic circuitry 1306 may be configured to associate the at least one of the plurality of points that has changed its position with at least a first pixel in the first frame and a second pixel in the second frame, and further configured to determine a relationship between a position of the first pixel and a position of the second pixel. The programmable math and distribution logic circuitry 1306 may be configured to determine in the at least one intermediate frame the position of the at least one of the plurality of points that has changed its position based at least in part on the relationship between the positions of the first and second pixels. The programmable math and distribution logic circuitry 1306 may be configured to identify a plurality of points that remained substantially without motion between the first and second frames. The programmable math and distribution logic circuitry 1306 may be configured to define positions of pixels of substantially the entire intermediate frames, comprising points in motion and substantially stationary points. In determining pixel information for stationary objects in intermediate frames, the programmable math and distribution logic circuitry 1306 is configured to identify in the intermediate frame pixel information for the plurality of points that remained unchanged based on at least one of (a) pixel information in the first frame, (b) pixel information in the second frame, (c) pixel information about the intermediate frame provided from a source of the first and second frames, and (d) averaging pixel information of the first and second frames. As indicated above, the programmable math and distribution logic circuitry 1204 and 1306 may be implemented in or in association with a source telephone and a destination telephone, respectively. Such telephones may function in a wired (e.g., POTS) or wireless (e.g., cellular or mobile) telephone network. The invention is not limited to telephone network implementations; it may be similarly implemented in any wired or wireless communication network that sends and/or receives images or video information.

[0166] The foregoing description details certain embodiments of the invention. It will be appreciated, however, that the invention can be practiced in many ways. For example, several components such as the programmable math and distribution logic circuitry 1306 and 1204 may be implemented in a single or multiple processors, dedicated hardware circuitry, software modules executed in a device such as a telephone or a computer, and many other implementations known in the art. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.

Claims

1. A method of constructing at least one intermediate frame of an image between first and second frames, the method comprising:

identifying a plurality of points having at least one related characteristic in at least one of the first and second frames;
determining if at least one of the plurality of points has changed its position between the first frame and the second frame;
associating the at least one of the plurality of points that has changed its position with at least a first pixel in the first frame and a second pixel in the second frame; and
determining a relationship between a position of the first pixel and a position of the second pixel.

2. The method of claim 1, further comprising determining in the at least one intermediate frame the position of the at least one of the plurality of points that has changed its position based at least in part on the relationship between the positions of the first and second pixel.

3. The method of claim 2, further comprising identifying a plurality of points that remained substantially without motion between the first and second frames.

4. The method of claim 3, further comprising defining position of pixels of substantially the entire intermediate frames comprising points in motion and substantially stationary points.

5. The method of claim 1, wherein determining the relationship between the position of the first and second pixels comprises deriving at least one coefficient in a motion equation.

6. The method of claim 5, wherein the motion equation determines position of the at least one of the plurality of points undergoing at least one of spinning, rotational, and translational motion.

7. The method of claim 6, wherein the position of the first and second pixels are identified at least in part by x and y coordinates.

8. The method of claim 1, wherein identifying the plurality of points having at least one related characteristic comprises defining an object.

9. The method of claim 8, wherein defining an object comprises defining at least a portion of a physical object in the image as viewed by an eye of an observer.

10. The method of claim 1, wherein identifying the plurality of points having at least one related characteristic comprises determining whether the plurality of points experience at least one of spinning, rotational, and translational motion.

11. The method of claim 1, wherein determining if at least one of the plurality of points has changed its position comprises identifying a point having a non-zero difference between a pixel position in the first frame and a substantially same position pixel in the second frame.

12. The method of claim 11, further comprising identifying a plurality of objects in the first and second frames.

13. The method of claim 1, further comprising transmitting the first and second frames from a transmitter to a receiver.

14. The method of claim 3, further comprising identifying in the intermediate frame pixel information for the plurality of points that remained unchanged based at least on one of (a) pixel information in the first frame, (b) pixel information in the second frame, (c) pixel information about the intermediate frame provided from a source of the first and second frames, and (d) averaging pixel information of the first and second frames.

15. The method of claim 14, wherein the pixel information for the plurality of points that remained unchanged comprises at least one of color and gray scale values.

16. The method of claim 15, wherein the pixel information for one of the plurality of points comprises substantially the same color information as that of at least one pixel located in a position in the first frame that is associated with substantially the same position of the one of the plurality of points in the first frame.

17. The method of claim 1, further comprising selectively requesting a source transmitter to communicate information about at least one pixel in the intermediate frame.

18. The method of claim 1, wherein determining the relationship between a position of the first pixel and a position of the second pixel comprises at least in part identifying non-zero differences between color or gray scale information of the first pixel and a third pixel located at substantially the same position in the second frame.

19. The method of claim 1, further comprising communicating at least the first and second frames from a source telephone to a destination telephone via a wired or wireless telephone network.

20. A system for constructing at least one intermediate frame of an image between first and second frames, the system comprising:

an identifier circuit configured to identify a plurality of points having at least one related characteristic in at least one of the first and second frames;
a compare circuit configured to determine if at least one of the plurality of points has changed its position between the first frame and the second frame; and
a processing circuit configured to associate the at least one of the plurality of points that has changed its position with at least a first pixel in the first frame and a second pixel in the second frame, and further configured to determine a relationship between a position of the first pixel and a position of the second pixel.

21. The system of claim 20, wherein the processing circuit is configured to determine in the at least one intermediate frame the position of the at least one of the plurality of points that has changed its position based at least in part on the relationship between the positions of the first and second pixel.

22. The system of claim 21, wherein the identifier circuit is configured to identify a plurality of points that remained substantially without motion between the first and second frames.

23. The system of claim 22, wherein the processing circuit is configured to define position of pixels of substantially the entire intermediate frames comprising points in motion and substantially stationary points.

24. The system of claim 20, wherein the processing circuit is configured to derive at least one coefficient in a motion equation.

25. The system of claim 24, wherein the motion equation determines position of the at least one of the plurality of points undergoing at least one of spinning, rotational, and translational motion.

26. The system of claim 25, wherein the position of the first and second pixels are identified at least in part by x and y coordinates.

27. The system of claim 20, wherein the identifier circuit is configured to define points of an object.

28. The system of claim 27, wherein the identifier circuit defines at least a portion of a physical object in the image as viewed by an eye of an observer.

29. The system of claim 20, wherein the identifier circuit determines whether the plurality of points experience at least one of spinning, rotational, and translational motion.

30. The system of claim 20, wherein the compare circuit is configured to identify a point having a non-zero difference between a pixel position in the first frame and a substantially same position pixel in the second frame.

31. The system of claim 30, wherein the identifier circuit is configured to identify a plurality of objects in the first and second frames.

32. The system of claim 20, further comprising a transmitter configured to send the first and second frames to a receiver.

33. The system of claim 22, wherein the processing circuit is configured to identify in the intermediate frame pixel information for the plurality of points that remained unchanged based at least on one of (a) pixel information in the first frame, (b) pixel information in the second frame, (c) pixel information about the intermediate frame provided from a source of the first and second frames, and (d) averaging pixel information of the first and second frames.

34. The system of claim 33, wherein the pixel information for the plurality of points that remained unchanged comprises at least one of color and gray scale values.

35. The system of claim 34, wherein the pixel information for one of the plurality of points comprises substantially the same color information as that of at least one pixel located in a position in the first frame that is associated with substantially the same position of the one of the plurality of points in the first frame.

36. The system of claim 20, wherein the processing circuit is configured to selectively request a source transmitter to send information about at least one pixel in the intermediate frame.

37. The system of claim 20, wherein the processing circuit is configured to identify non-zero differences between color or gray scale information of the first pixel and a third pixel located at substantially the same position in the second frame.

38. The system of claim 20, further comprising a source telephone configured to send at least the first and second frames to a destination telephone via a wired or wireless telephone network.

39. The system of claim 20, wherein a processor comprises the identifier, compare, and processing circuits.

Patent History
Publication number: 20040223050
Type: Application
Filed: Feb 26, 2004
Publication Date: Nov 11, 2004
Patent Grant number: 7414646
Inventor: John W. Callaci (Diamond Bar, CA)
Application Number: 10789437