TRANSFERRING INFORMATION BETWEEN DIFFERENT FREQUENCY BANDS


A method for generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band. The method includes obtaining a first plurality of samples by measuring a plurality of frequency responses of an environment in the first frequency band, obtaining a second plurality of samples by measuring a first set of frequency responses of the environment in the second frequency band, obtaining a mapping model based on the first plurality of samples and the second plurality of samples, obtaining a third plurality of samples by measuring a second set of frequency responses of the environment in the second frequency band, and obtaining the at least one of the plurality of data samples by applying the mapping model on the third plurality of samples.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of priority from pending U.S. Provisional Patent Application Ser. No. 63/105,368, filed on Oct. 26, 2020, and entitled “SYSTEM AND METHOD FOR TRANSFERRING INFORMATION COLLECTED FROM MEASUREMENTS OR OBSERVATIONS IN ONE FREQUENCY SUBBAND INTO ANOTHER DOMAIN,” which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present disclosure generally relates to imaging, and particularly, to imaging by wireless communication systems and machine learning models.

BACKGROUND

Information about characteristics of an event is generally obtained by measuring physical properties of an environment of interest. For example, information is obtained by measuring various responses of an environment to electromagnetic or mechanical waves in different frequency bands. However, measurement in specific frequency bands is not possible in many practical cases. Therefore, a method for transferring information from measurements in one frequency band to another is beneficial. Among conventional systems, sonar systems measure an environment in the ultrasonic frequency band, radar systems measure an environment in the radio frequency band, and cameras measure an environment in the visible light frequency band. Images with different characteristics are then generated from measurements in these various frequency bands. Conventional methods measure an environment in a specific frequency band and do not use information in other frequency bands. However, measurements obtained in different frequency bands are correlated because they capture information about a single environment. This correlation between measurements in different frequency bands is not utilized in conventional methods. As a result, the quality of images generated by conventional methods is limited by the characteristics of the frequency band chosen for measurement. In addition, when measurement instruments fail to measure an environment in a specific frequency band, conventional methods fail to provide information about physical properties of the environment, such as objects or people present in the environment. Furthermore, conventional methods do not translate information from one frequency band to another. As a result, when measurement in one frequency band is not practical, the performance of conventional methods may be degraded. For example, when a closed-circuit television in a surveillance system fails to provide video frames, service of the surveillance system may be stopped, exposing the environment of interest to potential risks.

There is, therefore, a need for an imaging method that generates images from measurements in more than one frequency band. There is also a need for a mapping model that translates measured information or data in one frequency band to information or data in another frequency band.

SUMMARY

This summary is intended to provide an overview of the subject matter of the present disclosure, and is not intended to identify essential elements or key elements of the subject matter, nor is it intended to be used to determine the scope of the claimed implementations. The proper scope of the present disclosure may be ascertained from the claims set forth below in view of the detailed description below and the drawings.

In one general aspect, the present disclosure describes an exemplary method for transferring information between different frequency bands by generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band. An exemplary method may include obtaining a first plurality of samples in the first frequency band, obtaining a second plurality of samples in the second frequency band, obtaining a mapping model based on the first plurality of samples and the second plurality of samples, obtaining a third plurality of samples in the second frequency band, and obtaining the at least one of the plurality of data samples based on the mapping model and the third plurality of samples. In an exemplary embodiment, the first plurality of samples may be obtained utilizing a plurality of measuring systems. In an exemplary embodiment, obtaining the first plurality of samples may include measuring a plurality of frequency responses of an environment in the first frequency band. In an exemplary embodiment, the second plurality of samples may be obtained utilizing a set of measuring systems. In an exemplary embodiment, obtaining the second plurality of samples may include measuring a first set of frequency responses of the environment in the second frequency band. In an exemplary embodiment, the mapping model may be obtained utilizing one or more processors. In an exemplary embodiment, obtaining the third plurality of samples may include measuring a second set of frequency responses of the environment in the second frequency band. In an exemplary embodiment, the third plurality of samples may be obtained utilizing the set of measuring systems. In an exemplary embodiment, the at least one of the plurality of data samples may be obtained utilizing the one or more processors. In an exemplary embodiment, obtaining the at least one of the plurality of data samples may include applying the mapping model on the third plurality of samples.

In an exemplary embodiment, measuring the plurality of frequency responses may include capturing video data. In an exemplary embodiment, the video data may be associated with a plurality of video frames. In an exemplary embodiment, the video data may be captured utilizing a plurality of video devices. In an exemplary embodiment, the video data may be captured by capturing video data associated with an ith set of video frames of the plurality of video frames in an ith time interval where i≥1. In an exemplary embodiment, each video frame in the ith set of video frames may be associated with a respective video device of the plurality of video devices. In an exemplary embodiment, measuring the first set of frequency responses may include obtaining a first plurality of channel state information (CSI) samples. In an exemplary embodiment, the first plurality of CSI samples may be obtained utilizing a communication system. In an exemplary embodiment, obtaining the first plurality of CSI samples may include obtaining an ith subset of the first plurality of CSI samples in the ith time interval. In an exemplary embodiment, measuring the second set of frequency responses may include obtaining a second plurality of CSI samples. In an exemplary embodiment, the second plurality of CSI samples may be obtained utilizing the communication system. In an exemplary embodiment, obtaining the at least one of the plurality of data samples may include obtaining at least one of a plurality of images. In an exemplary embodiment, the at least one of the plurality of images may be obtained utilizing the one or more processors. In an exemplary embodiment, obtaining the at least one of the plurality of images may include applying the mapping model on the second plurality of CSI samples.

In an exemplary embodiment, obtaining the ith subset of the first plurality of CSI samples may include transmitting a plurality of transmit signals, receiving a plurality of receive signals associated with the plurality of transmit signals, and generating the ith subset of the first plurality of CSI samples from the plurality of receive signals. In an exemplary embodiment, the plurality of transmit signals may be transmitted by a transmitter of the communication system. In an exemplary embodiment, the plurality of receive signals may be received by a receiver of the communication system.

In an exemplary embodiment, generating the ith subset of the first plurality of CSI samples may include obtaining a plurality of raw CSI samples and extracting the ith subset of the first plurality of CSI samples from the plurality of raw CSI samples. In an exemplary embodiment, the plurality of raw CSI samples may be obtained based on the plurality of receive signals. In an exemplary embodiment, extracting the ith subset of the first plurality of CSI samples may include compensating a phase offset of each of the plurality of raw CSI samples.

In an exemplary embodiment, transmitting the plurality of transmit signals may include transmitting each of the plurality of transmit signals in a respective sub-carrier of a plurality of sub-carriers and at a respective transmit moment of a plurality of transmit moments in the ith time interval. In an exemplary embodiment, each of the plurality of transmit signals may be transmitted by a respective transmit antenna of a plurality of transmit antennas. In an exemplary embodiment, the plurality of transmit antennas may be associated with the transmitter. In an exemplary embodiment, receiving the plurality of receive signals may include receiving each of the plurality of receive signals in a respective sub-carrier of the plurality of sub-carriers and at a respective receive moment of a plurality of receive moments in the ith time interval. In an exemplary embodiment, each of the plurality of receive signals may be received by a respective receive antenna of a plurality of receive antennas. In an exemplary embodiment, the plurality of receive antennas may be associated with the receiver.

In an exemplary embodiment, obtaining the plurality of raw CSI samples may include obtaining a CSI array of size N1×N2×N3 where N1=2MtMr, Mt is a number of the plurality of transmit antennas, Mr is a number of the plurality of receive antennas, N2 is a number of the plurality of sub-carriers, and N3 is a number of the plurality of receive moments. In an exemplary embodiment, obtaining the CSI array may include generating a plurality of CSI vectors and generating the CSI array from the plurality of CSI vectors. In an exemplary embodiment, each of the plurality of CSI vectors may be of size N1/2×1.

In an exemplary embodiment, generating each of the plurality of CSI vectors may include estimating a respective multiple-input multiple-output (MIMO) channel of a plurality of MIMO channels. In an exemplary embodiment, estimating each of the plurality of MIMO channels may include processing a respective subset of the plurality of receive signals. In an exemplary embodiment, each of the plurality of MIMO channels may include a wireless channel between the transmitter and the receiver in a respective sub-carrier of the plurality of sub-carriers and at a respective receive moment of the plurality of receive moments. In an exemplary embodiment, generating the CSI array may include setting each element in the CSI array to one of a real part or an imaginary part of a respective element in a respective CSI vector of the plurality of CSI vectors.

In an exemplary embodiment, obtaining the plurality of raw CSI samples may further include extracting a CSI sub-array from the CSI array. In an exemplary embodiment, the CSI sub-array may be of size N1×N2×N4 where 1≤N4<N3. In an exemplary embodiment, extracting the CSI sub-array may include randomly selecting N4 sub-arrays from the CSI array and generating the CSI sub-array from N4 sub-arrays. In an exemplary embodiment, each of N4 sub-arrays may be of size N1×N2. In an exemplary embodiment, N4 sub-arrays may be randomly selected out of N3 sub-arrays of size N1×N2 in the CSI array. In an exemplary embodiment, generating the CSI sub-array may include stacking N4 sub-arrays.

In an exemplary embodiment, obtaining the mapping model may include training a neural network. In an exemplary embodiment, training the neural network may include initializing the neural network and repeating an iterative process until a termination condition is satisfied. In an exemplary embodiment, the neural network may be initialized with a plurality of initial weights. An exemplary iterative process may include extracting at least one of a plurality of training images from an output of the neural network, generating a plurality of updated weights, and replacing the plurality of initial weights with the plurality of updated weights. In an exemplary embodiment, extracting the at least one of the plurality of training images may include applying the neural network on the ith subset of the first plurality of CSI samples. In an exemplary embodiment, generating the plurality of updated weights may include minimizing a loss function of the at least one of the plurality of training images and the ith set of video frames.

In an exemplary embodiment, applying the neural network on the ith subset of the first plurality of CSI samples may include obtaining a first plurality of feature maps, obtaining a first feature vector by flattening the first plurality of feature maps, obtaining a second feature vector by augmenting the first feature vector with a condition vector, obtaining a second plurality of feature maps based on the second feature vector, obtaining a third plurality of feature maps based on the second plurality of feature maps, and upsampling the third plurality of feature maps. In an exemplary embodiment, obtaining the first plurality of feature maps may include applying a first plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples. An exemplary condition vector may include zero valued entries except for a jth entry where 1≤j≤N1 and N1 is a number of the plurality of video devices. An exemplary jth entry may include a non-zero value. An exemplary jth entry may be associated with a jth video device of the plurality of video devices. In an exemplary embodiment, obtaining the second plurality of feature maps may include feeding the second feature vector to an input of a first fully connected layer of the neural network. In an exemplary embodiment, obtaining the third plurality of feature maps may include applying a first residual neural network (ResNet) of the neural network on the second plurality of feature maps.

In an exemplary embodiment, obtaining the second plurality of feature maps may further include extracting a third feature vector from an output of the first fully connected layer, generating a first latent feature map from the third feature vector, obtaining a second latent feature map based on the first latent feature map, obtaining a fourth plurality of feature maps based on the second latent feature map, and generating the second plurality of feature maps from the fourth plurality of feature maps. In an exemplary embodiment, generating the first latent feature map may include generating a matrix from a plurality of elements in the third feature vector. In an exemplary embodiment, obtaining the second latent feature map may include applying a padding process on the first latent feature map.

In an exemplary embodiment, obtaining the fourth plurality of feature maps may include applying a second plurality of convolutional layers of the neural network on the second latent feature map. In an exemplary embodiment, applying the second plurality of convolutional layers may include extracting the fourth plurality of feature maps from an output of a (2, L2)th convolutional layer of the second plurality of convolutional layers where L2 is a number of the second plurality of convolutional layers. In an exemplary embodiment, extracting the fourth plurality of feature maps may include obtaining a (2, l2+1)th plurality of feature maps where 1≤l2≤L2. In an exemplary embodiment, obtaining the (2, l2+1)th plurality of feature maps may include generating a (2, l2)th plurality of filtered feature maps, generating a (2, l2)th plurality of normalized feature maps based on the (2, l2)th plurality of filtered feature maps, and generating the (2, l2+1)th plurality of feature maps based on the (2, l2)th plurality of normalized feature maps. In an exemplary embodiment, generating the (2, l2)th plurality of filtered feature maps may include applying a (2, l2)th plurality of filters on a (2, l2)th plurality of feature maps. In an exemplary embodiment, a (2, 1)st plurality of feature maps may include the second latent feature map. In an exemplary embodiment, generating the (2, l2)th plurality of normalized feature maps may include applying the instance normalization process on the (2, l2)th plurality of filtered feature maps. In an exemplary embodiment, generating the (2, l2+1)th plurality of feature maps may include implementing a (2, l2)th non-linear activation function on each of the (2, l2)th plurality of normalized feature maps. In an exemplary embodiment, generating the second plurality of feature maps may include applying the padding process on each of the fourth plurality of feature maps.

In an exemplary embodiment, upsampling the third plurality of feature maps may include extracting a jth training image of the plurality of training images from an output of a (3, L3)th convolutional layer of a third plurality of convolutional layers of the neural network where L3 is a number of the third plurality of convolutional layers. In an exemplary embodiment, extracting the jth training image may include obtaining a (3, l3+1)th plurality of feature maps where 1≤l3≤L3. In an exemplary embodiment, obtaining the (3, l3+1)th plurality of feature maps may include generating an l3th plurality of upsampled feature maps, generating a (3, l3)th plurality of filtered feature maps from the l3th plurality of upsampled feature maps, generating a (3, l3)th plurality of normalized feature maps from the (3, l3)th plurality of filtered feature maps, and generating the (3, l3+1)th plurality of feature maps from (3, l3)th plurality of normalized feature maps. In an exemplary embodiment, generating the l3th plurality of upsampled feature maps may include implementing an upsampling process on a (3, l3)th plurality of feature maps. In an exemplary embodiment, a (3, 1)st plurality of feature maps may include the third plurality of feature maps. In an exemplary embodiment, generating the (3, l3)th plurality of filtered feature maps may include applying a (3, l3)th plurality of filters on the l3th plurality of upsampled feature maps. In an exemplary embodiment, generating the (3, l3)th plurality of normalized feature maps may include applying the instance normalization process on the (3, l3)th plurality of filtered feature maps. In an exemplary embodiment, generating the (3, l3+1)th plurality of feature maps may include implementing a (3, l3)th non-linear activation function on each of the (3, l3)th plurality of normalized feature maps.

In an exemplary embodiment, applying the neural network on the ith subset of the first plurality of CSI samples may include obtaining a fifth plurality of feature maps, and applying a plurality of sub-networks of the neural network on the fifth plurality of feature maps. In an exemplary embodiment, obtaining the fifth plurality of feature maps may include applying a fourth plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples. In an exemplary embodiment, applying the plurality of sub-networks on the fifth plurality of feature maps may include applying a jth sub-network of the plurality of sub-networks on the fifth plurality of feature maps. An exemplary jth sub-network may be associated with a jth video device of the plurality of video devices. In an exemplary embodiment, applying the jth sub-network may include obtaining a sixth plurality of feature maps based on the fifth plurality of feature maps, obtaining a seventh plurality of feature maps based on the sixth plurality of feature maps, and upsampling the seventh plurality of feature maps. In an exemplary embodiment, obtaining the sixth plurality of feature maps may include feeding the fifth plurality of feature maps to an input of a second fully connected layer of the jth sub-network. In an exemplary embodiment, obtaining the seventh plurality of feature maps may include applying a second residual neural network (ResNet) of the jth sub-network on the sixth plurality of feature maps.

Other exemplary systems, methods, features and advantages of the implementations will be, or will become, apparent to one of ordinary skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description and this summary, be within the scope of the implementations, and be protected by the claims herein.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawing figures depict one or more implementations in accord with the present teachings, by way of example only, not by way of limitation. In the figures, like reference numerals refer to the same or similar elements.

FIG. 1A shows a flowchart of a method for generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1B shows a flowchart for obtaining a plurality of channel state information (CSI) samples, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1C shows a flowchart for generating a subset of a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1D shows a flowchart for obtaining a plurality of raw CSI samples, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1E shows a flowchart for obtaining a CSI array, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1F shows a flowchart for extracting a CSI sub-array from a CSI array, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1G shows a flowchart for obtaining a mapping model, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1H shows a flowchart for repeating an iterative process, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1I shows a flowchart for applying a first implementation of a neural network on a subset of a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1J shows a flowchart for obtaining a first plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1K shows a flowchart for obtaining a second plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1L shows a flowchart for upsampling a plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1M shows a flowchart for applying a second implementation of a neural network on a subset of a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 1N shows a flowchart for applying a sub-network on a plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 2 shows a schematic of a system for generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 3 shows a schematic of a plurality of video frames and a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 4A shows a schematic of a neural network and a plurality of training images, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 4B shows a schematic of a first implementation of a neural network, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 4C shows a schematic of a convolutional layer, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 4D shows a schematic of a second implementation of a neural network, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 4E shows a schematic of a sub-network, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 5 shows a high-level functional block diagram of a computer system, consistent with one or more exemplary embodiments of the present disclosure.

DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent that the present teachings may be practiced without such details. In other instances, well known methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.

The following detailed description is presented to enable a person skilled in the art to make and use the methods and devices disclosed in exemplary embodiments of the present disclosure. For purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the present disclosure. However, it will be apparent to one skilled in the art that these specific details are not required to practice the disclosed exemplary embodiments. Descriptions of specific exemplary embodiments are provided only as representative examples. Various modifications to the exemplary implementations will be readily apparent to one skilled in the art, and the general principles defined herein may be applied to other implementations and applications without departing from the scope of the present disclosure. The present disclosure is not intended to be limited to the implementations shown, but is to be accorded the widest possible scope consistent with the principles and features disclosed herein.

Herein is disclosed an exemplary method and system for transferring information between different frequency bands by generating a number of images from channel state information (CSI) samples in a communication system. An exemplary CSI sample may be referred to as channel properties of a communication link. An exemplary CSI sample may describe how a signal propagates from a transmitter to a receiver and may represent a combined effect of scattering, fading, and power decay with distance. An exemplary method may include capturing video data associated with a number of video frames by a number of video devices. Exemplary video devices may capture video data associated with different views of an environment. Then, a neural network is trained to map CSI samples of a wireless channel to video frames captured from an environment of the wireless channel. Exemplary video devices may capture a number of video frames while a communication system simultaneously estimates CSI samples of the wireless channel. Next, an exemplary neural network may be trained for mapping CSI samples to video frames. An exemplary trained neural network may then be utilized for generating images from CSI samples. Images that are generated by an exemplary neural network may provide information about different views of an environment. Generated images may be substituted for video frames to eliminate a need for video devices or may be combined with video frames to obtain more information.

FIG. 1A shows a flowchart of a method for generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, a method 100 may include obtaining a first plurality of samples in a first frequency band (step 102), obtaining a second plurality of samples in a second frequency band (step 104), obtaining a mapping model based on the first plurality of samples and the second plurality of samples (step 106), obtaining a third plurality of samples in the second frequency band (step 108), and obtaining the data sample based on the mapping model and the third plurality of samples (step 110).

FIG. 2 shows a schematic of a system for generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, different steps of method 100 may be implemented utilizing a system 200. In an exemplary embodiment, system 200 may include a plurality of video devices 202, a communication system 204, and a processor 206. In an exemplary embodiment, communication system 204 may include a transmitter 204A and a receiver 204B. In an exemplary embodiment, plurality of video devices 202 may provide processor 206 with video data. Exemplary video data may be converted into a plurality of video frames. In an exemplary embodiment, communication system 204 may provide processor 206 with a first plurality of CSI samples. In an exemplary embodiment, receiver 204B may obtain the first plurality of CSI samples and may send the first plurality of CSI samples to processor 206. Then, in an exemplary embodiment, processor 206 may obtain a mapping model for translating information in the first plurality of CSI samples to information in the plurality of video frames. In an exemplary embodiment, after obtaining the mapping model, processor 206 may generate at least one of a plurality of images 208 from a second plurality of CSI samples. In an exemplary embodiment, processor 206 may generate the at least one of plurality of images 208 as a supplement or a substitute for video frames captured by plurality of video devices 202 in different operating scenarios. In an exemplary embodiment, in a normal operating scenario, the at least one of plurality of images 208 may be combined with a plurality of video frames obtained by plurality of video devices 202 for extracting more information from an environment 210. In an exemplary embodiment, in a critical operating scenario, when plurality of video devices 202 fail to capture video data associated with video frames from environment 210, processor 206 may substitute the at least one of plurality of images 208 for images captured from plurality of video devices 202 and may utilize the generated images to monitor environment 210.

For further detail with respect to step 102, FIG. 3 shows a schematic of a plurality of video frames and a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1A, 2, and 3, in an exemplary embodiment, step 102 may include obtaining the first plurality of samples. In an exemplary embodiment, the first plurality of samples may be obtained utilizing a first plurality of measuring systems. In an exemplary embodiment, obtaining the first plurality of samples may include measuring a first plurality of frequency responses of environment 210 in the first frequency band. In an exemplary embodiment, the first plurality of measuring systems may include plurality of video devices 202. In an exemplary embodiment, the first frequency band may include a visible light frequency range. In an exemplary embodiment, measuring the first plurality of frequency responses may include capturing video data associated with a plurality of video frames 302. In an exemplary embodiment, video data associated with plurality of video frames 302 may be captured utilizing plurality of video devices 202. In an exemplary embodiment, plurality of video devices 202 may include a number of cameras. An exemplary camera may capture a scene of environment 210. An exemplary camera may be fixed at a place in environment 210 and may capture video data. Exemplary video data may be converted into plurality of video frames 302 utilizing video codecs of cameras. Exemplary video data may be stored utilizing a video codec in different video coding formats such as H.262, MPEG-4 Visual, H.264, and HEVC. In an exemplary embodiment, video data associated with plurality of video frames 302 may be captured by capturing an ith set of video frames 304 of plurality of video frames 302 in an ith time interval 306 where i≥1. In an exemplary embodiment, video data associated with each video frame in ith set of video frames 304 may be captured utilizing a respective video device of plurality of video devices 202. Exemplary video data may be converted into ith set of video frames 304 by plurality of video devices 202. In an exemplary embodiment, a frame rate of each of plurality of video devices 202 may be high compared with movements of objects in environment 210. As a result, in an exemplary embodiment, movements of objects in consecutive video frames captured by each of plurality of video devices 202 may not be observable. In an exemplary embodiment, video data associated with a plurality of video frames 308 captured at small time intervals may be discarded to reduce a computational complexity of method 100, that is, plurality of video frames 302 may be obtained by temporally downsampling plurality of video frames 308. In an exemplary embodiment, video frames captured by plurality of video devices 202 may include color images. Exemplary color images may be converted to grayscale images. In an exemplary embodiment, each of plurality of video frames 302 may include a respective grayscale image. In an exemplary embodiment, video data associated with video frames captured by plurality of video devices 202 may be of large size, resulting in high computational complexity for obtaining the mapping model. To prevent high computational complexity, exemplary video frames may be resized to a smaller size.
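By way of illustration only, the following Python sketch shows one possible preprocessing pipeline consistent with the temporal downsampling, grayscale conversion, and resizing described above. The function name preprocess_frames, the OpenCV-based implementation, and the keep_every and out_size values are illustrative assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def preprocess_frames(frames, keep_every=5, out_size=(64, 64)):
    """Temporally downsample, grayscale, and resize captured video frames.

    frames: iterable of BGR uint8 arrays, e.g. decoded by a video codec.
    keep_every and out_size are illustrative values.
    """
    processed = []
    for idx, frame in enumerate(frames):
        if idx % keep_every != 0:  # discard frames captured at small time intervals
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # color to grayscale
        small = cv2.resize(gray, out_size)              # reduce computational cost
        processed.append(small.astype(np.float32) / 255.0)
    return np.stack(processed)
```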

In further detail with regard to step 104, FIG. 1B shows a flowchart for obtaining a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1B, 2, and 3, in an exemplary embodiment, step 104 may include obtaining the second plurality of samples. In an exemplary embodiment, the second plurality of samples may be obtained utilizing a set of measuring systems. In an exemplary embodiment, obtaining the second plurality of samples may include measuring a first set of frequency responses of environment 210 in the second frequency band. In an exemplary embodiment, the set of measuring systems may include communication system 204. In an exemplary embodiment, the second frequency band may include a microwave frequency range. In an exemplary embodiment, measuring the first set of frequency responses may include obtaining a first plurality of CSI samples 310. Specifically, in an exemplary embodiment, first plurality of CSI samples 310 may be obtained from receiver 204B. In an exemplary embodiment, obtaining first plurality of CSI samples 310 may include obtaining an ith subset 312 of first plurality of CSI samples 310 in time interval 306. In an exemplary embodiment, obtaining subset 312 may include transmitting a plurality of transmit signals (step 112), receiving a plurality of receive signals associated with the plurality of transmit signals (step 114), and generating subset 312 from the plurality of receive signals (step 116).

In an exemplary embodiment, step 112 may include transmitting the plurality of transmit signals. In an exemplary embodiment, the plurality of transmit signals may be transmitted to obtain subset 312. In an exemplary embodiment, the plurality of transmit signals may be transmitted by transmitter 204A. In an exemplary embodiment, communication system 204 may include an orthogonal frequency division multiplexing (OFDM) communication system. In an exemplary embodiment, transmitter 204A may transmit each of the plurality of transmit signals in a respective resource block, that is, in a respective sub-carrier and in a respective time frame. In an exemplary embodiment, transmitting the plurality of transmit signals may include transmitting each of the plurality of transmit signals in a respective sub-carrier of a plurality of sub-carriers and at a respective transmit moment of a plurality of transmit moments in time interval 306. Exemplary plurality of transmit signals may include a plurality of packets. Exemplary plurality of packets may be transmitted by transmitting the plurality of transmit signals. In an exemplary embodiment, each packet may include a respective preamble. In an exemplary embodiment, each of the plurality of packets may be transmitted in a respective sub-carrier and at a respective transmit moment. In an exemplary embodiment, transmitter 204A may include an access point of a Wi-Fi system. An exemplary access point may transmit signals according to an OFDM Wi-Fi standard such as 802.11a, 802.11n, and 802.11g. An exemplary access point may successively transmit signals in a series of transmit moments. An exemplary transmit moment may refer to a time instant that transmitter 204A transmits a transmit signal. An exemplary Wi-Fi standard may determine the time interval between consecutive transmit moments. An exemplary OFDM Wi-Fi standard may determine a frequency band, a number of sub-carriers, and a format of each of the plurality of packets. In an exemplary embodiment, communication system 204 may include a multiple-input multiple-output (MIMO) communication system. In an exemplary embodiment, transmitter 204A may include a plurality of transmit antennas. In an exemplary embodiment, transmitter 204A may transmit each of the plurality of transmit signals by a respective transmit antenna of the plurality of transmit antennas. Each of exemplary transmit antennas may transmit a transmit signal according to a MIMO Wi-Fi standard such as 802.11n.

In an exemplary embodiment, step 114 may include receiving the plurality of receive signals. In an exemplary embodiment, the plurality of receive signals may be received by receiver 204B. In an exemplary embodiment, receiver 204B may include an access point of a Wi-Fi system. In an exemplary embodiment, the plurality of receive signals may include a noisy version of the plurality of transmit signals, that is, the plurality of receive signals may include a summation of attenuated transmit signals and thermal noise. In an exemplary embodiment, communication system 204 may include an OFDM communication system. In an exemplary embodiment, receiver 204B may receive each of the plurality of receive signals in a respective resource block, that is, in a respective sub-carrier and in a respective time frame. In an exemplary embodiment, receiving the plurality of receive signals may include receiving each of the plurality of receive signals in a respective sub-carrier of the plurality of sub-carriers and at a respective receive moment of a plurality of receive moments in time interval 306. An exemplary access point may successively receive signals in a series of receive moments. An exemplary receive moment may refer to a time instant that receiver 204B receives a receive signal. In an exemplary embodiment, the plurality of packets may be received by receiving the plurality of receive signals. In an exemplary embodiment, each of the plurality of packets may be received in a respective sub-carrier and at a respective receive moment. In an exemplary embodiment, communication system 204 may include a MIMO communication system. In an exemplary embodiment, receiver 204B may include a plurality of receive antennas. In an exemplary embodiment, receiver 204B may receive each of the plurality of receive signals by a respective receive antenna of the plurality of receive antennas.
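For illustration only, such receive signals may be modeled per sub-carrier as attenuated transmit signals plus thermal noise. The following minimal numpy sketch of a per-sub-carrier MIMO model uses arbitrary assumed antenna counts, noise level, and symbol choices.

```python
import numpy as np

rng = np.random.default_rng(0)
Mt, Mr, N2 = 2, 2, 64  # illustrative antenna and sub-carrier counts

# Per-sub-carrier MIMO model: each receive signal is a summation of
# attenuated transmit signals and thermal noise, y = H x + n.
H = (rng.normal(size=(N2, Mr, Mt)) + 1j * rng.normal(size=(N2, Mr, Mt))) / np.sqrt(2)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(N2, Mt)))  # unit-modulus symbols
n = 0.05 * (rng.normal(size=(N2, Mr)) + 1j * rng.normal(size=(N2, Mr)))
y = np.einsum("frt,ft->fr", H, x) + n  # receive signals at one receive moment
```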

For further detail regarding step 116, FIG. 1C shows a flowchart for generating a subset of a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1C and 3, in an exemplary embodiment, generating subset 312 may include obtaining a plurality of raw CSI samples (step 118) and extracting subset 312 from the plurality of raw CSI samples (step 120). In an exemplary embodiment, subset 312 may include CSI samples that are obtained in time interval 306, that is, the time interval of capturing video data associated with ith set of video frames 304. In other words, in an exemplary embodiment, subset 312 may be obtained simultaneously with capturing video data associated with ith set of video frames 304.

For further detail regarding step 118, FIG. 1D shows a flowchart for obtaining a plurality of raw CSI samples, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1D, 2, and 3, in an exemplary embodiment, the plurality of raw CSI samples may be obtained based on the plurality of receive signals. In an exemplary embodiment, receiver 204B may process the plurality of receive signals and provide processor 206 with the plurality of raw CSI samples. An exemplary receiver 204B, that is, a Wi-Fi access point, may process the plurality of receive signals according to channel estimation procedures defined by Wi-Fi standards such as 802.11n. An exemplary plurality of raw CSI samples may be referred to as an array of complex numbers representing attenuation and phase shift of receive signals compared with respective transmit signals. In an exemplary embodiment, obtaining the plurality of raw CSI samples may include obtaining a CSI array based on the plurality of receive signals (step 122) and extracting a CSI sub-array from the CSI array (step 124).

For further detail with respect to step 122, FIG. 1E shows a flowchart for obtaining a CSI array, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, the CSI array may be of size N1×N2×N3 where N1=2MtMr, Mt is a number of the plurality of transmit antennas, Mr is a number of the plurality of receive antennas, N2 is a number of the plurality of sub-carriers, and N3 is a number of the plurality of receive moments. In an exemplary embodiment, obtaining the CSI array may include generating a plurality of CSI vectors (step 125) and generating the CSI array from the plurality of CSI vectors (step 126).

In an exemplary embodiment, step 125 may include generating a plurality of CSI vectors. In an exemplary embodiment, each of the plurality of CSI vectors may be of size N1/2×1.

In an exemplary embodiment, generating each of the plurality of CSI vectors may include estimating a respective MIMO channel of a plurality of MIMO channels. In an exemplary embodiment, each of the plurality of MIMO channels may include a wireless channel between transmitter 204A and receiver 204B in a respective sub-carrier of the plurality of sub-carriers and at a respective receive moment of the plurality of receive moments. In an exemplary embodiment, estimating each of the plurality of MIMO channels may include processing a respective subset of the plurality of receive signals. In an exemplary embodiment, communication system 204 may include a Wi-Fi network. To measure CSI, an exemplary Wi-Fi transmitter (similar to transmitter 204A) may send long training symbols including pre-defined symbols for each subcarrier, in a respective packet preamble. In an exemplary embodiment, when long training symbols are received by a Wi-Fi receiver (similar to receiver 204B), the Wi-Fi receiver may estimate a CSI matrix in each sub-carrier and at a receive moment by processing the plurality of received signals and the original long training symbols.

An exemplary processing procedure may be performed according to channel estimation procedures defined in Wi-Fi standards such as 802.11n. In an exemplary embodiment, a CSI of each of the plurality of MIMO channels may include a CSI matrix of size Mr×Mt. In an exemplary embodiment, each of the plurality of CSI vectors may be obtained by stacking columns of a respective CSI matrix. As a result, in an exemplary embodiment, each of the plurality of CSI vectors may be of size MrMt×1, that is, N1/2×1.

In an exemplary OFDM system, CSI matrices of MIMO channels in different sub-carriers and different moments may differ from each other. In an exemplary embodiment, each of the plurality of CSI vectors may include elements of a CSI matrix in a respective sub-carrier and at a respective moment. As a result, a number of the plurality of CSI vectors may be equal to N2×N3.
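As an illustrative sketch only, one common way to estimate a MIMO CSI matrix from known long training symbols is a least-squares fit; the actual procedure is defined by the relevant Wi-Fi standard. The array shapes and the function name estimate_csi_vectors below are assumptions for illustration.

```python
import numpy as np

def estimate_csi_vectors(Y, X):
    """Least-squares CSI estimates, one CSI vector per (sub-carrier, moment).

    Y: received long training symbols, shape (N2, N3, Mr, T).
    X: known transmitted long training symbols, shape (N2, Mt, T).
    Returns a complex array of shape (N2, N3, Mr*Mt): N2 x N3 CSI vectors,
    each of size MrMt x 1, before real/imaginary expansion.
    """
    N2, N3, Mr, T = Y.shape
    Mt = X.shape[1]
    vectors = np.empty((N2, N3, Mr * Mt), dtype=complex)
    for f in range(N2):
        X_pinv = np.linalg.pinv(X[f])                 # (T, Mt)
        for t in range(N3):
            H = Y[f, t] @ X_pinv                      # (Mr, Mt) CSI matrix
            vectors[f, t] = H.reshape(-1, order="F")  # stack columns
    return vectors
```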

In an exemplary embodiment, step 126 may include generating the CSI array from the plurality of CSI vectors. In an exemplary embodiment, generating the CSI array may include separating real parts and imaginary parts of complex numbers in each of the plurality of CSI vectors and obtaining an expanded CSI vector by generating a vector including the vector of real parts alongside the vector of imaginary parts. As a result, in an exemplary embodiment, each element in the CSI array may be set to one of a real part or an imaginary part of a respective element in a respective CSI vector of the plurality of CSI vectors. Therefore, in an exemplary embodiment, a length of each expanded CSI vector is twice a length of each CSI vector. In an exemplary embodiment, since each CSI vector is of size N1/2×1, each expanded CSI vector may be of size N1×1. In an exemplary embodiment, as described with respect to step 125, a number of the plurality of CSI vectors may be equal to N2×N3. As a result, in an exemplary embodiment, the CSI array may be obtained by generating a 3D array of size N1×N2×N3. An exemplary 3D array may include N2×N3 vectors. Each of exemplary vectors in the 3D array may be equal to a respective expanded CSI vector.
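Continuing the sketch above, for illustration only, the N1×N2×N3 real-valued CSI array may be assembled by splitting real and imaginary parts as follows; build_csi_array is an illustrative name.

```python
import numpy as np

def build_csi_array(vectors):
    """Assemble the real-valued N1 x N2 x N3 CSI array.

    vectors: complex array of shape (N2, N3, Mr*Mt), e.g. from
    estimate_csi_vectors above. Each expanded CSI vector stacks the vector
    of real parts alongside the vector of imaginary parts, so N1 = 2*Mr*Mt.
    """
    expanded = np.concatenate([vectors.real, vectors.imag], axis=-1)  # (N2, N3, N1)
    return np.transpose(expanded, (2, 0, 1))                          # (N1, N2, N3)
```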

Referring again to FIG. 1D, in an exemplary embodiment, step 124 may include extracting the CSI sub-array from the CSI array. For further detail with regard to step 124, FIG. 1F shows a flowchart for extracting a CSI sub-array from the CSI array, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, extracting the CSI sub-array may include randomly selecting N4 sub-arrays from the CSI array (step 128) and generating the CSI sub-array from N4 sub-arrays (step 130). In an exemplary embodiment, the CSI array may include CSI samples of a wireless channel between transmitter 204A and receiver 204B in time interval 306. In an exemplary embodiment, the wireless channel may be estimated N3 times in each of the plurality of sub-carriers by sending packets from transmitter 204A to receiver 204B with a constant time difference. However, in an exemplary embodiment, transmitter 204A may not transmit consecutive packets with a constant time difference due to a quality of the wireless channel or limited communication resources. As a result, in an exemplary embodiment, consecutive packets may be sent with variable time differences. In addition, in an exemplary embodiment, the mapping model may become biased toward a constant time difference between consecutive packets. In an exemplary embodiment, implementing step 124 may prevent the mapping model from being biased toward a constant time difference between consecutive packets by selecting CSI samples from packets with random time differences.

Referring to FIGS. 1F and 3, in an exemplary embodiment, step 128 may include randomly selecting N4 sub-arrays from the CSI array. In an exemplary embodiment, the CSI sub-array may be of size N1×N2×N4 where 1≤N4<N3. In an exemplary embodiment, the CSI array may include subset 312 of plurality of CSI samples 310. In an exemplary embodiment, the CSI sub-array may include a subset 314 of plurality of CSI samples 310. In an exemplary embodiment, each sub-array of size N1×N2 in the CSI array may include CSI samples obtained at a respective receive moment in time interval 306. In an exemplary embodiment, the CSI array may include N3 sub-arrays where N3 is a number of the plurality of receive moments. In an exemplary embodiment, each of N4 sub-arrays may be of size N1×N2, that is, each of N4 sub-arrays may include CSI samples obtained at a respective receive moment of the plurality of receive moments. In an exemplary embodiment, N4 sub-arrays may be randomly selected out of N3 sub-arrays of size N1×N2 in the CSI array. As a result, in an exemplary embodiment, N4 sub-arrays may include CSI samples obtained at various receive moments with random time differences, enhancing a generalization of the mapping model.

In an exemplary embodiment, step 130 may include generating the CSI sub-array from N4 sub-arrays. In an exemplary embodiment, generating the CSI sub-array may include stacking N4 sub-arrays. In an exemplary embodiment, the CSI sub-array may be obtained by concatenating sub-arrays of the CSI array in a block matrix. Then, in an exemplary embodiment, the CSI sub-array may be obtained by multiplying the block matrix with a selection matrix. In an exemplary embodiment, each row of the selection matrix may be randomly selected from columns of an identity matrix. An exemplary identity matrix may be referred to as a square matrix with all elements of the principal diagonal equal to one and all other elements equal to zero. An exemplary identity matrix may be obtained from a memory coupled to processor 206.
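For illustration, the random selection and stacking of steps 128 and 130 may be sketched as follows; indexing with randomly chosen moment indices is equivalent to concatenating the N3 sub-arrays and multiplying by the selection matrix described above. The function name extract_csi_subarray is an illustrative assumption.

```python
import numpy as np

def extract_csi_subarray(csi_array, n4, rng=None):
    """Randomly keep N4 of the N3 receive moments of an N1 x N2 x N3 array.

    Selecting moments at random helps keep the mapping model from becoming
    biased toward a constant time difference between packets.
    """
    rng = np.random.default_rng() if rng is None else rng
    n3 = csi_array.shape[2]
    keep = np.sort(rng.choice(n3, size=n4, replace=False))
    return csi_array[:, :, keep]  # stacked N4 sub-arrays, size N1 x N2 x N4
```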

Referring again to FIGS. 1C and 3, in an exemplary embodiment, step 120 may include extracting subset 312 from the plurality of raw CSI samples. In an exemplary embodiment, extracting subset 312 may include compensating a phase offset of each of the plurality of raw CSI samples. In an exemplary embodiment, phases of the plurality of raw CSI samples may be affected by several sources of error such as carrier frequency offset (CFO) and sampling frequency offset (SFO). As a result, in an exemplary embodiment, a performance of the mapping model may be impacted by erroneous phases of the plurality of raw CSI samples. In an exemplary embodiment, a linear transformation, referred to as phase sanitization, may remove CFO and SFO from phases of the plurality of raw CSI samples. In an exemplary embodiment, phase sanitization may include obtaining α0 and α1 by the following:

α0 = (1/N2)·(ϕ1 + ϕ2 + … + ϕN2)  Equation (1)

α1 = (ϕN2 − ϕ1)/(2πN2)  Equation (2)

where ϕf is a phase of a raw CSI sample in a sub-carrier f of the plurality of subcarriers and 1≤f≤N2. Next, in an exemplary embodiment, a compensated phase of a CSI sample may be obtained according to an operation defined by:


ϕ̂f = ϕf − (α1·f + α0)  Equation (3)
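A minimal numpy sketch of the phase sanitization of Equations (1) through (3) follows; the phase-unwrapping step is an assumption that is commonly required in practice but is not recited above, and sanitize_phase is an illustrative name.

```python
import numpy as np

def sanitize_phase(raw_csi):
    """Remove the linear CFO/SFO phase ramp per Equations (1)-(3).

    raw_csi: complex CSI samples with the N2 sub-carriers on the last axis.
    Unwrapping keeps phase differences across sub-carriers from jumping
    by multiples of 2*pi (an assumption, not part of the disclosure).
    """
    n2 = raw_csi.shape[-1]
    phase = np.unwrap(np.angle(raw_csi), axis=-1)
    alpha0 = phase.mean(axis=-1, keepdims=True)                     # Equation (1)
    alpha1 = (phase[..., -1:] - phase[..., :1]) / (2 * np.pi * n2)  # Equation (2)
    f = np.arange(1, n2 + 1)
    compensated = phase - (alpha1 * f + alpha0)                     # Equation (3)
    return np.abs(raw_csi) * np.exp(1j * compensated)
```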

Referring again to FIGS. 1A and 3, in an exemplary embodiment, step 106 may include obtaining the mapping model based on plurality of video frames 302 and first plurality of CSI samples 310. In further detail regarding step 106, FIG. 1G shows a flowchart for obtaining a mapping model, consistent with one or more exemplary embodiments of the present disclosure.

FIG. 4A shows a schematic of a neural network and a plurality of training images, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, obtaining the mapping model in step 106 may include training a neural network 400. Referring to FIGS. 1G, 2 and 4A, an exemplary mapping model may be obtained utilizing processor 206. In an exemplary embodiment, training neural network 400 may include initializing neural network 400 (step 132) and repeating an iterative process (step 134).

For further detail with respect to step 132, in an exemplary embodiment, initializing neural network 400 may include generating the plurality of initial weights. In an exemplary embodiment, generating the plurality of initial weights may include generating a plurality of random variables from a probability distribution. In an exemplary embodiment, the probability distribution may be determined according to a required range of each of the plurality of initial weights. In an exemplary embodiment, the probability distribution may include a Gaussian distribution or a uniform distribution.
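For illustration, initializing a network's weights with random draws from a probability distribution may be sketched in PyTorch as follows; the Gaussian choice matches the description above, a uniform draw would equally match it, and the standard deviation of 0.02 is an arbitrary assumption.

```python
import torch.nn as nn

def init_weights(module):
    """Draw initial weights from a zero-mean Gaussian distribution."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Usage: network.apply(init_weights)
```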

In further detail regarding step 134, FIG. 1H shows a flowchart for repeating an iterative process, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1H, 2, and 4A, an exemplary iterative process may include extracting at least one of a plurality of training images 402 from an output of neural network 400 (step 136), generating a plurality of updated weights (step 138), and replacing the plurality of initial weights with the plurality of updated weights (step 140). In an exemplary embodiment, the iterative process may be repeated until a termination condition is satisfied. In an exemplary embodiment, neural network 400 may be trained to approximate each of plurality of training images 402 with a respective video frame of plurality of video frames 302. An exemplary termination condition may include a threshold on an error of approximating each of plurality of training images 402 with a respective video frame of ith set of video frames 304. An exemplary approximation error may be equal to a mean squared error. In an exemplary embodiment, each of plurality of training images 402 may include an approximation of a respective video frame in ith set of video frames 304. In an exemplary embodiment, each video frame in ith set of video frames 304 may comprise a representation of data captured related to an image with a respective view of environment 210. As a result, in an exemplary embodiment, each of plurality of training images 402 may include an approximation of an image with a respective view of environment 210.
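A minimal PyTorch sketch of this iterative process follows, assuming a data loader that yields paired CSI samples and ground-truth video frames; the Adam optimizer, learning rate, and loss threshold are illustrative assumptions, while the mean-squared-error loss and the termination condition mirror the description above.

```python
import torch
import torch.nn as nn

def train(network, loader, epochs=10, lr=1e-4, tol=1e-3):
    """Iterative training of steps 136-140 with an MSE loss.

    `network` is any nn.Module mapping CSI samples to images; a conditional
    network as in FIG. 4B would also need a device index from the loader.
    """
    optimizer = torch.optim.Adam(network.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                  # approximation error
    for _ in range(epochs):
        for csi, frames in loader:
            images = network(csi)           # step 136: extract training images
            loss = loss_fn(images, frames)  # compare with captured video frames
            optimizer.zero_grad()
            loss.backward()                 # step 138: generate updated weights
            optimizer.step()                # step 140: replace the old weights
            if loss.item() < tol:           # termination condition
                return network
    return network
```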

For further detail with regard to step 136, FIG. 1I shows a flowchart for applying a first implementation of a neural network on a subset of a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, method 136A may include a first implementation of step 136.

FIG. 4B shows a schematic of a first implementation of a neural network, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1I, 3-4B, in an exemplary embodiment, neural network 400A may include a first implementation of neural network 400. In an exemplary embodiment, neural network 400A may generate only one of plurality of training images 402. In an exemplary embodiment, method 136A may be implemented to extract a jth training image 403 of plurality of training images 402 from neural network 400A where 1≤j≤N1 and N1 is a number of plurality of video devices 202. In other words, in an exemplary embodiment, neural network 400A may generate jth training image 403. In an exemplary embodiment, extracting the at least one of plurality of training images 402 may include extracting jth training image 403 from neural network 400A. In other words, in an exemplary embodiment, extracting the at least one of plurality of training images 402 from neural network 400 in step 136 may include extracting jth training image 403 from neural network 400A in step 136A. In an exemplary embodiment, extracting jth training image 403 from neural network 400A may include applying neural network 400A on subset 314 of first plurality of CSI samples 310. In an exemplary embodiment, applying neural network 400A on subset 314 may include obtaining a first plurality of feature maps (step 142), obtaining a first feature vector by flattening the first plurality of feature maps (step 144), obtaining a second feature vector by augmenting the first feature vector (step 146), obtaining a second plurality of feature maps based on the second feature vector (step 148), obtaining a third plurality of feature maps based on the second plurality of feature maps (step 150), and upsampling the third plurality of feature maps (step 152). An exemplary feature map may be referred to as a 2D matrix of real-valued data. An exemplary convolutional layer may receive a block of input feature maps and may generate a block of output feature maps by applying convolution operations on the block of input feature maps.
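By way of illustration only, and as detailed step by step in the following paragraphs, the forward pass of steps 142 through 152 may be sketched as the following PyTorch module. All channel counts, kernel sizes, strides, layer depths, and the latent-map size are assumptions, the padding processes are folded into the convolution padding, and ResBlock and CsiToImage are hypothetical names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Residual block standing in for the first ResNet of neural network 400A."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))

class CsiToImage(nn.Module):
    def __init__(self, num_devices, csi_channels, feat=64, latent=16):
        super().__init__()
        self.num_devices = num_devices
        self.latent = latent
        self.encoder = nn.Sequential(  # first plurality of convolutional layers
            nn.Conv2d(csi_channels, feat, 3, stride=2, padding=1),
            nn.InstanceNorm2d(feat), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1),
            nn.InstanceNorm2d(feat), nn.ReLU(),
        )
        self.fc = nn.LazyLinear(latent * latent)  # first fully connected layer
        self.mid = nn.Sequential(                 # second plurality of conv layers
            nn.Conv2d(1, feat, 3, padding=1), nn.InstanceNorm2d(feat), nn.ReLU(),
        )
        self.resnet = nn.Sequential(ResBlock(feat), ResBlock(feat))  # first ResNet
        self.decoder = nn.Sequential(             # upsampling stage
            nn.Upsample(scale_factor=2), nn.Conv2d(feat, feat, 3, padding=1),
            nn.InstanceNorm2d(feat), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(feat, 1, 3, padding=1),
        )

    def forward(self, csi, device_index):
        feats = self.encoder(csi)                                 # step 142
        flat = feats.flatten(start_dim=1)                         # step 144
        cond = F.one_hot(device_index, self.num_devices).float()  # condition vector
        z = self.fc(torch.cat([flat, cond], dim=1))               # steps 146-148
        z = z.view(-1, 1, self.latent, self.latent)               # first latent map
        maps = self.resnet(self.mid(z))                           # steps 148-150
        return self.decoder(maps)                                 # step 152

# Usage: CsiToImage(num_devices=4, csi_channels=8)(csi, torch.tensor([1]))
```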

In an exemplary embodiment, step 142 may include obtaining a first plurality of feature maps. Referring to FIGS. 3 and 4B, in an exemplary embodiment, neural network 400A may include a first plurality of convolutional layers 404. In an exemplary embodiment, obtaining a first plurality of feature maps 406 may include applying first plurality of convolutional layers 404 on subset 314 of first plurality of CSI samples 310 as described below. In an exemplary embodiment, applying first plurality of convolutional layers 404 may include extracting first plurality of feature maps 406 from an output of a (1, L1)th convolutional layer 408 of first plurality of convolutional layers 404 where L1 is a number of first plurality of convolutional layers 404. An exemplary convolutional layer may be referred to as a number of filters followed by a number of non-linear activation functions. Exemplary filters may apply convolution operations on input feature maps of the convolutional layer. An exemplary convolutional layer, as described below in detail, may further include a normalization process.

FIG. 4C shows a schematic of a convolutional layer, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 2A and 4C, in an exemplary embodiment, extracting first plurality of feature maps 406 may include obtaining a (1, l1+1)th plurality of feature maps 410 where 1≤l1≤L1. In an exemplary embodiment, first plurality of convolutional layers 404 may include a (1, l1)th convolutional layer 412. In an exemplary embodiment, obtaining (1, l1+1)th plurality of feature maps 410 may include applying (1, l1)th convolutional layer 412 on a (1, l1)th plurality of feature maps 413.

For further detail with regard to step 142, FIG. 1J shows a flowchart for obtaining a first plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, obtaining (1, l1+1)th plurality of feature maps 410 may include generating a (1, l1)th plurality of filtered feature maps (step 154), generating a (1, l1)th plurality of normalized feature maps from the (1, l1)th plurality of filtered feature maps (step 156), and generating (1, l1+1)th plurality of feature maps 410 from the (1, l1)th plurality of normalized feature maps (step 158).

Referring to FIGS. 1J, 4B, and 4C, in an exemplary embodiment, step 154 may include generating a (1, l1)th plurality of filtered feature maps 414. In an exemplary embodiment, (1, l1)th convolutional layer 412 may include a plurality of filters 416. In an exemplary embodiment, generating (1, l1)th plurality of filtered feature maps 414 may include applying plurality of filters 416 on (1, l1)th plurality of feature maps 413. In an exemplary embodiment, each of plurality of filters 416 may perform a convolution operation on a respective feature map of (1, l1)th plurality of feature maps 413. In an exemplary embodiment, a (1, 1)st plurality of feature maps may include subset 314 of first plurality of CSI samples 310.

In an exemplary embodiment, step 156 may include generating a (1, l1)th plurality of normalized feature maps 420. In an exemplary embodiment, (1, l1)th convolutional layer 412 may include an instance normalization process 422. In an exemplary embodiment, generating (1, l1)th plurality of normalized feature maps 420 may include applying instance normalization process 422 on (1, l1)th plurality of filtered feature maps 414. In an exemplary embodiment, each of (1, l1)th plurality of normalized feature maps 420 may be generated by applying instance normalization process 422 on a respective filtered feature map of (1, l1)th plurality of filtered feature maps 414. In an exemplary embodiment, instance normalization process 422 may normalize (1, l1)th plurality of filtered feature maps 414 by an average and a standard deviation of (1, l1)th plurality of filtered feature maps 414. In an exemplary embodiment, instance normalization process 422 may calculate the average and the standard deviation of (1, l1)th plurality of filtered feature maps 414, and all elements of (1, l1)th plurality of filtered feature maps 414 may be normalized in accordance with the average and the standard deviation. Therefore, in an exemplary embodiment, elements of (1, l1)th plurality of filtered feature maps 414 may follow a normal distribution, which may considerably reduce a required time for training neural network 400A.

In an exemplary embodiment, step 158 may include generating (1, l1+1)th plurality of feature maps 410. In an exemplary embodiment, (1, l1)th convolutional layer 412 may include a (1, l1)th non-linear activation function 424. In an exemplary embodiment, generating (1, l1+1)th plurality of feature maps 410 may include implementing (1, l1)th non-linear activation function 424 on each of (1, l1)th plurality of normalized feature maps 420. In an exemplary embodiment, implementing (1, l1)th non-linear activation function 424 may include implementing one of a rectified linear unit (ReLU) function or an exponential linear unit (ELU) function. In an exemplary embodiment, implementing (1, l1)th non-linear activation function 424 may include implementing other types of non-linear activation functions such as leaky ReLU, scaled ELU, parametric ReLU, etc.
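
As an illustration of steps 154-158, the following is a minimal sketch of one such convolutional layer; the PyTorch framework, the 3×3 kernel size, and the 2D layout of the feature maps are assumptions, since the disclosure does not specify an implementation.

```python
import torch.nn as nn

class ConvLayer(nn.Module):
    """Sketch of a (1, l1)th convolutional layer 412: a set of filters,
    an instance normalization process, and a non-linear activation,
    mirroring steps 154, 156, and 158."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # plurality of filters 416 (3x3 kernels assumed)
        self.filters = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # instance normalization process 422: normalize by per-map average and std
        self.norm = nn.InstanceNorm2d(out_channels)
        # non-linear activation function 424 (ReLU; could be ELU, leaky ReLU, etc.)
        self.activation = nn.ReLU()

    def forward(self, feature_maps):
        return self.activation(self.norm(self.filters(feature_maps)))
```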

Referring again to FIGS. 1I and 4B, in an exemplary embodiment, step 144 may include obtaining a first feature vector 426 by flattening first plurality of feature maps 406. In an exemplary embodiment, first plurality of feature maps 406 may include a 2D array. In an exemplary embodiment, obtaining first feature vector 426 may include implementing a flattening process 428 on first plurality of feature maps 406 in the 2D array. An exemplary flattening process may be referred to as generating a vector from an array by stacking columns of the array.

In an exemplary embodiment, step 146 may include obtaining a second feature vector 429 by augmenting first feature vector 426. In an exemplary embodiment, first feature vector 426 may be augmented with a condition vector 430. An exemplary augmenting process may be referred to as increasing a length of a vector by adding a number of elements to the vector. As a result, in an exemplary embodiment, second feature vector 429 may include elements of both first feature vector 426 and condition vector 430. An exemplary length of second feature vector 429 may be equal to a summation of a length of first feature vector 426 and a length of condition vector 430. In an exemplary embodiment, condition vector 430 may include zero valued entries except for a jth entry. An exemplary jth entry may include a non-zero value. In an exemplary embodiment, neural network 400A may generate only jth training image 403 of plurality of training images 402. In an exemplary embodiment, condition vector 430 may determine jth training image 403 to be generated by neural network 400A. In an exemplary embodiment, jth training image 403 may be determined by setting a jth entry of condition vector 430 to a non-zero value. In an exemplary embodiment, when a view of a jth video device 212 of plurality of video devices 202 is desired, the jth entry of condition vector 430 may be set to a non-zero value. In an exemplary embodiment, an approximation of an image with a view of jth video device 212 may be obtained by setting the jth entry of condition vector 430 to a non-zero value. An exemplary non-zero value may be equal to one. In an exemplary embodiment, neural network 400A may be trained such that jth training image 403 may include an approximation of a jth video frame in ith set of video frames 304. An exemplary approximation may be performed by minimizing a mean squared error between jth training image 403 and the jth video frame, as described in detail with respect to step 138. An exemplary jth video frame may be captured by jth video device 212.
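
As a sketch of steps 144-146, flattening followed by augmentation with a one-hot condition vector might look as follows; the PyTorch tensors and the 0-based index are assumptions.

```python
import torch

def augment_with_condition(feature_maps: torch.Tensor, j: int, n_views: int) -> torch.Tensor:
    """Sketch of steps 144-146: flatten the first plurality of feature
    maps into a first feature vector, then append a condition vector
    that is zero except for the entry selecting the j-th camera view."""
    first_feature_vector = feature_maps.flatten()  # stack all entries into one vector
    condition_vector = torch.zeros(n_views)
    condition_vector[j] = 1.0  # non-zero jth entry requests the jth video device's view
    return torch.cat([first_feature_vector, condition_vector])
```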

In an exemplary embodiment, step 148 may include obtaining a second plurality of feature maps 432. FIG. 1K shows a flowchart for obtaining a second plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1K and 4B, in an exemplary embodiment, obtaining second plurality of feature maps 432 in step 148 may include feeding second feature vector 429 to an input of a first fully connected layer of neural network 400 (step 160), extracting a third feature vector from an output of the first fully connected layer (step 162), generating a first latent feature map from the third feature vector (step 164), obtaining a second latent feature map based on the first latent feature map (step 166), obtaining a fourth plurality of feature maps based on the second latent feature map (step 168), and generating the second plurality of feature maps from the fourth plurality of feature maps (step 170). In an exemplary embodiment, neural network 400A may include a first fully connected layer 433. An exemplary fully connected layer may be referred to as a set of input neurons and a set of output neurons in which each input neuron is connected to every output neuron.

In an exemplary embodiment, step 160 may include feeding second feature vector 429 to an input of first fully connected layer 433. In an exemplary embodiment, the input of first fully connected layer 433 may include a number of neurons. Exemplary neurons may be set to a number of values. In an exemplary embodiment, feeding second feature vector 429 to the input of first fully connected layer 433 may include setting values of neurons in the input to values of entries in second feature vector 429.

In an exemplary embodiment, step 162 may include extracting a third feature vector 434 from an output of first fully connected layer 433. In an exemplary embodiment, first fully connected layer 433 may include a non-linear activation function. An exemplary non-linear activation function may include a leaky ReLU activation function.

In an exemplary embodiment, step 164 may include generating a first latent feature map 435 from third feature vector 434. In an exemplary embodiment, generating first latent feature map 435 may include implementing a deflattening process 436 on third feature vector 434. An exemplary deflattening process may be referred to as generating a matrix from a vector. In an exemplary embodiment, generating first latent feature map 435 may include generating a matrix from a plurality of elements in third feature vector 434. In an exemplary embodiment, an aspect ratio of first latent feature map 435 may be equal to an aspect ratio of jth training image 403.

In an exemplary embodiment, step 166 may include obtaining a second latent feature map 438. In an exemplary embodiment, obtaining second latent feature map 438 may include applying a padding process 440 on first latent feature map 435. In an exemplary embodiment, padding process 440 may include adding columns and rows to edges of first latent feature map 435. In an exemplary embodiment, padding process 440 may extend an area of first latent feature map 435 that is processed by filters of a convolutional layer. An exemplary padding process may include one of a zero padding process or a reflection padding process.
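
Steps 160-166 can be sketched with the dimensions reported in the Example section of this disclosure (a 2563-entry second feature vector, a 972-entry third feature vector, and a 36×27 latent map padded to 38×29); the reflection padding choice here and the row/column ordering of the deflattening are assumptions.

```python
import torch
import torch.nn as nn

fc = nn.Linear(2563, 972)      # first fully connected layer (Example dimensions)
activation = nn.LeakyReLU()
pad = nn.ReflectionPad2d(1)    # padding process: one row/column added on each edge

second_feature_vector = torch.randn(1, 2563)
third_feature_vector = activation(fc(second_feature_vector))   # steps 160-162
# step 164: deflattening into a 27-row by 36-column latent feature map,
# matching the 4:3 aspect ratio of the target frames
first_latent_feature_map = third_feature_vector.reshape(1, 1, 27, 36)
second_latent_feature_map = pad(first_latent_feature_map)      # step 166: 27x36 -> 29x38
```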

In an exemplary embodiment, step 168 may include obtaining a fourth plurality of feature maps 442. In an exemplary embodiment, obtaining fourth plurality of feature maps 442 may include applying a second plurality of convolutional layers 444 of neural network 400 on second latent feature map 438. In an exemplary embodiment, applying second plurality of convolutional layers 444 on second latent feature map 438 may be similar to applying first plurality of convolutional layers 404 on subset 314 of first plurality of CSI samples 310 in step 142.

In an exemplary embodiment, step 170 may include generating second plurality of feature maps 432. In an exemplary embodiment, generating second plurality of feature maps 432 may include applying a padding process 446 on each of fourth plurality of feature maps 442. In an exemplary embodiment, applying padding process 446 on each of fourth plurality of feature maps 442 may be similar to applying padding process 440 on first latent feature map 435 in step 166.

Referring again to FIGS. 1I and 4B, in an exemplary embodiment, step 150 may include obtaining a third plurality of feature maps 448. In an exemplary embodiment, neural network 400A may include a first residual neural network (ResNet). In an exemplary embodiment, obtaining third plurality of feature maps 448 may include applying the first ResNet on second plurality of feature maps 432. An exemplary first ResNet may include a plurality of ResNet blocks 450. In an exemplary embodiment, applying the first ResNet may include applying plurality of ResNet blocks 450 on second plurality of feature maps 432. In an exemplary embodiment, applying plurality of ResNet blocks 450 may include extracting third plurality of feature maps 448 from an output of an Lrth ResNet block 452 of plurality of ResNet blocks 450 where Lr is a number of plurality of ResNet blocks 450. In an exemplary embodiment, extracting third plurality of feature maps 448 may include obtaining an (lr+1)th plurality of residual feature maps. In an exemplary embodiment, extracting third plurality of feature maps 448 may include applying an lrth ResNet block of plurality of ResNet blocks 450 on an lrth plurality of residual feature maps where 1≤lr≤Lr. In an exemplary embodiment, a first (1st) plurality of residual feature maps may include second plurality of feature maps 432. In an exemplary embodiment, each of plurality of ResNet blocks 450 may include two cascaded convolutional layers and a residual connection. An exemplary residual connection may connect an input layer to an output layer while bypassing intermediate layers. In an exemplary embodiment, plurality of ResNet blocks 450 may be added to neural network 400A to avoid a vanishing gradient problem or to mitigate an accuracy saturation problem.
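
A minimal sketch of one ResNet block 450 follows, assuming the same filter-normalization composition as the convolutional layers above and a 3×3 kernel size.

```python
import torch.nn as nn

class ResNetBlock(nn.Module):
    """Sketch of one ResNet block 450: two cascaded convolutional layers
    with a residual connection that bypasses them, mitigating vanishing
    gradients and accuracy saturation."""

    def __init__(self, channels: int):
        super().__init__()
        self.cascade = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.cascade(x)  # residual connection: input bypasses the cascade
```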

For further detail with respect to step 152, FIG. 1L shows a flowchart for upsampling a plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1L and 4B, in an exemplary embodiment, step 152 may include upsampling third plurality of feature maps 448. An exemplary upsampling process may include an upsampling layer followed by a convolutional layer. An exemplary upsampling layer may repeat rows and columns of an input feature map. In an exemplary embodiment, upsampling third plurality of feature maps 448 may include extracting jth training image 403 from an output of a (3, L3)th convolutional layer 454 of a third plurality of convolutional layers 456 of neural network 400A where L3 is a number of third plurality of convolutional layers 456. In an exemplary embodiment, extracting jth training image 403 may include obtaining a (3, l3+1)th plurality of feature maps where 1≤l3≤L3. In an exemplary embodiment, obtaining the (3, l3+1)th plurality of feature maps may include generating an l3th plurality of upsampled feature maps (step 172), and applying a (3, l3)th convolutional layer of third plurality of convolutional layers 456 on the l3th plurality of upsampled feature maps (step 174).

In an exemplary embodiment, step 172 may include generating an l3th plurality of upsampled feature maps. In an exemplary embodiment, generating the l3th plurality of upsampled feature maps may include implementing an upsampling process 458 on a (3, l3)th plurality of feature maps. In an exemplary embodiment, a (3, 1)st plurality of feature maps may include third plurality of feature maps 448. In an exemplary embodiment, implementing upsampling process 458 may include adding rows and/or columns to each of the (3, l3)th plurality of feature maps. As a result, in an exemplary embodiment, a size of l3th plurality of upsampled feature maps may be larger than a size of the (3, l3)th plurality of feature maps. In an exemplary embodiment, upsampling process 458 may include upsampling each of the (3, l3)th plurality of feature maps by a factor of two.

In an exemplary embodiment, step 174 may include applying a (3, l3)th convolutional layer 460 of third plurality of convolutional layers 456 on the l3th plurality of upsampled feature maps. In an exemplary embodiment, applying a (3, l3)th convolutional layer 460 on the l3th plurality of upsampled feature maps may be similar to applying (1, l1)th convolutional layer 412 on a (1, l1)th plurality of feature maps 413 in step 142.
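
One stage of the upsampling process (steps 172-174) could be sketched as follows; nearest-neighbor interpolation implements the row/column repetition, and the kernel size is an assumption.

```python
import torch.nn as nn

class UpsamplingLayer(nn.Module):
    """Sketch of one (3, l3)th stage of step 152: repeat rows and columns
    of each feature map by a factor of two (step 172), then apply a
    convolutional layer (step 174)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")  # repeats rows/columns
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, feature_maps):
        return self.conv(self.upsample(feature_maps))
```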

Referring again to FIG. 1H, step 136 may include extracting at least one of plurality of training images 402. For further detail with regard to step 136, FIG. 1M shows a flowchart for applying a second implementation of a neural network on a subset of a plurality of CSI samples, consistent with one or more exemplary embodiments of the present disclosure. In an exemplary embodiment, a method 136B may include a second implementation of step 136.

FIG. 4D shows a schematic of a second implementation of a neural network, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1M and 3-4D, in an exemplary embodiment, neural network 400B may include a second implementation of neural network 400. In an exemplary embodiment, extracting the at least one of plurality of training images 402 from neural network 400 in step 136 may include extracting plurality of training images 402 from neural network 400B in step 136B, as described below. In an exemplary embodiment, extracting plurality of training images 402 may include applying neural network 400B on subset 314 of first plurality of CSI samples 310. In an exemplary embodiment, applying neural network 400B on subset 314 may include obtaining a fifth plurality of feature maps (step 176) and applying a plurality of sub-networks of neural network 400B on the fifth plurality of feature maps (step 178).

For further detail with respect to step 176, obtaining a fifth plurality of feature maps 462 may include applying a fourth plurality of convolutional layers 464 of neural network 400B on subset 314. In an exemplary embodiment, applying fourth plurality of convolutional layers 464 on subset 314 may be similar to applying first plurality of convolutional layers 404 on subset 314 in step 142.

In further detail regarding step 178, FIG. 4E shows a schematic of a sub-network, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1M and 4E, in an exemplary embodiment, step 178 may include applying a plurality of sub-networks 466 of neural network 400B on fifth plurality of feature maps 462. In an exemplary embodiment, applying plurality of sub-networks 466 on fifth plurality of feature maps 462 may include applying a jth sub-network 468 of plurality of sub-networks 466 on fifth plurality of feature maps 462. In an exemplary embodiment, jth sub-network 468 may include a cascade of a fully connected layer, a plurality of convolutional layers, a plurality of ResNet blocks, and a plurality of upsampling layers. An exemplary fully connected layer of jth sub-network 468 may be similar to first fully connected layer 433, that is, configurations of neurons in both fully connected layers are the same. Exemplary plurality of convolutional layers may be similar to second plurality of convolutional layers 444. Exemplary plurality of ResNet blocks may be similar to plurality of ResNet blocks 450. Each of exemplary plurality of upsampling layers may include an upsampling process similar to upsampling process 458 followed by a convolutional layer. In an exemplary embodiment, jth sub-network 468 may generate a jth training image 470 of plurality of training images 402. In an exemplary embodiment, jth training image 470 may include an approximation of a jth video frame in ith set of video frames 304. An exemplary jth video frame may be captured by jth video device 212.
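
The overall shape of neural network 400B, a shared convolutional front end feeding one sub-network per video device, might be sketched as follows; the sub-network internals are elided and all module names are hypothetical.

```python
import torch.nn as nn

class MultiViewNetwork(nn.Module):
    """Sketch of neural network 400B: a fourth plurality of convolutional
    layers shared by all views, followed by a plurality of sub-networks,
    each generating the training image of one video device."""

    def __init__(self, shared_layers: nn.Module, sub_networks: list):
        super().__init__()
        self.shared_layers = shared_layers               # fourth plurality of convolutional layers
        self.sub_networks = nn.ModuleList(sub_networks)  # one sub-network per camera

    def forward(self, csi_subset):
        features = self.shared_layers(csi_subset)        # fifth plurality of feature maps
        return [sub(features) for sub in self.sub_networks]  # one image per view
```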

In further detail regarding step 178, FIG. 1N shows a flowchart for applying a sub-network on a plurality of feature maps, consistent with one or more exemplary embodiments of the present disclosure. Referring to FIGS. 1N, 4B, and 4E, in an exemplary embodiment, applying jth sub-network 468 on fifth plurality of feature maps 462 may include obtaining a sixth plurality of feature maps (step 180), obtaining a seventh plurality of feature maps by applying a ResNet on the sixth plurality of feature maps (step 182), and upsampling the seventh plurality of feature maps (step 184).

In an exemplary embodiment, step 180 may include obtaining a sixth plurality of feature maps 472 based on fifth plurality of feature maps 462. In an exemplary embodiment, obtaining sixth plurality of feature maps 472 based on fifth plurality of feature maps 462 may be similar to obtaining second plurality of feature maps 432 based on second feature vector 429 in step 148.

In an exemplary embodiment, step 182 may include obtaining a seventh plurality of feature maps 474 by applying a second ResNet 476 of jth sub-network 468 on sixth plurality of feature maps 472. In an exemplary embodiment, applying second ResNet 476 on sixth plurality of feature maps 472 may be similar to applying plurality of ResNet blocks 450 on second plurality of feature maps 432 in step 150.

In an exemplary embodiment, step 184 may include upsampling seventh plurality of feature maps 474. In an exemplary embodiment, upsampling seventh plurality of feature maps 474 may be similar to upsampling third plurality of feature maps 448 in step 152. In an exemplary embodiment, seventh plurality of feature maps 474 may be upsampled utilizing an upsampling block 478. In an exemplary embodiment, upsampling block 478 may implement a plurality of upsampling processes on seventh plurality of feature maps 474. Each of exemplary processes of upsampling block 478 may be similar to upsampling process 458. In an exemplary embodiment, upsampling block 478 may include a plurality of convolutional layers. Exemplary plurality of convolutional layers of upsampling block 478 may be similar to third plurality of convolutional layers 456.

Referring again to FIGS. 1H and 4A, in an exemplary embodiment, step 138 may include generating a plurality of updated weights. In an exemplary embodiment, generating the plurality of updated weights may include minimizing a loss function of plurality of training images 402 and ith set of video frames 304. An exemplary loss function may include a mean squared error between each of plurality of training images 402 and a respective video frame of ith set of video frames 304. In an exemplary embodiment, minimizing the loss function may be performed by a gradient descent method. An exemplary gradient descent method may include generating a plurality of adjustment values. Each of exemplary plurality of adjustment values may be proportional to a gradient of the loss function with respect to each of the plurality of initial weights. Exemplary plurality of adjustment values may be obtained by a back propagation algorithm. Exemplary plurality of updated weights may be obtained by adding each of the plurality of adjustment values to a respective initial weight of the plurality of initial weights.
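
A single iteration of this training loop might look as follows; the PyTorch framework and optimizer object are assumptions, and the network is assumed to return its training images stacked in one tensor so that a single mean squared error can be computed against the video frames.

```python
import torch
import torch.nn.functional as F

def training_iteration(network, optimizer, csi_subset, video_frames):
    """Sketch of steps 136-140: generate training images, minimize the
    mean squared error against the captured video frames, and replace
    the current weights with the updated weights."""
    optimizer.zero_grad()
    training_images = network(csi_subset)             # step 136
    loss = F.mse_loss(training_images, video_frames)  # loss function of step 138
    loss.backward()    # back propagation yields the adjustment values
    optimizer.step()   # adjustment values update the weights (step 140)
    return loss.item()
```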

In an exemplary embodiment, step 140 may include replacing the plurality of updated weights with the plurality of initial weights. An exemplary value of the loss function may be minimized by the plurality of updated weights. In an exemplary embodiment, in following iterations of the iterative process, the loss function may be minimized by calculating a gradient of the loss function with respect to the plurality of updated weights instead of the plurality of initial weights.

Referring again to FIGS. 1A, and 2-4A, in an exemplary embodiment, step 108 may include obtaining a third plurality of samples in the second frequency band. In an exemplary embodiment, obtaining the third plurality of samples may include obtaining a second plurality of CSI samples. In an exemplary embodiment, the second plurality of CSI samples may be obtained utilizing communication system 204. In an exemplary embodiment, obtaining the second plurality of CSI samples may be similar to obtaining subset 312 of first plurality of CSI samples 310 in step 104.

In an exemplary embodiment, step 110 may include obtaining the at least one of the plurality of data samples based on the mapping model and the third plurality of samples. In an exemplary embodiment, obtaining the at least one of the plurality of data samples may include obtaining the at least one of plurality of images 208 based on the mapping model and the second plurality of CSI samples. In an exemplary embodiment, at least one of plurality of images 208 may be obtained utilizing processor 206. In an exemplary embodiment, obtaining the at least one of plurality of images 208 may include applying the mapping model on the second plurality of CSI samples. In an exemplary embodiment, applying the mapping model on the second plurality of CSI samples may include applying neural network 400 on the second plurality of CSI samples. In an exemplary embodiment, applying neural network 400 on the second plurality of CSI samples may be similar to applying neural network 400 on subset 314 of first plurality of CSI samples 310 in step 136. In an exemplary embodiment, obtaining the at least one of plurality of images 208 may include obtaining a jth image of plurality of images 208. In an exemplary embodiment, obtaining the jth image may include applying neural network 400A on the second plurality of CSI samples. In an exemplary embodiment, obtaining the at least one of plurality of images 208 may include obtaining plurality of images 208. In an exemplary embodiment, obtaining plurality of images 208 may include applying neural network 400B on the second plurality of CSI samples.
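
Inference (steps 108-110) then reduces to a forward pass of the trained network on freshly measured CSI samples; a minimal sketch, assuming the PyTorch-style modules from the earlier sketches:

```python
import torch

def generate_images(network: torch.nn.Module, second_csi_samples: torch.Tensor):
    """Sketch of steps 108-110: apply the trained mapping model to a
    second plurality of CSI samples measured after training."""
    network.eval()
    with torch.no_grad():  # no weight updates during inference
        return network(second_csi_samples)
```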

FIG. 5 shows an example computer system 500 in which an embodiment of the present invention, or portions thereof, may be implemented as computer-readable code, consistent with exemplary embodiments of the present disclosure. For example, different steps of method 100 may be implemented in computer system 500 using hardware, software, firmware, tangible computer readable media having instructions stored thereon, or a combination thereof and may be implemented in one or more computer systems or other processing systems. Hardware, software, or any combination of such may embody any of the modules and components in FIGS. 1A-4E.

If programmable logic is used, such logic may execute on a commercially available processing platform or a special purpose device. One of ordinary skill in the art may appreciate that an embodiment of the disclosed subject matter can be practiced with various computer system configurations, including multi-core multiprocessor systems, minicomputers, mainframe computers, computers linked or clustered with distributed functions, as well as pervasive or miniature computers that may be embedded into virtually any device.

For instance, a computing device having at least one processor device and a memory may be used to implement the above-described embodiments. A processor device may be a single processor, a plurality of processors, or combinations thereof. Processor devices may have one or more processor “cores.”

An embodiment of the invention is described in terms of this example computer system 500. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter.

Processor device 504 may be a special purpose (e.g., a graphical processing unit) or a general-purpose processor device. As will be appreciated by persons skilled in the relevant art, processor device 504 may also be a single processor in a multi-core/multiprocessor system, such system operating alone, or in a cluster of computing devices operating in a cluster or server farm. Processor device 504 may be connected to a communication infrastructure 506, for example, a bus, message queue, network, or multi-core message-passing scheme.

In an exemplary embodiment, computer system 500 may include a display interface 502, for example a video connector, to transfer data to a display unit 530, for example, a monitor. Computer system 500 may also include a main memory 508, for example, random access memory (RAM), and may also include a secondary memory 510. Secondary memory 510 may include, for example, a hard disk drive 512, and a removable storage drive 514. Removable storage drive 514 may include a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. Removable storage drive 514 may read from and/or write to a removable storage unit 518 in a well-known manner. Removable storage unit 518 may include a floppy disk, a magnetic tape, an optical disk, etc., which may be read by and written to by removable storage drive 514. As will be appreciated by persons skilled in the relevant art, removable storage unit 518 may include a computer usable storage medium having stored therein computer software and/or data.

In alternative implementations, secondary memory 510 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 500. Such means may include, for example, a removable storage unit 522 and an interface 520. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 522 and interfaces 520 which allow software and data to be transferred from removable storage unit 522 to computer system 500.

Computer system 500 may also include a communications interface 524. Communications interface 524 allows software and data to be transferred between computer system 500 and external devices. Communications interface 524 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, or the like. Software and data transferred via communications interface 524 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 524. These signals may be provided to communications interface 524 via a communications path 526. Communications path 526 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels.

In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as removable storage unit 518, removable storage unit 522, and a hard disk installed in hard disk drive 512. Computer program medium and computer usable medium may also refer to memories, such as main memory 508 and secondary memory 510, which may be memory semiconductors (e.g. DRAMs, etc.).

Computer programs (also called computer control logic) are stored in main memory 508 and/or secondary memory 510. Computer programs may also be received via communications interface 524. Such computer programs, when executed, enable computer system 500 to implement different embodiments of the present disclosure as discussed herein. In particular, the computer programs, when executed, enable processor device 504 to implement the processes of the present disclosure, such as the operations in method 100 illustrated by flowcharts of FIGS. 1A-1N discussed above. Accordingly, such computer programs represent controllers of computer system 500. Where an exemplary embodiment of method 100 is implemented using software, the software may be stored in a computer program product and loaded into computer system 500 using removable storage drive 514, interface 520, and hard disk drive 512, or communications interface 524.

Embodiments of the present disclosure also may be directed to computer program products including software stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a data processing device to operate as described herein. An embodiment of the present disclosure may employ any computer useable or readable medium. Examples of computer useable media include, but are not limited to, primary storage devices (e.g., any type of random access memory) and secondary storage devices (e.g., hard drives, floppy disks, CD-ROMs, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnological storage devices, etc.).

The embodiments have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.

Example

In this example, a performance of a method (similar to method 100) for generating images from CSI samples is demonstrated. Different steps of the method are implemented utilizing a system (similar to system 200). The system includes a Wi-Fi system (similar to communication system 204). Each access point (similar to transmitter 204A and receiver 204B) of the Wi-Fi system includes 3 antennas, that is, Mt=Mr=3. A bandwidth of the Wi-Fi system is about 20 MHz, divided into 56 sub-carriers, that is, N2=56. A subset of a plurality of CSI samples (similar to subset 312 of plurality of CSI samples 310) includes 25 CSI sub-arrays, that is, N3=25. A subset of CSI samples (similar to subset 314) is obtained by randomly selecting 15 CSI sub-arrays out of 25 CSI sub-arrays, that is, N4=15. Three cameras (similar to plurality of video devices 202) capture video data and convert the video data into video frames (similar to plurality of video frames 308) at 30 frames per second (fps) with 640×480 resolution. Cameras are arranged in a room (similar to environment 210) such that different views of the room are covered. Dimensions of video frames are reduced by a factor of two. Video frames are downsampled by a factor of five, resulting in 6 fps, that is, a time interval (similar to time interval 306) between two consecutive frames is about 166.7 msec.
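
The random sub-array selection described above can be sketched in a few lines; the array contents are synthetic and the NumPy usage is an assumption.

```python
import numpy as np

# Sketch of forming the N4=15 subset from the N3=25 CSI sub-arrays in this
# example: the CSI array is N1 x N2 x N3 = 18 x 56 x 25 (N1 = 2*Mt*Mr = 18).
rng = np.random.default_rng()
csi_array = rng.standard_normal((18, 56, 25))      # placeholder measurements
selected = rng.choice(25, size=15, replace=False)  # randomly pick 15 of 25
csi_sub_array = csi_array[:, :, selected]          # stacked subset: 18 x 56 x 15
```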

A first neural network (first NN, similar to neural network 400A) is trained for generating images from CSI samples. A number of a first plurality of convolutional layers (similar to first plurality of convolutional layers 404) is equal to six, that is, L1=6. Numbers of plurality of filters in different convolutional layers are 18, 64, 128, 256, 512, and 512, respectively. A size of each of first plurality of feature maps (similar to first plurality of feature maps 406) is 5×1. As a result, a size of a first feature vector (similar to first feature vector 426) is equal to 512×5=2560. A condition vector (similar to condition vector 430) is augmented with the first feature vector. A size of the condition vector is equal to three, that is, a number of the cameras or video capturing devices. As a result, a size of a second feature vector (similar to second feature vector 429) is equal to 2563. A size of a third feature vector (similar to third feature vector 434) is equal to 972, from which a first latent feature map (similar to first latent feature map 435) of size 36×27 is generated. A second latent feature map (similar to second latent feature map 438) of size 38×29 is generated by a padding process (similar to padding process 440). A number of a second plurality of convolutional layers (similar to second plurality of convolutional layers 444) is equal to three, that is, L2=3, with 32, 64, and 128 filters. A size of each of fourth plurality of feature maps (similar to fourth plurality of feature maps 442) is equal to 8×6. A size of each of second plurality of feature maps (similar to second plurality of feature maps 432) is equal to 10×8. A number of a plurality of ResNet blocks (similar to plurality of ResNet blocks 450) is equal to three, each with 128 filters. A size of each of a third plurality of feature maps (similar to third plurality of feature maps 448) is equal to 8×6. A number of a third plurality of convolutional layers (similar to third plurality of convolutional layers 456) is equal to seven, that is, L3=7, with 128, 64, 32, 16, 8, 4, and 2 filters. An output of the third plurality of convolutional layers is cropped to a size of 320×240.

A second neural network (second NN, similar to neural network 400B, that is, configurations of elements in both networks are the same) is also trained for generating images from CSI samples. A fourth plurality of convolutional layers (similar to fourth plurality of convolutional layers 464) is similar to the first plurality of convolutional layers. An output of the fourth plurality of convolutional layers is fed to three sub-networks (similar to plurality of sub-networks 466). A jth sub-network (similar to jth sub-network 468) generates data of a jth training image (similar to jth training image 470). A structure of each sub-network is similar to a remaining part of the first NN, that is, the first NN except for the first plurality of convolutional layers.

Performances of the first NN and the second NN are evaluated by defining two metrics. A subject is approximated in generated frames (similar to each of plurality of images 208) and in target frames (similar to plurality of video frames 302) by its bounding box (BB), denoted by BBg and BBt, respectively. A first metric is referred to as subject overlap (SO), defined by the following.

$$\mathrm{SO} = \frac{A\left(\mathrm{overlap}\left(BB_g, BB_t\right)\right)}{\min\left\{A\left(BB_g\right),\, A\left(BB_t\right)\right\}} \qquad \text{Equation (4)}$$

where A(BB) returns the number of pixels in BB. A second metric is referred to as subject size (SS), defined by the following:

$$\mathrm{SS} = \min\left\{\frac{A\left(BB_g\right)}{A\left(BB_t\right)},\, \frac{A\left(BB_t\right)}{A\left(BB_g\right)}\right\} \qquad \text{Equation (5)}$$
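
A small sketch of both metrics for axis-aligned bounding boxes given as (x1, y1, x2, y2) pixel coordinates; the boxes are assumed non-degenerate so the denominators are non-zero.

```python
def subject_metrics(bb_g, bb_t):
    """Compute subject overlap (Equation 4) and subject size (Equation 5)
    for generated and target bounding boxes BBg and BBt."""
    def area(bb):  # A(BB): number of pixels in the box
        return max(0, bb[2] - bb[0]) * max(0, bb[3] - bb[1])

    # pixel area of the overlap rectangle overlap(BBg, BBt)
    ox1, oy1 = max(bb_g[0], bb_t[0]), max(bb_g[1], bb_t[1])
    ox2, oy2 = min(bb_g[2], bb_t[2]), min(bb_g[3], bb_t[3])
    overlap = area((ox1, oy1, ox2, oy2))

    so = overlap / min(area(bb_g), area(bb_t))                  # Equation (4)
    ss = min(area(bb_g) / area(bb_t), area(bb_t) / area(bb_g))  # Equation (5)
    return so, ss
```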

Both metrics are always in a range of [0, 1] and are equal to one for a complete match. SO and SS quantify, respectively, displacement and size mismatch of a subject in the generated frame, capturing position error of the subject in width and height. Average and standard deviation values of SO and SS for both the first NN and the second NN are listed in Table 1.

TABLE 1
Cross-validation metrics for the first NN and the second NN

                 View 1          View 2          View 3
SO × 100
  First NN       81.43 ± 2.41    92.77 ± 1.51    87.69 ± 2.72
  Second NN      83.89 ± 2.16    92.41 ± 1.44    82.91 ± 13.86
SS × 100
  First NN       76.50 ± 1.72    86.47 ± 1.55    81.86 ± 2.39
  Second NN      79.67 ± 1.78    85.70 ± 1.52    68.99 ± 26.02

All average metric values in Table 1 are higher than 0.5, indicating an acceptable performance of both the first NN and the second NN. In addition, the first NN slightly outperforms the second NN in two out of three camera views. This happens because the second NN is larger than the first NN; as a result, the second NN requires a larger dataset to achieve the same performance as that of the first NN.

While the foregoing has described what may be considered to be the best mode and/or other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings.

Unless otherwise stated, all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. They are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain.

The scope of protection is limited solely by the claims that now follow. That scope is intended and should be interpreted to be as broad as is consistent with the ordinary meaning of the language that is used in the claims when interpreted in light of this specification and the prosecution history that follows and to encompass all structural and functional equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirement of Sections 101, 102, or 103 of the Patent Act, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed.

Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims.

It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.

The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various implementations. This is for purposes of streamlining the disclosure, and is not to be interpreted as reflecting an intention that the claimed implementations require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed implementation. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.

While various implementations have been described, the description is intended to be exemplary, rather than limiting, and it will be apparent to those of ordinary skill in the art that many more implementations are possible within the scope of the implementations. Although many possible combinations of features are shown in the accompanying figures and discussed in this detailed description, many other combinations of the disclosed features are possible. Any feature of any implementation may be used in combination with or substituted for any other feature or element in any other implementation unless specifically restricted. Therefore, it will be understood that any of the features shown and/or discussed in the present disclosure may be implemented together in any suitable combination. Accordingly, the implementations are not to be restricted except in light of the attached claims and their equivalents. Also, various modifications and changes may be made within the scope of the attached claims.

Claims

1. A method for transferring information between different frequency bands by generating an image from channel state information (CSI) samples in a communication system, the method comprising:

capturing, utilizing one or more processors, video data associated with a plurality of video frames by capturing video data associated with an ith set of video frames of the plurality of video frames where i≥1, video data associated with each video frame in the ith set of video frames extracted from a respective video device in a set of video devices;
obtaining, utilizing the communication system, a first plurality of CSI samples by obtaining an ith subset of the first plurality of CSI samples simultaneously with capturing video data associated with the ith set of video frames;
obtaining, utilizing the one or more processors, a mapping model by training a neural network, comprising: initializing the neural network with a plurality of initial weights; and repeating an iterative process until a termination condition is satisfied, the iterative process comprising: extracting a jth training image of a plurality of training images from an output of the neural network by applying the neural network on the ith subset of the first plurality of CSI samples, comprising: obtaining a first plurality of feature maps by applying a first plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples; obtaining a first feature vector by flattening the first plurality of feature maps; obtaining a second feature vector by augmenting the first feature vector with a condition vector comprising zero valued entries except for a jth entry associated with a jth video device of the plurality of video devices where 1≤j≤N1 and N1 is a number of the plurality of video devices; obtaining a second plurality of feature maps by feeding the second feature vector to an input of a fully connected layer of the neural network; obtaining a third plurality of feature maps by applying a residual neural network (ResNet) of the neural network on the second plurality of feature maps; and upsampling the third plurality of feature maps; generating a plurality of updated weights by minimizing a loss function of the at least one of the plurality of training images and the ith set of video frames; and replacing the plurality of updated weights with the plurality of initial weights;
obtaining, utilizing the communication system, a second plurality of CSI samples; and
obtaining, utilizing the one or more processors, the image by applying the mapping model on the second plurality of CSI samples.

2. A method for transferring information between different frequency bands by generating at least one of a plurality of data samples in a first frequency band from measurements in a second frequency band, the method comprising:

obtaining, utilizing a plurality of measuring systems, a first plurality of samples by measuring a plurality of frequency responses of an environment in the first frequency band;
obtaining, utilizing a set of measuring systems, a second plurality of samples by measuring a first set of frequency responses of the environment in the second frequency band;
obtaining, utilizing one or more processors, a mapping model based on the first plurality of samples and the second plurality of samples;
obtaining, utilizing the set of measuring systems, a third plurality of samples by measuring a second set of frequency responses of the environment in the second frequency band; and
obtaining, utilizing the one or more processors, the at least one of the plurality of data samples by applying the mapping model on the third plurality of samples.

3. The method of claim 2, wherein:

measuring the plurality of frequency responses comprises capturing, utilizing the one or more processors, video data associated with a plurality of video frames by capturing video data associated with an ith set of video frames of the plurality of video frames in an ith time interval where i≥1, video data associated with each video frame in the ith set of video frames extracted from a respective video device of a plurality of video devices;
measuring the first set of frequency responses comprises obtaining, utilizing a communication system, a first plurality of channel state information (CSI) samples by obtaining an ith subset of the first plurality of CSI samples in the ith time interval;
measuring the second set of frequency responses comprises obtaining a second plurality of CSI samples; and
obtaining the at least one of the plurality of data samples comprises obtaining at least one of a plurality of images.

4. The method of claim 3, wherein obtaining the ith subset of the first plurality of CSI samples comprises:

transmitting a plurality of transmit signals by a transmitter of the communication system;
receiving a plurality of receive signals associated with the plurality of transmit signals by a receiver of the communication system; and
generating the ith subset of the first plurality of CSI samples by: obtaining a plurality of raw CSI samples based on the plurality of receive signals; and extracting the ith subset of the first plurality of CSI samples from the plurality of raw CSI samples by compensating a phase offset of each of the plurality of raw CSI samples.

5. The method of claim 4, wherein:

transmitting the plurality of transmit signals comprises transmitting each of the plurality of transmit signals in a respective sub-carrier of a plurality of sub-carriers and at a respective transmit moment of a plurality of transmit moments in the ith time interval by a respective transmit antenna of a plurality of transmit antennas associated with the transmitter;
receiving the plurality of receive signals comprises receiving each of the plurality of receive signals in a respective sub-carrier of the plurality of sub-carriers and at a respective receive moment of a plurality of receive moments in the ith time interval by a respective receive antenna of a plurality of receive antennas associated with the receiver; and
obtaining the plurality of raw CSI samples comprises obtaining a CSI array of size N1×N2×N3 by: generating each of a plurality of CSI vectors of size (N1/2)×1 by estimating a respective multiple-input multiple-output (MIMO) channel of a plurality of MIMO channels by processing a respective subset of the plurality of receive signals, each of the plurality of MIMO channels comprising a wireless channel between the transmitter and the receiver in a respective sub-carrier of the plurality of sub-carriers and at a respective receive moment of the plurality of receive moments; and generating the CSI array by setting each element in the CSI array to one of a real part or an imaginary part of a respective element in a respective CSI vector of the plurality of CSI vectors, wherein: N1=2MtMr, Mt is a number of the plurality of transmit antennas, Mr is a number of the plurality of receive antennas, N2 is a number of the plurality of sub-carriers, and N3 is a number of the plurality of receive moments.

6. The method of claim 5, wherein obtaining the plurality of raw CSI samples further comprises extracting a CSI sub-array of size N1×N2×N4 from the CSI array where 1≤N4<N3 by:

randomly selecting N4 sub-arrays of size N1×N2 out of N3 sub-arrays of size N1×N2 in the CSI array; and
generating the CSI sub-array by stacking N4 sub-arrays.

7. The method of claim 3, wherein obtaining the mapping model comprises training a neural network by:

initializing the neural network with a plurality of initial weights; and
repeating an iterative process until a termination condition is satisfied, the iterative process comprising: extracting at least one of a plurality of training images from an output of the neural network by applying the neural network on the ith subset of the first plurality of CSI samples; generating a plurality of updated weights by minimizing a loss function of the at least one of the plurality of training images and the ith set of video frames; and replacing the plurality of updated weights with the plurality of initial weights.

8. The method of claim 7, wherein applying the neural network on the ith subset of the first plurality of CSI samples comprises:

obtaining a first plurality of feature maps by applying a first plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples;
obtaining a first feature vector by flattening the first plurality of feature maps;
obtaining a second feature vector by augmenting the first feature vector with a condition vector comprising zero valued entries except for a jth entry comprising a non-zero value and associated with a jth video device of the plurality of video devices where 1≤j≤N1 and N1 is a number of the plurality of video devices;
obtaining a second plurality of feature maps by feeding the second feature vector to an input of a first fully connected layer of the neural network;
obtaining a third plurality of feature maps by applying a first ResNet of the neural network on the second plurality of feature maps; and
upsampling the third plurality of feature maps.

9. The method of claim 8, wherein obtaining the second plurality of feature maps further comprises:

extracting a third feature vector from an output of the first fully connected layer;
generating a first latent feature map by generating a matrix from a plurality of elements in the third feature vector;
obtaining a second latent feature map by applying a padding process on the first latent feature map;
obtaining a fourth plurality of feature maps by applying a second plurality of convolutional layers of the neural network on the second latent feature map by extracting the fourth plurality of feature maps from an output of a (2, L2)th convolutional layer of the second plurality of convolutional layers where L2 is a number of the second plurality of convolutional layers, extracting the fourth plurality of feature maps comprising obtaining a (2, l2+1)th plurality of feature maps where 1≤l2≤L2 by: generating a (2, l2)th plurality of filtered feature maps by applying a (2, l2)th plurality of filters on a (2, l2)th plurality of feature maps, a (2, 1)st plurality of feature maps comprising the second latent feature map; generating a (2, l2)th plurality of normalized feature maps by applying the instance normalization process on the (2, l2)th plurality of filtered feature maps; and generating the (2, l2+1)th plurality of feature maps by implementing a (2, l2)th non-linear activation function on each of the (2, l2)th plurality of normalized feature maps; and
generating the second plurality of feature maps by applying the padding process on each of the fourth plurality of feature maps.

10. The method of claim 8, wherein upsampling the third plurality of feature maps comprises extracting a jth training image of the plurality of training images from an output of a (3, L3)th convolutional layer of a third plurality of convolutional layers of the neural network where L3 is a number of the third plurality of convolutional layers, extracting the jth training image comprising obtaining a (3, l3+1)th plurality of feature maps where 1≤l3≤L3 by:

generating an l3th plurality of upsampled feature maps by implementing an upsampling process on a (3, l3)th plurality of feature maps, a (3, 1)st plurality of feature maps comprising the third plurality of feature maps;
generating a (3, l3)th plurality of filtered feature maps by applying a (3, l3)th plurality of filters on the l3th plurality of upsampled feature maps;
generating a (3, l3)th plurality of normalized feature maps by applying the instance normalization process on the (3, l3)th plurality of filtered feature maps; and
generating the (3, l3+1)th plurality of feature maps by implementing a (3, l3)th non-linear activation function on each of the (3, l3)th plurality of normalized feature maps.

11. The method of claim 7, wherein applying the neural network on the ith subset of the first plurality of CSI samples comprises:

obtaining a fifth plurality of feature maps by applying a fourth plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples; and
applying a plurality of sub-networks of the neural network on the fifth plurality of feature maps by applying a jth sub-network of the plurality of sub-networks on the fifth plurality of feature maps, the jth sub-network associated with a jth video device of the plurality of video devices where 1≤j≤N1 and N1 is a number of the plurality of video devices, applying the jth sub-network comprising: obtaining a sixth plurality of feature maps by feeding the fifth plurality of feature maps to an input of a second fully connected layer of the jth sub-network; obtaining a seventh plurality of feature maps by applying a second ResNet of the jth sub-network on the sixth plurality of feature maps; and upsampling the seventh plurality of feature maps.

12. The method of claim 11, wherein obtaining the sixth plurality of feature maps further comprises:

extracting a fourth feature vector from an output of the second fully connected layer;
generating a third latent feature map by generating a matrix from a plurality of elements in the fourth feature vector;
obtaining a fourth latent feature map by applying a padding process on the third latent feature map;
obtaining an eighth plurality of feature maps by applying a fifth plurality of convolutional layers of the jth sub-network on the fourth latent feature map by extracting the eighth plurality of feature maps from an output of a (5, L5)th convolutional layer of the fifth plurality of convolutional layers where L5 is a number of the fifth plurality of convolutional layers, extracting the eighth plurality of feature maps comprising obtaining a (5, l5+1)th plurality of feature maps where 1≤l5≤L5 by: generating a (5, l5)th plurality of filtered feature maps by applying a (5, l5)th plurality of filters on a (5, l5)th plurality of feature maps, a (5, 1)st plurality of feature maps comprising the fourth latent feature map; generating a (5, l5)th plurality of normalized feature maps by applying the instance normalization process on the (5, l5)th plurality of filtered feature maps; and generating the (5, l5+1)th plurality of feature maps by implementing a (5, l5)th non-linear activation function on each of the (5, l5)th plurality of normalized feature maps; and
generating the sixth plurality of feature maps by applying the padding process on each of the eighth plurality of feature maps.
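The vector-to-map conversion of claim 12 can be illustrated as follows; the 8×8 latent matrix, single-pixel zero padding, and two-layer convolutional stack are assumptions of the sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorToFeatureMaps(nn.Module):
    """Sketch of claim 12: matrix from the feature vector, padding, a conv /
    instance-norm / activation stack, then padding of each output map."""
    def __init__(self, side=8, channels=(1, 16, 32)):  # all sizes assumed
        super().__init__()
        self.side = side
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [
                nn.Conv2d(c_in, c_out, 3),   # (5, l5)th plurality of filters
                nn.InstanceNorm2d(c_out),    # instance normalization process
                nn.ReLU(),                   # (5, l5)th non-linear activation
            ]
        self.convs = nn.Sequential(*layers)  # fifth plurality of conv layers

    def forward(self, vec):                  # fourth feature vector, length side*side
        m = vec.view(-1, 1, self.side, self.side)  # third latent feature map
        m = F.pad(m, (1, 1, 1, 1))                 # fourth latent feature map
        maps = self.convs(m)                       # eighth plurality of feature maps
        return F.pad(maps, (1, 1, 1, 1))           # sixth plurality of feature maps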

13. The method of claim 11, wherein upsampling the seventh plurality of feature maps comprises extracting a jth training image of the plurality of training images from an output of a (6, L6)th convolutional layer of a sixth plurality of convolutional layers of the jth sub-network where L6 is a number of the sixth plurality of convolutional layers, extracting the jth training image comprising obtaining a (6, l6+1)th plurality of feature maps where 1≤l6≤L6 by:

generating an l6th plurality of upsampled feature maps by implementing an upsampling process on a (6, l6)th plurality of feature maps, a (6, 1)st plurality of feature maps comprising the seventh plurality of feature maps;
generating a (6, l6)th plurality of filtered feature maps by applying a (6, l6)th plurality of filters on the l6th plurality of upsampled feature maps;
generating a (6, l6)th plurality of normalized feature maps by applying the instance normalization process on the (6, l6)th plurality of filtered feature maps; and
generating the (6, l6+1)th plurality of feature maps by implementing a (6, l6)th non-linear activation function on each of the (6, l6)th plurality of normalized feature maps.

14. A system for transferring information between different frequency bands by generating at least one of a plurality of images from channel state information (CSI) samples in a communication system, the system comprising:

a memory having processor-readable instructions stored therein; and
one or more processors configured to access the memory and execute the processor-readable instructions, which, when executed by the one or more processors, configure the one or more processors to perform a method, the method comprising:
capturing video data associated with a plurality of video frames by capturing video data associated with an ith set of video frames of the plurality of video frames in an ith time interval where i≥1, video data associated with each video frame in the ith set of video frames extracted from a respective video device of a plurality of video devices;
obtaining, utilizing the communication system, a first plurality of CSI samples by obtaining an ith subset of the first plurality of CSI samples in the ith time interval;
obtaining a mapping model based on the plurality of video frames and the first plurality of CSI samples;
obtaining, utilizing the communication system, a second plurality of CSI samples; and
obtaining the at least one of the plurality of images by applying the mapping model on the second plurality of CSI samples.
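Purely as an outline of the recited system flow, the hypothetical driver below pairs video frames with CSI over successive time intervals, fits a mapping model, and applies it to fresh CSI; capture_frames, sample_csi, and fit are stand-ins for the claimed video devices, communication system, and training step, not names taken from the disclosure.

from typing import Callable

def run_system(
    capture_frames: Callable[[], list],     # one frame per video device (hypothetical)
    sample_csi: Callable[[], object],       # one CSI sample from the comm system
    fit: Callable[[list, list], Callable],  # trains and returns the mapping model
    n_intervals: int,
):
    """Hypothetical outline of the claim 14 flow."""
    frames, csi = [], []
    for _ in range(n_intervals):            # ith time interval, i >= 1
        frames.append(capture_frames())     # ith set of video frames
        csi.append(sample_csi())            # ith subset of the first CSI samples
    model = fit(frames, csi)                # mapping model from the paired data
    return model(sample_csi())              # images from the second CSI samples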

15. The system of claim 14, wherein obtaining the mapping model comprises training a neural network by:

initializing the neural network with a plurality of initial weights; and
repeating an iterative process until a termination condition is satisfied, the iterative process comprising:
extracting at least one of a plurality of training images from an output of the neural network by applying the neural network on the ith subset of the first plurality of CSI samples;
generating a plurality of updated weights by minimizing a loss function of the at least one of the plurality of training images and the ith set of video frames; and
replacing the plurality of initial weights with the plurality of updated weights.
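A minimal training-loop sketch of claim 15, assuming a PyTorch network, an Adam optimizer, and a plateau-or-epoch-cap termination condition (none of which the claim specifies):

import torch

def train_mapping_model(net, loss_fn, csi_batches, frame_batches,
                        lr=1e-3, max_epochs=100, tol=1e-4):
    """Claim 15 sketch: initial weights, then iterate forward pass, loss
    minimization, and weight replacement until termination."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)  # optimizer choice assumed
    prev = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for csi, frames in zip(csi_batches, frame_batches):
            images = net(csi)               # training images from the ith CSI subset
            loss = loss_fn(images, frames)  # loss vs. the ith set of video frames
            opt.zero_grad()
            loss.backward()                 # gradients that minimize the loss
            opt.step()                      # updated weights replace the old ones
            total += loss.item()
        if abs(prev - total) < tol:         # termination condition (assumed: plateau)
            break
        prev = total
    return net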

16. The system of claim 15, wherein applying the neural network on the ith subset of the first plurality of CSI samples comprises:

obtaining a first plurality of feature maps by applying a first plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples;
obtaining a first feature vector by flattening the first plurality of feature maps;
obtaining a second feature vector by augmenting the first feature vector with a condition vector comprising zero valued entries except for a jth entry comprising a non-zero value and associated with a jth video device of the plurality of video devices where 1≤j≤N1 and N1 is a number of the plurality of video devices;
obtaining a second plurality of feature maps by feeding the second feature vector to an input of a first fully connected layer of the neural network;
obtaining a third plurality of feature maps by applying a first ResNet of the neural network on the second plurality of feature maps; and
upsampling the third plurality of feature maps.
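The conditioning step of claim 16 amounts to appending a one-hot vector to the flattened encoder output; the sketch below assumes 0-based device indexing (the claim counts j from 1) and is illustrative only.

import torch

def condition_features(feature_maps: torch.Tensor, j: int, n_devices: int):
    """Claim 16 conditioning: flatten the first feature maps, then append a
    one-hot condition vector selecting the jth video device."""
    first_vec = feature_maps.flatten(1)                  # first feature vector
    cond = torch.zeros(feature_maps.size(0), n_devices)  # zero-valued entries...
    cond[:, j] = 1.0                                     # ...except the jth entry
    return torch.cat([first_vec, cond], dim=1)           # second feature vector

# The augmented vector then feeds the first fully connected layer, e.g.
# torch.nn.Linear(first_vec.shape[1] + n_devices, latent_dim).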

17. The system of claim 16, wherein obtaining the second plurality of feature maps further comprises:

extracting a third feature vector from an output of the first fully connected layer;
generating a first latent feature map by generating a matrix from a plurality of elements in the third feature vector;
obtaining a second latent feature map by applying a padding process on the first latent feature map;
obtaining a fourth plurality of feature maps by applying a second plurality of convolutional layers of the neural network on the second latent feature map by extracting the fourth plurality of feature maps from an output of a (2, L2)th convolutional layer of the second plurality of convolutional layers where L2 is a number of the second plurality of convolutional layers, extracting the fourth plurality of feature maps comprising obtaining a (2, l2+1)th plurality of feature maps where 1≤l2≤L2 by:
generating a (2, l2)th plurality of filtered feature maps by applying a (2, l2)th plurality of filters on a (2, l2)th plurality of feature maps, a (2, 1)st plurality of feature maps comprising the second latent feature map;
generating a (2, l2)th plurality of normalized feature maps by applying the instance normalization process on the (2, l2)th plurality of filtered feature maps; and
generating the (2, l2+1)th plurality of feature maps by implementing a (2, l2)th non-linear activation function on each of the (2, l2)th plurality of normalized feature maps; and
generating the second plurality of feature maps by applying the padding process on each of the fourth plurality of feature maps.

18. The system of claim 16, wherein upsampling the third plurality of feature maps comprises extracting a jth training image of the plurality of training images from an output of a (3, L3)th convolutional layer of a third plurality of convolutional layers of the neural network where L3 is a number of the third plurality of convolutional layers, extracting the jth training image comprising obtaining a (3, l3+1)th plurality of feature maps where 1≤l3≤L3 by:

generating an l3th plurality of upsampled feature maps by implementing an upsampling process on a (3, l3)th plurality of feature maps, a (3, 1)st plurality of feature maps comprising the third plurality of feature maps;
generating a (3, l3)th plurality of filtered feature maps by applying a (3, l3)th plurality of filters on the l3th plurality of upsampled feature maps;
generating a (3, l3)th plurality of normalized feature maps by applying the instance normalization process on the (3, l3)th plurality of filtered feature maps; and
generating the (3, l3+1)th plurality of feature maps by implementing a (3, l3)th non-linear activation function on each of the (3, l3)th plurality of normalized feature maps.

19. The system of claim 15, wherein applying the neural network on the ith subset of the first plurality of CSI samples comprises:

obtaining a fifth plurality of feature maps by applying a fourth plurality of convolutional layers of the neural network on the ith subset of the first plurality of CSI samples; and
applying a plurality of sub-networks of the neural network on the fifth plurality of feature maps by applying a jth sub-network of the plurality of sub-networks on the fifth plurality of feature maps, the jth sub-network associated with a jth video device of the plurality of video devices where 1≤j≤N1 and N1 is a number of the plurality of video devices, applying the jth sub-network comprising:
obtaining a sixth plurality of feature maps by feeding the fifth plurality of feature maps to an input of a second fully connected layer of the jth sub-network;
obtaining a seventh plurality of feature maps by applying a second ResNet of the jth sub-network on the sixth plurality of feature maps; and
upsampling the seventh plurality of feature maps.

20. The system of claim 19, wherein:

obtaining the sixth plurality of feature maps further comprises:
extracting a fourth feature vector from an output of the second fully connected layer;
generating a third latent feature map by generating a matrix from a plurality of elements in the fourth feature vector;
obtaining a fourth latent feature map by applying a padding process on the third latent feature map;
obtaining an eighth plurality of feature maps by applying a fifth plurality of convolutional layers of the jth sub-network on the fourth latent feature map by extracting the eighth plurality of feature maps from an output of a (5, L5)th convolutional layer of the fifth plurality of convolutional layers where L5 is a number of the fifth plurality of convolutional layers, extracting the eighth plurality of feature maps comprising obtaining a (5, l5+1)th plurality of feature maps where 1≤l5≤L5 by:
generating a (5, l5)th plurality of filtered feature maps by applying a (5, l5)th plurality of filters on a (5, l5)th plurality of feature maps, a (5, 1)st plurality of feature maps comprising the fourth latent feature map;
generating a (5, l5)th plurality of normalized feature maps by applying the instance normalization process on the (5, l5)th plurality of filtered feature maps; and
generating the (5, l5+1)th plurality of feature maps by implementing a (5, l5)th non-linear activation function on each of the (5, l5)th plurality of normalized feature maps; and
generating the sixth plurality of feature maps by applying the padding process on each of the eighth plurality of feature maps; and
upsampling the seventh plurality of feature maps comprises extracting a jth training image of the plurality of training images from an output of a (6, L6)th convolutional layer of a sixth plurality of convolutional layers of the jth sub-network where L6 is a number of the sixth plurality of convolutional layers, extracting the jth training image comprising obtaining a (6, l6+1)th plurality of feature maps where 1≤l6≤L6 by:
generating an l6th plurality of upsampled feature maps by implementing an upsampling process on a (6, l6)th plurality of feature maps, a (6, 1)st plurality of feature maps comprising the seventh plurality of feature maps;
generating a (6, l6)th plurality of filtered feature maps by applying a (6, l6)th plurality of filters on the l6th plurality of upsampled feature maps;
generating a (6, l6)th plurality of normalized feature maps by applying the instance normalization process on the (6, l6)th plurality of filtered feature maps; and
generating the (6, l6+1)th plurality of feature maps by implementing a (6, l6)th non-linear activation function on each of the (6, l6)th plurality of normalized feature maps.
Patent History
Publication number: 20220083798
Type: Application
Filed: Oct 25, 2021
Publication Date: Mar 17, 2022
Applicants: (Karaj), (Tehran), Amirkabir University of Technology (Tehran)
Inventors: Mohammad Hadi Kefayati (Karaj), Vahid Pourahmadi (Tehran)
Application Number: 17/510,321
Classifications
International Classification: G06K 9/00 (20060101); H04B 7/0456 (20060101); G06T 3/40 (20060101); H04L 25/02 (20060101); G06N 3/08 (20060101); G06N 3/04 (20060101);