Subspace-constrained partial update method for high-dimensional adaptive processing systems

A method is explained for any adaptive processor that processes digital signals by adjusting signal weights on the digital signal(s) it handles, in order to optimize adaptation criteria responsive to a functional purpose or the externalities (transient, temporary, situational, or even permanent) of that processor. The adaptation criteria for the adaptive algorithm may be any combination of a signal or parameter estimate and one or more measured qualities. The method performs a linear transformation of the adapted parameters from M dimensions to (M1+L) dimensions in each adaptation event, such that M1 weights are updated without constraints and M0=M−M1 weights are forced by soft constraints into the L-dimensional subspace they spanned at the beginning of the adaptation period. The same dimensionality reduction, using the same linear transformation, is applied to the input data. The reduced-dimensionality weights are then adapted using the same optimization strategy employed by the processor, except with input data that has also been reduced in dimensionality.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application for patent claims priority under 35 U.S.C. § 119(e), in particular § 119(e)(3), from a U.S. provisional application for the invention described therein by the same inventor, which was filed on Nov. 3, 2013 by Express Mail Certificate, Post Office to U.S. Patent and Trademark Office, EM Certificate # EQ 338677837 US; said provisional patent application was titled “Subspace-constrained Partial Update Method for High-Dimensional Adaptive Processing Systems,” named the same inventor, Brian G. Agee of San Jose, Calif., and was given application Ser. No. 61/962,269 with that filing date, title, and inventor; and this application and Specification expressly reference that original provisional application and incorporate all of that original provisional application's specification and drawings.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not Applicable

FIELD OF THE INVENTION

The present invention relates generally to digital signal processing, with particular emphasis on devices employing one or more adaptive processors with large numbers of adaptation weights (also known as high-dimensional, or highly adaptive, signal processors). Although referred to generally as “an adaptive signal processor” or as “an adaptive digital signal processor” (‘digital’ being generally understood to refer to the nature of the processing used by the computational aspect, while the signals are generally understood to be analog electromagnetic waveforms), this phrase covers any singleton or combination device (i.e., whether the ‘processor’ comprises a single element, or a non-zero set of interacting elements), and whether the device's digital processing aspect is entirely embodied in physical hardware, or in a combined form of hardware, special-purpose firmware, and general-purpose processing software. The term ‘adaptive’ refers to processing that adjusts signal weights on the physical signal(s) transmitted, received, or both, by or through said adaptive processor, in order to optimize an adaptation criterion responsive to a functional purpose or the externalities (transient, temporary, situational, and even permanent) for that processor. Each adaptation criterion for the adaptive algorithm may be any of a signal or parameter estimate, a measured quality, or any combination thereof.

BACKGROUND OF THE INVENTION

Highly dimensional adaptive processors (devices employing adaptive processors with large numbers of adaptation weights or parameters) are of interest for a wide variety of applications. These applications include:

    • Acoustic echo cancellers, where adaptive noise cancellers employing finite impulse response (FIR) filters with as many as 2,000 adaptively adjusted filter taps are used to remove echoes induced in long-haul telephony networks.
    • Phased array and MIMO radar systems, where large arrays of antennas (10-1,000 elements/array) are used to electronically steer beams at detected targets and nulls at jammers and clutter sources, by combining signals received by the array and distributing signals to be transmitted by the array using large linear matrix operations.
    • Digital predistortion (DPD) processors, where nonlinear adaptive processors with large numbers of parameters (e.g., Volterra-series approximations of nonlinear processes) are used to adaptively learn, and digitally invert nonlinear effects added by high-power amplifiers.
    • Smart Grid networks employing spread spectrum modulation formats with large spreading factors and adaptive despreading methods to separate large numbers of co-channel signals, and to detect and remove spoofers from the networks.
    • Massive MIMO cellular networks employing base stations with very large antenna arrays.

To effect adaptive signal processing in these applications, practical means for adjusting large numbers of weights must be developed and implemented. Techniques that have been developed in the past to accomplish this include nonblind techniques that exploit a known reference signal (e.g., a training or pilot signal inserted into a signal transmitted to the adaptive processor); “partially blind” techniques that exploit a known reference signal with unknown effects added by the communication channel, e.g., delay caused by clock timing offset and physical distance between the transmitter and receiver, and carrier offset caused by LO offset and Doppler shift between the transmitter and receiver; and fully blind methods that only exploit general structure of the transmitted signal. In many systems, a reference signal can only be made available on a sparse basis, e.g., at the beginning of signal reception, after which the processor must operate using fixed weights without additional training between reference signal reception intervals.

These techniques can also be subdivided into methods with “order-M” (O(M)) or linear complexity, where the number of real multiply-and-accumulate (RMAC) operations per input data sample needed to adapt the processor is on the order of the number of weights M being adjusted by the processor, and methods with higher-order (e.g., O(Mν), where ν>1) complexity, where the RMACs per data sample needed to adapt the processor rise much faster than the number of weights being adjusted by the processor. Typically, the most powerful and effective adaptive processing methods have complexity of high order. This presents significant challenges in applications where the number of adaptation weights M is very large.
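As a rough, hedged illustration of this scaling difference, consider the 2,000-tap echo canceller cited above and ignore constant factors:

$$ \underbrace{M}_{O(M)\ \text{adapt path}} \approx 2\times 10^{3}\ \text{RMACs per sample} \qquad \text{versus} \qquad \underbrace{M^{2}}_{O(M^{2})\ \text{adapt path}} \approx 4\times 10^{6}\ \text{RMACs per sample}, $$

a factor-of-M (here roughly 2,000-fold) difference in adapt-path cost for every input data sample.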

Lastly, these techniques can be subdivided into sample-processing methods, where the processor weights are adapted every time a new input data sample is provided to the processor, and block-processing methods, where a block of input data is received and used to adapt the processor. In some cases, the algorithm may circulate through the data block multiple times before moving on to the next processing block. Again, the more powerful and effective adaptive processing methods employ block processing, typically with a block size N that is (in many cases, must be) a large multiple of M. However, the cost of this processing is a reduced update rate; a slower response to changes in channel effects affecting the adaptive processor; and (e.g., for multiple passes through the data block) an additional increase in complexity.

It should also be noted that the operations referred to above are the “adapt-path” operations used to train the adaptive processor, not the “data-path” operations used to implement the adaptive processor during and after training. Adapt-path operations are used to tune the adaptive processor used to process a set of signals, while data-path operations are used to process a set of signals during and after tuning. For most of the applications described above (the DPD application being a notable exception), the data-path operations have O(M) complexity, regardless of the complexity of the adapt path.

To address the adapt-path complexity issue in particular, the concept of a partial update (PU) method (PUM; in the plural, PUMs) that only updates a subset of M1 weights during each adaptation block or sample (referred to hereafter as a block with size N=1) has been proposed for a number of applications. All PUMs developed to date can be interpreted as linearly-constrained optimization techniques, in which the original method is adjusted by applying a hard linear constraint that forces M0=M−M1 weights to remain at the same value between adaptation blocks or samples. The subset of weights actually adapted during each data block, or during each of several passes through a data block, are changed during each adaptation event, so that every weight is updated over the course of multiple adaptation events.

This approach has substantive limitations in practice. First, the linear constraint, by its nature, can induce severe misadjustment from the optimal solution sought by the processor. This can manifest as a convergent or steady-state bias from the optimal solution, a “jitter” or fluctuation about that steady-state solution, or both. In some applications, e.g., phased array radar applications where the received radar waveform must be extracted from strong clutter and jamming, this can cause the system to fail entirely (studies of PUMs showing “convergence-in-mean” to optimal solutions are almost always conducted under assumptions of little-or-no noise and removable multipath distortion). Even if the processor's signal of interest is received at high signal-to-interference-and-noise ratio (SINR), this can lead to well-known “hypersensitivity” issues that degrade system performance from the optimal solution.

Second, the linear optimization constraint can only be easily added to a small subset of O(M²) optimization functions, e.g., “least-squares (LS)” or LS-like methods that can be formulated as a quadratic optimization problem, or O(M) “least-mean-squares (LMS)” or LMS-like methods that are either intended to approximate LS optimization algorithms (e.g., by replacing gradients with “stochastic gradient” approximations), or that can themselves be formulated as linearly constrained quadratic optimization problems (e.g., “normalized LMS (NLMS)” and “Affine Projections” algorithms). In many cases, adherence to the constraint significantly increases the complexity of the original method, and approximations, e.g., using Lagrange multipliers in which the multiplier itself is added to the algorithm, only increase the misadjustment of the algorithm.

In summary, the PUMs developed to date can only be used with a small number of O(M²) methods, and cannot be used with any O(Mν) methods where ν>2. This is particularly unfortunate, because the PUM should have its strongest utility with these classes of methods. This is especially evident when the complexity of the data-path processing, which as noted above is typically O(M), is added to the adapt-path processing: at best, for O(M) adapt-path methods, the PUM will only reduce overall complexity by 50%. This is the background in which the present invention takes form.

SUMMARY OF THE INVENTION

The present invention is a method for implementing partial-update methods (PUMs) in any adaptive processor that adjusts weights to optimize an adaptation criterion in a signal estimation or parameter estimation algorithm. In the preferred embodiment, the method does this by performing a linear transformation of the processor parameters being adapted from M dimensions to (M1+L) dimensions in each adaptation event, where M1 weights are updated without constraints, and M0=M−M1 weights are subjected to L soft constraints that force them into an L-dimensional subspace spanned by those weights (preferentially, such that those weights remain a scaled replica of the original weights) at the beginning of the adaptation event, and where M1 and L are much smaller than M (M1<<M and L<<M). Preferentially, L is equal to unity (L=1), i.e., the M0 constrained weights are forced into a single-dimensional subspace spanned by those weights.

The same dimensionality reduction is also applied to the input data, using the same linear transformation. The reduced-dimensionality weights are then adapted using exactly the same optimization strategy employed by the adaptive processor, except with input data that has also been reduced in dimensionality.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated in the attached drawings as described herein.

FIG. 1 is a view of an optimization approach used in a system employing a prior-art nonblind single-port unconstrained adaptation algorithm.

FIG. 2 is a view of an optimization approach used in a system employing a prior-art nonblind single-port partial-update (PU) adaptation algorithm.

FIG. 3 is a view of an optimization approach used in a system employing a nonblind single-port subspace-constrained partial update (SCPU) adaptation algorithm.

FIG. 4 is a view of a nonblind single-port SCPU adapt-path weight update procedure, depicting use of projection matrices (implemented using simple multiplexing and demultiplexing operations) to separate data and weights into unconstrained and subspace-constrained components, allowing use of an unconstrained single-port weight adaptation algorithm of arbitrary type and structure after the subspace separation procedure.

FIG. 5 is a view of a nonblind uncoupled multiport SCPU adapt-path weight update procedure, depicting use of projection matrices (implemented using simple multiplexing and demultiplexing operations) to separate data and weights into unconstrained and subspace-constrained components, allowing use of parallel banks of reduced-complexity unconstrained single-port weight adaptation algorithms of arbitrary type and structure after the subspace separation procedure.

FIG. 6 is a view of a nonblind fully-coupled multiport SCPU adapt-path weight update procedure, depicting use of projection matrices (implemented using simple multiplexing and demultiplexing operations) to separate data and weights into unconstrained and subspace-constrained components, allowing use of an unconstrained multiport weight adaptation algorithm of arbitrary type and structure after the subspace separation procedure.

DETAILED DESCRIPTION OF THE DRAWINGS

FIG. 1 is a view of an optimization approach used in a system employing a prior-art nonblind single-port unconstrained adaptation algorithm. On a first path, a vector processor [1] provides a sequence of data vectors x(nsym)=[x1(nsym) . . . xM(nsym)]T, each data vector having dimension M×1, where nsym is a symbol index, and where M is a real positive integer, referred to here as the degrees-of-freedom (DoF's) of the system, and (⋅)T denotes the matrix transpose operation. As part of a data-path processing procedure, the data vector sequence x(nsym) is then passed through a linear combiner [3] that performs a matrix multiplication of x(nsym) by a weight vector w=[w1 . . . wM]T having dimension M×1, resulting in an output data scalar y(nsym)=xT(nsym)w having dimension 1×1.

As part of an adapt-path processing procedure that is a focus of the invention, the data vector sequence x(nsym) is also passed into a bank of M 1:N serial-to-parallel (S/P) convertors [2] that converts the vector data sequence into a sequence of data matrices X(n)=[x(nN+1) . . . x(nN+N)]T, each data matrix having dimension N×M, where N is a real-positive integer, referred to here as the block length of the adaptation algorithm, and n is an adapt block index.

On a second path, and also as part of an adapt-path processing procedure, a sequence of reference scalars s(nsym) is provided by a reference generator [4], each reference scalar having dimension 1×1. In the nonblind adaptation algorithm shown in FIG. 1, the reference scalars s(nsym) are known at the receiver, and are correlated with some component of the data vector x(nsym) in some known manner; however, in other system implementations, the reference scalars may be members of a set of possible known received signal components, or may be derived from the output data vector in some manner.

The reference scalars are then passed into a single 1:N serial-to-parallel (S/P) convertor [5] that converts the scalar symbol sequence into a sequence of reference vectors s(n), each reference vector having dimension N×1. The reference vector s(n) is then compared with the data matrix X(n) (from the bank of M 1:N serial-to-parallel converters [2]) over each adapt block, and used to generate a weight vector w using an unconstrained adaptation algorithm [6] that adjusts every element of w to optimize a metric of similarity between the output data vector y(n)=X(n)w and the reference vector s(n), e.g., the sum-of-squares error metric F(w;n)=∥s(n)−X(n)w∥22, where ∥⋅∥2 denotes the L2 vector norm. The weights are then passed to the data-path processor [3], where they are used to process the input data vectors on a symbol-by-symbol basis.

It should be noted that the data matrices and reference vectors do not need to be contiguous, internally or between adapt blocks on the adapt-paths. However, the input data matrices and reference vectors should have internally consistent symbol indices.
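The following is a minimal NumPy sketch of this unconstrained block adaptation, assuming the sum-of-squares metric named above and a block length N≥M; the function and variable names are illustrative and do not appear in the figure.

```python
# Hedged sketch of the FIG. 1 adapt path: every element of w is adjusted to
# minimize ||s(n) - X(n) w||^2 over one adapt block (a plain least-squares solve).
import numpy as np

def unconstrained_block_update(X, s):
    """X: N x M complex data matrix for one adapt block; s: N x 1 reference vector."""
    w, *_ = np.linalg.lstsq(X, s, rcond=None)   # argmin_w ||s - X w||_2^2
    return w

# Toy usage with a synthetic complex block (illustrative values only).
rng = np.random.default_rng(0)
N, M = 256, 16
X = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
s = X @ (rng.standard_normal(M) + 1j * rng.standard_normal(M))
w = unconstrained_block_update(X, s)
```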

FIG. 2 is a view of an optimization approach used in a system employing a prior-art nonblind single-port partial-update (PU) adaptation algorithm. On a first path, a vector processor [1] provides a sequence of data vectors x(nsym)=[x1(nsym) . . . xM(nsym)]T, each data vector having dimension M×1, where nsym is a symbol index, and where M is a real positive integer, referred to here as the degrees-of-freedom (DoF's) of the system, and (⋅)T denotes the matrix transpose operation. As part of a data-path processing procedure, the data vector sequence x(nsym) is then passed through a linear combiner [3] that performs a matrix multiplication of x(nsym) by a weight vector w=[w1 . . . wM]T having dimension M×1, resulting in an output data scalar y(nsym)=xT(nsym)w having dimension 1×1.

As part of an adapt-path processing procedure that is a focus of the invention, the data vector sequence x(nsym) is also passed into a bank of M 1:N serial-to-parallel (S/P) convertors [2] that converts the vector data sequence into a sequence of data matrices X(n)=[x(nN+1) . . . x(nN+N)]T, each data matrix having dimension N×M, where N is a real-positive integer, referred to here as the block length of the adaptation algorithm, and n is an adapt block index.

On a second path, and also as part of an adapt-path processing procedure, a sequence of reference scalars s(nsym) is provided by a reference generator [4], each reference scalar having dimension 1×1. In the nonblind adaptation algorithm shown in FIG. 2, the reference scalars s(nsym) are known at the receiver, and are correlated with some component of the data vector x(nsym) in some known manner; however, in other system implementations, the reference scalars may be members of a set of possible known received signal components, or may be derived from the output data vector in some manner.

The reference scalars are then passed into a single 1:N serial-to-parallel (S/P) convertor [5] that converts the scalar symbol sequence into a sequence of reference vectors s(n), each reference vector having dimension N×1.

On a third path, and also as part of an adapt-path processing procedure, an update-set selection algorithm [7] is used to generate a sequence of M1-element update-sets ℳ1(n)={m(1;n), . . . , m(M1;n)}⊂{1, . . . , M} and complementary M0-element held-sets ℳ0(n)={m∈{1, . . . , M}: m∉ℳ1(n)} over each adapt block, such that M0=M−M1, ℳ0(n)∪ℳ1(n)={1, . . . , M}, and ℳ0(n)∩ℳ1(n)={ } within adapt block n. The set selection strategy can be adjusted using deterministic, random, pseudo-random, or data-derived methods. In the partial-update optimization approach shown in FIG. 2, the update-set and held-set {ℳ1(n),ℳ0(n)} are further used to generate update-set and held-set projection matrices [9] {M1(n),M0(n)}, where Ml(n)=[eM(ml)]ml∈ℳl(n) for l=0, 1, and where eM(ml)=[δ(m−ml)]m=1M is the mlth M×1 Euclidean basis vector and δ(k) is the Kronecker delta function.

The reference vector s(n) is then compared with the data matrix X(n) over each adapt block, and used to generate a weight vector w using a hard-constrained adaptation algorithm [8] that adjusts only the elements of w in the update-set, i.e., (w)m for m∈ℳ1(n), to optimize a metric of similarity between the output data vector y(n)=X(n)w and the reference vector s(n), e.g., the sum-of-squares error metric F(w;n)=∥s(n)−X(n)w∥22, while holding the elements of w in the held-set, i.e., (w)m for m∈ℳ0(n), equal to the same values held by those weight elements over the previous adapt block. This can be expressed as optimization of existing weight vector w to form new weight vector w′, subject to hard linear constraint M0T(n)w′=M0T(n)w. The weights are then passed to the data-path processor [3], where they are used to process the input data vectors on a symbol-by-symbol basis.

It should be noted that the data matrices and reference vectors do not need to be contiguous, internally or between adapt blocks on the adapt-paths. However, the input data matrices and reference vectors should have internally consistent symbol indices.
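As a hedged illustration of the update-set selection algorithm [7] and projection matrices [9] just described, the sketch below builds M1(n) and M0(n) from Euclidean basis vectors and uses a simple round-robin selection strategy; the strategy choice and all names are assumptions for illustration only.

```python
# Hedged sketch: update-set / held-set projection matrices as columns of the
# identity matrix, plus a deterministic round-robin update-set selector.
import numpy as np

def projection_matrices(M, update_idx):
    held_idx = np.setdiff1d(np.arange(M), update_idx)   # complementary held-set indices
    I = np.eye(M)
    return I[:, update_idx], I[:, held_idx]             # M x M1 and M x M0 projection matrices

def round_robin_update_set(M, M1_size, block_index):
    start = (block_index * M1_size) % M                 # every weight is visited over M/M1 blocks
    return (start + np.arange(M1_size)) % M

M1_mat, M0_mat = projection_matrices(8, round_robin_update_set(8, 2, block_index=3))
```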

FIG. 3 is a view of an optimization approach used in a system employing a new nonblind single-port subspace-constrained partial-update (SCPU) adaptation algorithm. On a first path, a vector processor [1] provides a sequence of data vectors x(nsym)=[x1(nsym) . . . xM(nsym)]T, each data vector having dimension M×1, where nsym is a symbol index, and where M is a real positive integer, referred to here as the degrees-of-freedom (DoF's) of the system, and (⋅)T denotes the matrix transpose operation. As part of a data-path processing procedure, the data vector sequence x(nsym) is then passed through a linear combiner [3] that performs a matrix multiplication of x(nsym) by a weight vector w=[w1 . . . wM]T having dimension M×1, resulting in an output data scalar y(nsym)=xT(nsym)w having dimension 1×1.

As part of an adapt-path processing procedure that is a focus of the invention, the data vector sequence x(nsym) is also passed into a bank of M 1:N serial-to-parallel (S/P) convertors [2] that converts the vector data sequence into a sequence of data matrices X(n)=[x(nN+1) . . . x(nN+N)]T, each data matrix having dimension N×M, where N is a real-positive integer, referred to here as the block length of the adaptation algorithm, and n is an adapt block index.

On a second path, a sequence of reference scalars s(nsym) is provided by a reference generator [4], each reference scalar having dimension 1×1. In the nonblind adaptation algorithm shown in FIG. 3, the reference scalars s(nsym) are known at the receiver, and are correlated with some component of the data vector x(nsym) in some known manner; however, in other system implementations, the reference scalars may be members of a set of possible known received signal components, or may be derived from the output data vector in some manner.

The reference scalars are then passed into a single 1:N serial-to-parallel (S/P) convertor [5] that converts the scalar symbol sequence into a sequence of reference vectors s(n), each reference vector having dimension N×1.

On a third path, an update-set selection algorithm [7] is used to generate a sequence of M1-element update-sets ℳ1(n)={m(1;n), . . . , m(M1;n)}⊂{1, . . . , M} and complementary M0-element held-sets ℳ0(n)={m∈{1, . . . , M}: m∉ℳ1(n)} over each adapt block, such that M0=M−M1, ℳ0(n)∪ℳ1(n)={1, . . . , M}, and ℳ0(n)∩ℳ1(n)={ } within adapt block n. The set selection strategy can be adjusted using deterministic, random, pseudo-random, or data-derived methods. In the partial-update optimization approach shown in FIG. 3, the update-set and held-set {ℳ1(n),ℳ0(n)} are further used to generate update-set and held-set projection matrices [9] {M1(n),M0(n)}, where Ml(n)=[eM(ml)]ml∈ℳl(n) for l=0, 1, and where eM(ml)=[δ(m−ml)]m=1M is the mlth M×1 Euclidean basis vector and δ(k) is the Kronecker delta function.

The reference vector s(n) is then compared with the data matrix X(n) over each adapt block, and used to generate a weight vector w using a subspace-constrained adaptation algorithm [10] that adjusts the elements of w in the update-set, i.e., (w)m for m∈ℳ1(n), to optimize a metric of similarity between the output data vector y(n)=X(n)w and the reference vector s(n), e.g., the sum-of-squares error metric F(w;n)=∥s(n)−X(n)w∥22, while optimizing the elements of w in the held-set, i.e., (w)m for m∈ℳ0(n), to a scalar multiple of the values held by those weight elements over the previous adapt block. This can be expressed as optimization of existing weight vector w to form new weight vector w′, subject to subspace constraint M0T(n)w′=g0M0T(n)w, where g0 is an unknown scalar that is also optimized by the algorithm. The weights are then passed to the data-path processor [3], where they are used to process the input data vectors on a symbol-by-symbol basis.

It should be noted that the data matrices and reference vectors do not need to be contiguous, internally or between adapt blocks on the adapt-paths. However, the input data matrices and reference vectors should have internally consistent symbol indices.

FIG. 4 is a view of a nonblind, single-port, SCPU adapt-path weight update method, depicting use of projection matrices (implemented using simple multiplexing and demultiplexing operations) to separate data and weights into unconstrained and subspace-constrained components, allowing use of an unconstrained single-port weight adaptation algorithm of arbitrary type and structure after the subspace separation procedure. Over adapt block n, the N×M data matrix X(n) provided by the 1:N S/P bank [2] (not shown) is separated into an N×M1 dimensional update-set data matrix X1(n)=X(n)M1(n) and an N×M0 dimensional held-set data matrix X0(n)=X(n)M0(n), using a columnar matrix demultiplexer (DMX) [11] and the update-set and held-set projection matrices provided over adapt block n [7].

An M×M0 dimensional held-set projection matrix M0(n) is additionally used to extract the M0×1 dimensional held-set combiner weights w0=M0T(n)w from the M×1 dimensional combiner weights w stored in current memory [12], e.g., computed in prior adapt blocks, using a held-set weight extractor [13]. These held-set combiner weights w0 are used to multiply the held-set data matrix X0(n) from the columnar matrix demultiplexer (DMX) [11] through a linear combiner [14], yielding an N×1 held-set output data vector y0(n)=X0(n)w0. X1(n) and y0(n) are then combined into an N×(M1+1) enhanced data matrix {tilde over (X)}(n)=[X1(n) y0(n)] using a column-wise multiplexing (MUX) operation [15].

The enhanced data matrix {tilde over (X)}(n) is then input to an unconstrained weight adaptation algorithm [16] that adjusts every element of an (M1+1)×1 enhanced combiner vector

$$ \tilde{w} = \begin{pmatrix} w_{1} \\ g_{0} \end{pmatrix} $$
to optimize a metric of similarity between an N×1 reference vector s(n) provided by a reference generator [4] (not shown) and an N×1 output data vector y(n)={tilde over (X)}(n){tilde over (w)} that would be provided by an (M1+1)-element linear combining operation (not shown). The unconstrained weight adaptation algorithm [16] optimizes the same metric as the unconstrained weight adaptation algorithm [6] depicted in prior art FIG. 1, e.g., the sum-of-squares error metric F(w;n)=∥s(n)−{tilde over (X)}(n){tilde over (w)}∥22. However, the complexity of the unconstrained weight adaptation algorithm is O((M1+1)ν) in [16], rather than O(Mν) in [6], where ν is the complexity order of the algorithm, e.g., ν=2 if a sum-of-squares metric is used in both Figures.

The updated (M1+1)×1 enhanced combiner vector {tilde over (w)} is then demultiplexed (DMX'd) [17] into an updated M1×1 update-set weight vector w1 comprising the first M1 elements of {tilde over (w)}, and a new held-set scalar multiplier g0 comprising the last element of {tilde over (w)}. The held-set scalar multiplier g0 is then multiplied by the current M0×1 held-set weights w0 [18] to form updated held-set weights w0←w0g0, and multiplexed (MUX'd) [19] with the updated M1×1 update-set weight vector w1, in accordance with the current update-set selection algorithm [7], to form updated M×1 dimensional weight vector w=M1(n)w1+M0(n)w0. This weight vector is then stored in memory [12], allowing its use as an initial combiner weight vector in a subsequent adapt block. The weight vector can also be used in the data-path linear combiner (not shown) for parallel or subsequent data-path processing operations used in the overall system.
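A hedged NumPy sketch of this FIG. 4 update follows, with an ordinary least-squares solve standing in for the arbitrary unconstrained adaptation algorithm [16]; the solver choice, the requirement N≥M1+1, and the function names are assumptions, and the bracketed comments map the steps to the figure elements.

```python
# Hedged sketch of the single-port SCPU adapt-path weight update of FIG. 4.
import numpy as np

def scpu_single_port_update(X, s, w, update_idx):
    """X: N x M block, s: N x 1 reference, w: current M x 1 complex weights."""
    M = X.shape[1]
    held_idx = np.setdiff1d(np.arange(M), update_idx)
    X1, X0 = X[:, update_idx], X[:, held_idx]              # columnar DMX [11]
    w0 = w[held_idx]                                        # held-set weight extractor [13]
    y0 = X0 @ w0                                            # held-set linear combiner [14]
    X_tilde = np.hstack([X1, y0[:, None]])                  # column-wise MUX [15]
    w_tilde, *_ = np.linalg.lstsq(X_tilde, s, rcond=None)   # unconstrained adaptation [16] (LS assumed)
    w1, g0 = w_tilde[:-1], w_tilde[-1]                      # DMX of enhanced weights [17]
    w_new = np.empty_like(w)
    w_new[update_idx] = w1                                  # MUX of updated weights [19]
    w_new[held_idx] = g0 * w0                               # held-set scaling w0 <- g0*w0 [18]
    return w_new
```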

FIG. 5 is a view of a nonblind, multiport, uncoupled SCPU adapt-path weight update method, depicting use of projection matrices (implemented using simple multiplexing and demultiplexing operations) to separate data and weights into unconstrained and subspace-constrained components, allowing use of parallel banks of reduced-complexity unconstrained single-port weight adaptation algorithms of arbitrary type and structure after the subspace separation step. Over adapt block n, the N×M data matrix X(n) provided by the 1:N S/P bank [2] (not shown) is separated into an N×M1 dimensional update-set data matrix X1(n)=X(n)M1(n) and an N×M0 dimensional held-set data matrix X0(n)=X(n)M0(n), using a columnar matrix demultiplexer (DMX) [11] and the update-set and held-set projection matrices provided over adapt block n from the update-set selection algorithm [7]. On each output port p, where p=1, . . . , P and P is the number of weight ports adapted by the overall algorithm, the M×M0 dimensional held-set projection matrix M0(n) is additionally used to extract the M0×1 dimensional port p held-set combiner weights w0(p)=M0T(n)w(p) from the M×1 dimensional port p combiner weight vector w(p) stored in current memory [20], e.g., computed over a prior adapt block, using a held-set weight extractor [13]. The held-set data matrix X0(n) is then multiplied by the port p held-set combiner weights w0(p) from the held-set weight extractor [13] through a linear combiner [14], yielding an N×1 port p held-set output data vector y0(n;p)=X0(n)w0(p) for each port p. X1(n) and y0(n;p) are then combined into an N×(M1+1) enhanced port p data matrix {tilde over (X)}(n;p)=[X1(n) y0(n;p)] using a column-wise multiplexing (MUX) operation [15]. The port p enhanced data matrix is then input to an unconstrained weight adaptation algorithm [21] that adjusts every element of an (M1+1)×1 port p enhanced combiner vector

$$ \tilde{w}(p) = \begin{pmatrix} w_{1}(p) \\ g_{0}(p) \end{pmatrix} $$
to optimize a metric of similarity between an N×1 port p reference vector s(n;p) provided by a reference generator [4] (not shown) and an N×1 output data vector y(n;p)={tilde over (X)}(n;p){tilde over (w)}(p) that would be provided by an (M1+1)-element linear combining operation (not shown). The unconstrained weight adaptation algorithm [21] optimizes the same metric as the unconstrained weight adaptation algorithm [6] depicted in prior art FIG. 1, e.g., the sum-of-squares error metric F({tilde over (w)}(p);n,p)=∥s(n;p)−{tilde over (X)}(n;p){tilde over (w)}(p)∥22. However, the complexity of the unconstrained weight adaptation algorithm is O((M1+1)ν) in [21], rather than O(Mν) in [6], where ν is the complexity order of the algorithm, e.g., ν=2 if a sum-of-squares metric is used in both Figures. Additionally, the weight adaptation algorithm can exploit commonality between the enhanced data matrices {{tilde over (X)}(n;p)}p=1P, i.e., the common update-set data matrix X1(n) contained within each enhanced data matrix, to share results of operations on X1(n) performed for each algorithm port, resulting in an additional reduction in computational complexity for the overall multiport processor.

The updated (M1+1)×1 port p enhanced combiner vector {tilde over (w)}(p) is then demultiplexed (DMX'd) [17] into an updated M1×1 port p update-set weight vector w1(p) comprising the first M1 elements of {tilde over (w)}(p), and a new port p held-set scalar multiplier g0(p) comprising the last element of {tilde over (w)}(p). The held-set scalar multiplier g0(p) is then multiplied by the current (from the held-set weight extractor [13]) M0×1 port p held-set weights w0(p) [18] to form updated port p held-set weights w0(p)←w0(p)g0(p), and multiplexed (MUX'd) [19] with the updated M1×1 port p update-set weight vector w1(p), in accordance with the current update-set selection algorithm [7], to form updated M×1 dimensional port p weight vector w(p)=M1(n)w1(p)+M0(n)w0(p). This weight vector is then stored in memory [20], allowing its use as an initial combiner weight vector in a subsequent adapt block. The weight vector can also be used in a port p data-path linear combiner (not shown) for parallel or subsequent data-path processing operations used in the overall system.
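A hedged sketch of the FIG. 5 uncoupled multiport update follows; each port reuses the common update-set data matrix X1(n) and appends its own held-set output column, with a per-port least-squares solve standing in for the bank of unconstrained algorithms [21]. All names and the solver choice are assumptions.

```python
# Hedged sketch of the uncoupled multiport SCPU adapt-path update of FIG. 5.
import numpy as np

def scpu_multiport_uncoupled(X, S, W, update_idx):
    """X: N x M block, S: N x P reference matrix, W: current M x P complex weights."""
    M, P = W.shape
    held_idx = np.setdiff1d(np.arange(M), update_idx)
    X1, X0 = X[:, update_idx], X[:, held_idx]               # shared columnar DMX [11]
    W_new = W.copy()
    for p in range(P):
        w0 = W[held_idx, p]                                  # port-p held-set weights [13]
        y0 = X0 @ w0                                         # port-p held-set output [14]
        X_tilde = np.hstack([X1, y0[:, None]])               # shared X1 plus per-port column [15]
        w_tilde, *_ = np.linalg.lstsq(X_tilde, S[:, p], rcond=None)   # per-port algorithm [21]
        W_new[update_idx, p] = w_tilde[:-1]                  # updated update-set weights
        W_new[held_idx, p] = w_tilde[-1] * w0                # held-set weights scaled by g0(p)
    return W_new
```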

FIG. 6 is a view of a nonblind, multiport, fully-coupled SCPU adapt path weight update method, depicting use of projection matrices (implemented using simple multiplexing and demultiplexing operations) to separate data and weights into unconstrained and subspace-constrained components, allowing use of an unconstrained multiport weight adaptation algorithm of arbitrary type and structure after the subspace separation procedure. Over adapt block n, the N×M data matrix X(n) provided by the 1:N S/P bank [2] (not shown) is separated into an N×M1 dimensional update-set data matrix X1(n)=X(n)M1(n) and an N×M0 dimensional held-set data matrix X0(n)=X(n)M0(n), using a columnar matrix demultiplexer (DMX) [11] and the update-set and held-set projection matrices provided over adapt block n from the update-set selection algorithm [7]. The M×M0 dimensional held-set projection matrix M0(n) is additionally used to extract the M0×P dimensional held-set combiner weights W0=M0T(n)W from the M×P dimensional combiner weight matrix W stored in current memory [20], e.g., computed over a prior adapt block, using a held-set multiport weight extractor [22]. The held-set data matrix X0(n) is then multiplied by the M0×P held-set multiport combiner weights W0 through a linear combiner [23], yielding N×P multiport held-set output data matrix Y0 (n)=X0(n)W0. X1(n) and Y0(n) are then combined into an N×(M1+P) dimensional enhanced data matrix {tilde over (X)}(n)=[X1(n) Y0(n)] using a column-wise multiplexing (MUX) operation [24].

The enhanced data matrix is then input to an unconstrained multiport weight adaptation algorithm [25] that adjusts every element of an (M1+P)×P enhanced multiport combiner matrix

$$ \tilde{W} = \begin{pmatrix} W_{1} \\ G_{0} \end{pmatrix} $$
to optimize a metric of similarity between an N×P reference matrix S(n) provided by a multiport reference generator (not shown) and an N×P output data matrix Y(n)={tilde over (X)}(n){tilde over (W)} that would be provided by an (M1+P)×P element linear combining operation (not shown), e.g., the sum-of-squares error metric F({tilde over (W)};n)=∥S(n)−{tilde over (X)}(n){tilde over (W)}∥F2, where ∥⋅∥F denotes the Frobenius matrix norm. However, the complexity of the unconstrained weight adaptation algorithm is O(P(M1+1)ν) in [25], where ν is the complexity order of the algorithm, e.g., ν=2 if a sum-of-squares metric is used to optimize {tilde over (W)}.

The updated (M1+P)×P dimensional enhanced combiner matrix {tilde over (W)} is then demultiplexed (DMX'd) [26] into an updated M1×P dimensional update-set weight matrix W1 comprising the first M1 rows of {tilde over (W)}, and a new P×P dimensional held-set multiplier matrix G0 comprising the last P rows of {tilde over (W)}. The held-set multiplier matrix G0 is then multiplied by the current M0×P held-set weights W0 [27] to form updated held-set weights W0←W0G0, and multiplexed (MUX'd) [28] with the updated M1×P update-set weight matrix W1, in accordance with the current update-set selection algorithm [7], to form updated M×P dimensional weight matrix W=M1(n)W1+M0(n)W0. This weight matrix is then stored in memory [20], allowing its use as an initial combiner weight matrix in a subsequent adapt block. The weight matrix can also be used in a multiport data-path linear combiner (not shown) for parallel or subsequent data-path processing operations used in the overall system.
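A hedged sketch of the FIG. 6 fully-coupled multiport update follows, again with a least-squares solve standing in for the unconstrained multiport algorithm [25]; all names are illustrative assumptions.

```python
# Hedged sketch of the fully-coupled multiport SCPU adapt-path update of FIG. 6.
import numpy as np

def scpu_multiport_coupled(X, S, W, update_idx):
    """X: N x M block, S: N x P reference matrix, W: current M x P complex weights."""
    M, P = W.shape
    held_idx = np.setdiff1d(np.arange(M), update_idx)
    X1, X0 = X[:, update_idx], X[:, held_idx]                # columnar DMX [11]
    W0 = W[held_idx, :]                                      # held-set multiport weights [22]
    Y0 = X0 @ W0                                             # N x P held-set outputs [23]
    X_tilde = np.hstack([X1, Y0])                            # N x (M1+P) enhanced data [24]
    W_tilde, *_ = np.linalg.lstsq(X_tilde, S, rcond=None)    # unconstrained multiport solve [25]
    M1 = len(update_idx)
    W1, G0 = W_tilde[:M1, :], W_tilde[M1:, :]                # DMX into W1 and P x P G0 [26]
    W_new = W.copy()
    W_new[update_idx, :] = W1
    W_new[held_idx, :] = W0 @ G0                             # W0 <- W0 G0 [27], then MUX [28]
    return W_new
```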

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

A method for processing digital signals by any adaptive processor (as a single element or set of interacting elements, and whether entirely embodied in physical hardware or in a combined form of hardware, special-purpose firmware, and general-purpose processing software applied to effect digital signal processing) that adjusts signal weights on the digital signal(s) transmitted, received, or both, by or through said adaptive processor, in order to optimize an adaptation criterion responsive to a functional purpose or the externalities (transient, temporary, situational, and even permanent) for that processor, is explained. This adaptation criterion for the adaptive algorithm may be any of a signal or parameter estimate, a measured quality, or any combination thereof.

This method performs a linear transformation of the processor parameters being adapted from M dimensions to (M1+L) dimensions in each adaptation event, where M1<<M and L<<M, such that M1 weights are updated without constraints and M0=M−M1 weights are subjected to soft constraints that force them into an L-dimensional subspace spanned by those weights at the beginning of the adaptation period. The same dimensionality reduction, using the same linear transformation, is applied to the input data. The reduced-dimensionality weights are then adapted using the same optimization strategy employed by the adaptive processor, except with input data that has also been reduced in dimensionality. In a preferred embodiment, the reduced-dimensionality weights are adapted using exactly the same optimization strategy. In an alternative embodiment, as when there exists any hardware, software, or combined hardware-and-software differentiation between “adapt-path” operations used to tune the adaptive processor and “data-path” operations used by the adaptive processor during and after tuning, the method adapts the reduced-dimensionality weights using substantively the same optimization strategy employed by the adaptive processor, applied to input data to which the same dimensionality reduction has been applied.

The invention has numerous advantages over the conventional PU approach. These include:

    • Substantive reduction or elimination of misadjustment effects induced by the hard linear constraint employed in the conventional PU method.
    • Applicability to any optimization function, including functions based on optimal (maximum-likelihood, maximum a posteriori, minimum-mean-square) estimation strategies, and methods such as the analytic constant modulus algorithm (ACMA) and cumulant-based techniques that have very high-order complexity.
    • Ability to reduce adapt block size N significantly, e.g., to N<M, even when the unconstrained approach experiences instability issues at that block size.
    • Ability to develop optimization quality measures, e.g., the Cramer-Rao bound on parameter or signal estimation performance, that also exploit the dimensionality reduction, and that can track the performance degradation (relative to the unconstrained solution) induced by the partial update.
    • Ability to operate with much lower update set sizes than conventional PU, resulting in further reduction in complexity, and therefore cost, of adapt-path processing.
    • Ability to be implemented in highly distributed processing architectures, e.g., general-purpose graphics processing units (GPGPUs), that can further exploit the reduced complexity of the approach, or allow processing over multiple parallel update sets with minimal intercommunication between units.
    • Applicability to other problems where dimensionality is a known limitation, e.g., pattern recognition over feature sets with large numbers of parameters.

The approach can be used with any update-set selection strategy developed to date, or with new methods exploiting quality measurement advantages of the approach.

The invention is motivated by interpreting prior-art partial-update approaches as hard-constrained optimization algorithms, in which a complex combiner weight vector w having dimension M×1 is updated to optimize metric F(w;n) over adapt block n, i.e.,

$$ w \leftarrow \operatorname*{arg\,opt}_{w \in \mathbb{C}^{M}} F(w;n), \qquad \text{(Eq1)} $$
subject to additional linear constraint
$$ (w')_{m \in \mathcal{M}_{0}(n)} = (w)_{m \in \mathcal{M}_{0}(n)}, \qquad \text{(Eq2)} $$
where ℳ0(n)={m(1;n), . . . , m(M0;n)}⊂{1, . . . , M}, referred to here as the block n held-set, is a set of M0<M indices of weights held constant over adapt block n, and where w in (Eq2) is the combiner weight vector at the beginning of the adapt block. This resultant constrained optimization criterion can be written in compact matrix algebra as

$$ w \leftarrow \operatorname*{arg\,opt}_{w' \in \mathbb{C}^{M}} \Bigl\{ F(w';n) \;\Bigm|\; M_{0}^{T}(n)\,w' = M_{0}^{T}(n)\,w \Bigr\}, \qquad \text{(Eq3)} $$
where M0(n)=[eM(m0)]m0∈ℳ0(n) is the M×M0 sparse held-set projection matrix, and where eM(m0)=[δ(m−m0)]m=1M is the m0th M×1 Euclidean basis vector and δ(k) is the Kronecker delta function. Example prior-art partial-update algorithms that can be expressed in this manner include:

    • The partial-update normalized least-mean-squares (PU-NLMS) algorithm, which modifies the normalized least-mean-squares (NLMS) algorithm taught in [Nagumo67]

$$ y(n) = x^{T}(n)\,w, \qquad \text{(Eq4)} $$

$$ w \leftarrow w + \mu\,\frac{x^{*}(n)}{\lVert x(n)\rVert_{2}^{2}}\,\bigl(s(n) - y(n)\bigr), \quad 0 < \mu \le 1 \qquad \text{(Eq5)} $$

$$ \phantom{w} = \arg\min_{w' \in \mathbb{C}^{M}} \lVert w' - w\rVert_{2}^{2} \quad \text{subject to} \quad x^{T}(n)\,w' = \hat{s}(n), \qquad \text{(Eq6)} $$

$$ \hat{s}(n) = \mu\,s(n) + (1-\mu)\,y(n), \quad 0 < \mu \le 1 \qquad \text{(Eq7)} $$

    •  at adapt symbol index n, where x(n) is an M×1 data vector defined over adapt symbol index n, s(n) is a reference scalar known over adapt symbol index nsym, μ is the NLMS adaptive stepsize, and ∥⋅∥2 is the L-2 norm, and where (⋅)T and (⋅)* denote the matrix transpose and complex conjugation operations, respectively. Addition of the held-set weight-update constraint
      M0T(n)w′=M0T(n)w,  (Eq8)
    •  to (Eq6) yields the PU-NLMS algorithm taught in [Douglas94,Schertler98,Dogancay01],

$$ y(n) = x^{T}(n)\,w, \qquad \text{(Eq9)} $$

$$ w_{l} = M_{l}^{T}(n)\,w, \quad l = 0, 1 \qquad \text{(Eq10)} $$

$$ x_{l}(n) = M_{l}^{T}(n)\,x(n), \quad l = 0, 1 \qquad \text{(Eq11)} $$

$$ w_{1} \leftarrow w_{1} + \mu\,\frac{x_{1}^{*}(n)}{\lVert x(n)\rVert_{2}^{2}}\,\bigl(s(n) - y(n)\bigr), \quad 0 < \mu \le 1, \qquad \text{(Eq12)} $$

$$ w \leftarrow M_{1}(n)\,w_{1} + M_{0}(n)\,w_{0} \qquad \text{(Eq13)} $$

    •  where M1(n)=[eM(m1)]m1∈ℳ1(n) is the M×M1 update-set projection matrix defined over adapt symbol n, and where ℳ1(n)={m∈{1, . . . , M}: m∉ℳ0(n)} is the complementary update-set defined over adapt symbol n.
    • The partial-update affine projections (PU-AP) algorithm, which modifies the affine projection algorithm taught in [Ozeki84,Gay93]

$$ y(n) = X(n)\,w, \qquad \text{(Eq14)} $$

$$ w \leftarrow w + \mu\,X^{\dagger}(n)\,\bigl(s(n) - y(n)\bigr), \quad 0 < \mu \le 1, \qquad \text{(Eq15)} $$

$$ \phantom{w} = \arg\min_{w' \in \mathbb{C}^{M}} \lVert w' - w\rVert_{2}^{2} \quad \text{subject to} \quad X(n)\,w' = \hat{s}(n), \qquad \text{(Eq16)} $$

$$ \hat{s}(n) = \mu\,s(n) + (1-\mu)\,y(n), \quad 0 < \mu \le 1, \qquad \text{(Eq17)} $$

    •  over N-symbol adapt block n, 1≤N≤M, where X(n)=[x(nN+1) . . . x(nN+N)]T is an N×M data matrix defined over adapt block n, s(n)=[s(nN+1) . . . s(nN+N)]T is a known N×1 reference vector defined over adapt block n, μ is an adaptive stepsize, and (⋅)† denotes the matrix pseudoinverse operation, given by
      X†(n)=XH(n)(X(n)XH(n))−1,  (Eq18)
    •  for rank{X(n)}=N≤M, and where (⋅)H and (⋅)−1 denote the matrix conjugate-transpose (Hermitian) and inverse operations, respectively. Addition of constraint (Eq8) to (Eq16) yields the PU-AP taught in [Naylor04],
      y(n)=X(n)w,  (Eq19)
      wl=MlT(n)w, l=0,1,  (Eq20)
      Xl(n)=X(n)Ml(n), l=0,1,  (Eq21)
      w1←w1+μX1†(n)(s(n)−y(n)), 0<μ≤1,  (Eq22)
      w←M1(n)w1+M0(n)w0.  (Eq23)
    • The partial-update block least-squares (PU-BLS) algorithm, which modifies the block least-squares (BLS) algorithm given by

$$ y(n) = X(n)\,w, \qquad \text{(Eq24)} $$

$$ w \leftarrow (1-\mu)\,w + \mu\,X^{\dagger}(n)\,s(n), \qquad \text{(Eq25)} $$

$$ \phantom{w} = \arg\min_{w' \in \mathbb{C}^{M}} \lVert \hat{s}(n) - X(n)\,w'\rVert_{2}^{2}, \qquad \begin{cases} \hat{s}(n) = \mu\,s(n) + (1-\mu)\,y(n) \\ y(n) = X(n)\,w \end{cases} \qquad \text{(Eq26)} $$

    •  over N-symbol adapt block n, N≥M, where ∥⋅∥2 is the L-2 norm and μ is the BLS adaptive stepsize, and where X(n)=[x(nN+1) . . . x(nN+N)]T is an N×M data matrix defined over adapt block n, s(n)=[s(nN+1) . . . s(nN+N)]T is a known N×1 reference vector defined over adapt block n, and (⋅)† denotes the matrix pseudoinverse operation, given by
      X†(n)=(XH(n)X(n))−1XH(n)  (Eq27)
    •  for rank{X(n)}=M≤N. Addition of constraint (Eq8) to (Eq26) yields PU-BLS algorithm
      y(n)=X(n)w,  (Eq28)
      wl=MlT(n)w, l=0,1,  (Eq29)
      Xl(n)=X(n)Ml(n), l=0,1,  (Eq30)
      w1←(1−μ)w1+μX1†(n)(s(n)−y(n)), 0<μ≤1,  (Eq31)
      w←M1(n)w1+M0(n)w0.  (Eq32)

The BLS and PU-BLS algorithms can be interpreted as extensions of the AP and PU-AP algorithms to adapt block sizes N≥M. Similarly, the NLMS and PU-NLMS algorithms can be interpreted as implementations of the AP and PU-AP algorithms for N=1.
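The PU-NLMS recursion (Eq9)-(Eq13) can be transcribed almost directly; the hedged sketch below assumes complex NumPy arrays, a scalar reference per adapt sample, and a nonzero input vector.

```python
# Hedged transcription of the PU-NLMS update (Eq9)-(Eq13) for one adapt sample:
# only the update-set weights move; held-set weights are carried over unchanged.
import numpy as np

def pu_nlms_step(x, s, w, update_idx, mu=0.5):
    """x, w: length-M complex vectors; s: scalar reference; 0 < mu <= 1."""
    y = x @ w                                                # (Eq9)  y(n) = x^T(n) w
    e = s - y
    norm_sq = np.vdot(x, x).real                             # ||x(n)||_2^2 (assumed nonzero)
    w_new = w.copy()                                         # (Eq10) held-set weights unchanged
    w_new[update_idx] = w[update_idx] + mu * np.conj(x[update_idx]) * e / norm_sq   # (Eq11)-(Eq12)
    return w_new                                             # (Eq13) recombined weight vector
```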

A number of observations can immediately be made from this interpretation of the partial-update procedure. First, any linear constraint can induce severe misadjustment from the optimal solution sought by the processor. This can manifest as both a convergent or steady-state bias from the optimal solution, and a “jitter” or fluctuation about that steady-state solution. In some applications, e.g., phased array radar applications where the received radar waveform must be extracted from strong clutter and jamming, this can cause the system to fail entirely. Even if the reference signal is received at high SINR, this can lead to well-known “hypersensitivity” issues that degrade system performance from the optimal solution.

Second, the linear constraint can only be easily added to a small subset of optimization functions. In many cases, strict enforcement of the constraint significantly increases complexity of the original method.

The subspace-constrained approach overcomes both of these problems, by replacing the hard linear constraint M0T(n)w′=M0T(n)w with a softer subspace constraint
M0T(n)w′∝M0T(n)w, i.e., M0T(n)w′=M0T(n)w g0, g0∈ℂ,  (Eq33)
where the scalar held-set multiplier g0 and the update-set weights w1=M1T(n)w are jointly adjusted to optimize the unconstrained criterion given in (Eq1), i.e., by adapting (M1+1)×1 enhanced weight vector

$$ \tilde{w} = \begin{pmatrix} w_{1} \\ g_{0} \end{pmatrix} $$
using optimization formula

$$ \tilde{w} \leftarrow \operatorname*{arg\,opt}_{\tilde{w} \in \mathbb{C}^{M_{1}+1}} F(\tilde{w};n) \qquad \text{(Eq34)} $$
over each data block. The full output weight vector is then given by
w=M1(n)w1+M0(n)w0g0,  (Eq35)

which is efficiently computed using vector-scalar multiplies and multiply-free multiplexing (MUX) operations.

For the exemplary NLMS, AP, and BLS optimization criteria given in (Eq6), (Eq16), and (Eq26), respectively, the SCPU algorithms are implemented using the following procedure:

    • Separate w into update-set and held-set components w1 and w0 using multiply-free demultiplexing (DMX) operations.
    • For the SCPU-AP/NLMS algorithms, and for the SCPU-BLS algorithm with μ<1, construct the (M1+1)×1 dimensional weight vector

$$ \tilde{w} = \begin{pmatrix} w_{1} \\ 1 \end{pmatrix}. \qquad \text{(Eq36)} $$

    • Separate X(n) into update-set and held-set components X1(n) and X0(n) using multiply-free columnar DMX operations.
    • Compute y0(n)=X0(n)w0. For the SCPU-AP/NLMS algorithms, further compute y1(n)=X1(n)w1 and y(n)=y1(n)+y0(n).
    • Construct N×(M1+1) dimensional SCPU data matrix
      {tilde over (X)}(n)=[X1(n)y0(n)].  (Eq37)
    •  Note that y(n)={tilde over (X)}(n){tilde over (w)} constructs the output data from the prior weight set.
    • Optimize {tilde over (w)} using the original unconstrained algorithm, with dimensionality reduced from M to M1+1, yielding
      SCPU-AP: {tilde over (w)}←{tilde over (w)}+μ{tilde over (X)}†(n)(s(n)−y(n)), 0<μ≤1  (Eq38)
      SCPU-BLS: {tilde over (w)}←(1−μ){tilde over (w)}+μ{tilde over (X)}†(n)s(n), 0<μ≤1,  (Eq39)
    •  where the SCPU-AP algorithm degenerates to SCPU-NLMS if N=1, and extends to SCPU-BLS if N≥M1.
    • Update w1 and w0 using formula
      w1←[({tilde over (w)})m]m=1M1  (Eq40)
      w0←g0w0, g0=({tilde over (w)})M1+1,  (Eq41)
    •  where ({tilde over (w)})m denotes the mth element of vector {tilde over (w)}.
    • Reconstruct the linear combiner weights using
      w=M1(n)w1+M0(n)w0.  (Eq42)
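A hedged sketch of the SCPU-AP branch of the procedure above ((Eq36)-(Eq38) and (Eq40)-(Eq42)) is given below; np.linalg.pinv stands in for the pseudoinverse, and the function and variable names are assumptions.

```python
# Hedged sketch of one SCPU-AP adapt block, following the bulleted procedure above.
import numpy as np

def scpu_ap_block(X, s, w, update_idx, mu=1.0):
    """X: N x M block, s: N x 1 reference, w: current M x 1 complex weights, 0 < mu <= 1."""
    M = X.shape[1]
    held_idx = np.setdiff1d(np.arange(M), update_idx)
    w1, w0 = w[update_idx], w[held_idx]                      # DMX of w into w1 and w0
    X1, X0 = X[:, update_idx], X[:, held_idx]                # columnar DMX of X(n)
    y0 = X0 @ w0
    X_tilde = np.hstack([X1, y0[:, None]])                   # (Eq37) SCPU data matrix
    w_tilde = np.concatenate([w1, [1.0 + 0j]])               # (Eq36) initial enhanced weights
    y = X_tilde @ w_tilde                                    # output data from the prior weight set
    w_tilde = w_tilde + mu * np.linalg.pinv(X_tilde) @ (s - y)   # (Eq38) SCPU-AP update
    w_new = np.empty_like(w)
    w_new[update_idx] = w_tilde[:-1]                         # (Eq40)
    w_new[held_idx] = w_tilde[-1] * w0                       # (Eq41) w0 <- g0 w0
    return w_new                                             # (Eq42) reconstructed combiner weights
```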

The SCPU approach employs a data matrix with a nominal dimensionality increase of one over the equivalent PU data matrix, and requires an additional M0 complex multiplies to update the held-set weight vector. This complexity increase is substantive for the PU-NLMS algorithm, which has O(M) complexity on the data path and O(M1) complexity on the adapt path. However, this complexity increase is minor for the PU-BLS algorithm, which has O(M1²) complexity on the adapt path, and for the PU-AP algorithm if the adapt block size N is less than but on the order of the number of updated weights M1 (N≲M1). Because the SCPU approach optimizes g0=({tilde over (w)})M1+1 over the complex field, the algorithm is also guaranteed to have lower misadjustment than the equivalent PU method, which constrains g0 to unity.

This implementation of an SCPU algorithm obtains a higher degree of efficiency (in comparison with either an unconstrained partial-update or a full-update algorithm) by reducing the amount of repetitive processing and comparison needed to obtain the most beneficial level, and mixture, of signal weightings that, when applied to the next processing effort, will produce the correct answer within the noise constraints. If applied so as to remove arbitrarily imposed limits on either the processing depth or the number of criteria to be evaluated, a satisficing level of accuracy can be reached without sacrificing the capacities that would otherwise be artificially constrained. Because the weighting dimensionality is reduced by, and to, the level of the constraints on the subspace, without changing the data path, the efficiency of the adaptation process is improved over the full analytical processing effort.

The SCPU algorithm employs a data matrix with a nominal dimensionality increase of one over the equivalent partial-update (PU) data matrix, and employs an additional O(M0) complex scalar-vector multiply to update the held-set weight vector. This can be expressed in the following compact matrix notation:

$$ \tilde{M}(n) = \bigl[\,M_{1}(n) \quad M_{0}(n)\,M_{0}^{T}(n)\,w\,\bigr] \qquad \text{(Eq43)} $$

$$ \tilde{X}(n) = X(n)\,\tilde{M}(n) \qquad \text{(Eq44)} $$

$$ \tilde{w} = \operatorname*{arg\,opt}_{\tilde{w} \in \mathbb{C}^{M_{1}+1}} F(\tilde{w};n) \qquad \text{(Eq45)} $$

$$ y(n) = \tilde{X}(n)\,\tilde{w} \qquad \text{(Eq46)} $$

$$ w \leftarrow \tilde{M}(n)\,\tilde{w}, \qquad \text{(Eq47)} $$
where {tilde over (M)}(n) is an M×(M1+1) sparse mapping matrix that reduces dimensionality of X(n) ahead of the optimization algorithm described symbolically in (Eq45).
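The compact form (Eq43)-(Eq47) can be sketched as follows; the dense mapping matrix is built explicitly purely for illustration (a practical implementation would keep the sparse mapping implicit), and a least-squares solve is an assumed instance of the generic optimization in (Eq45).

```python
# Hedged sketch of the compact SCPU mapping (Eq43)-(Eq47).
import numpy as np

def scpu_mapping_matrix(w, update_idx):
    M = w.shape[0]
    held_idx = np.setdiff1d(np.arange(M), update_idx)
    I = np.eye(M, dtype=w.dtype)
    M1, M0 = I[:, update_idx], I[:, held_idx]                # sparse projection matrices
    last_col = M0 @ (M0.T @ w)                               # M0(n) M0^T(n) w: the held-set direction
    return np.hstack([M1, last_col[:, None]])                # (Eq43) M x (M1+1) mapping matrix

def scpu_compact_update(X, s, w, update_idx):
    M_tilde = scpu_mapping_matrix(w, update_idx)
    X_tilde = X @ M_tilde                                    # (Eq44) reduced-dimensionality data
    w_tilde, *_ = np.linalg.lstsq(X_tilde, s, rcond=None)    # (Eq45) with an LS criterion assumed
    return M_tilde @ w_tilde                                 # (Eq47) full M x 1 weight vector
```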

This compact notation reveals some additional advantages of the approach:

    • The approach is inherently more stable than the unconstrained algorithm on a block-by-block basis, because it updates fewer weights than the unconstrained method, without introducing explicit hard constraints that lead to adaptive “jitter.” Hypersensitivity effects due to large noise subspaces in the received data should be especially reduced in the SCPU method.
    • The approach is usable with any optimization criterion, including non-quadratic criteria such as general and analytic constant modulus cost functions [Treichler83,Agee86,Van Der Veen 96], cumulant based objective functions, and eigenvalue-based objective functions [Agee89b,Agee90].
    • The approach admits both SCPU maximum-likelihood signal and parameter estimation approaches, and reduced-complexity, constrained quality metrics such as signal-to-interference-and-noise ratio (SINR), Cramer-Rao bounds on parameter estimates, and information-theoretic channel capacity. These metrics may lead to new update-set selection strategies that can overcome identified issues with methods developed to date.

The mapping given in (Eq43) can be extended in many ways to enhance other attributes of the algorithm, e.g., ability to track multiple signals, new selection strategies, and so on. In particular, the approach immediately yields nonblind multiport extensions in which adaptation algorithms are used to extract multiple signals from a received environment.

Two multiport extensions of the AP and BLS methods are taught here based on the unconstrained nonblind algorithms given by
AP: W←W+μX†(n)(S(n)−Y(n)), 0<μ≤1  (Eq48)
BLS: W←(1−μ)W+μX†(n)S(n), 0<μ≤1,  (Eq49)
where W is an M×P combiner matrix, Y(n)=X(n)W is an N×P matrix of combiner output data formed over adapt block n using W, and S(n) is an N×P matrix of reference data known over adapt block n. These multiport extensions include the following:

    • An uncoupled multiport extension in which (Eq43) is replaced by P separate mapping matrices
      {tilde over (M)}(n;p)=[M1(n)M0(n)M0T(n)w(p)], p=1, . . . ,P,  (Eq50)
      {tilde over (X)}(n;p)=X(n){tilde over (M)}(n;p), p=1, . . . ,P,  (Eq51)
    •  i.e., the SCPU constraint (Eq33) is broadened to P separate constraints
      M0T(n)w′(p)=M0T(n)w(p)g0(p), g0(p)∈ℂ, p=1, . . . ,P.  (Eq52)
    •  The uncoupled SCPU-BLS algorithm is then given by

$$ \tilde{X}(n;p) = X(n)\,\tilde{M}(n;p) \qquad \text{(Eq53)} $$

$$ \tilde{w}(p) = \begin{pmatrix} M_{1}^{T}(n)\,w(p) \\ 1 \end{pmatrix} \qquad \text{(Eq54)} $$

$$ \tilde{w}(p) \leftarrow (1-\mu)\,\tilde{w}(p) + \mu\,\tilde{X}^{\dagger}(n;p)\,s(n;p), \quad 0 < \mu \le 1 \qquad \text{(Eq55)} $$

$$ y(n;p) = \tilde{X}(n;p)\,\tilde{w}(p) \qquad \text{(Eq56)} $$

$$ w(p) \leftarrow \tilde{M}(n;p)\,\tilde{w}(p) \qquad \text{(Eq57)} $$

    •  for each port p=1, . . . , P where s(n;p) and w(p) are the pth column of S(n) and W, respectively, and where (Eq54) is only needed if μ<1.
    • A fully-coupled multiport extension, in which (Eq43) is replaced by global mapping matrix
      {tilde over (M)}(n)=[M1(n)M0(n)M0T(n)W],  (Eq58)
    •  i.e., the SCPU constraint (Eq33) is broadened to
      M0T(n)W′=M0T(n)WG0, G0∈ℂP×P.  (Eq59)
    •  The fully-coupled SCPU-BLS algorithm is then given by

$$ \tilde{X}(n) = X(n)\,\tilde{M}(n), \qquad \text{(Eq60)} $$

$$ \tilde{W} = \begin{pmatrix} M_{1}^{T}(n)\,W \\ I_{P} \end{pmatrix} \qquad \text{(Eq61)} $$

$$ \tilde{W} \leftarrow (1-\mu)\,\tilde{W} + \mu\,\tilde{X}^{\dagger}(n)\,S(n), \quad 0 < \mu \le 1 \qquad \text{(Eq62)} $$

$$ Y(n) = \tilde{X}(n)\,\tilde{W} \qquad \text{(Eq63)} $$

$$ W \leftarrow \tilde{M}(n)\,\tilde{W}, \qquad \text{(Eq64)} $$
and where (Eq61) is only needed if μ<1.

In an efficient embodiment, the uncoupled multiport SCPU-BLS extension is implemented using whitening methods that exploit the common components of {{tilde over (X)}(n;p)}p=1P, i.e., the N×M1 dimensional update-set data matrix X1(n)=X(n)M1(n).

In particular, using the QR decomposition of {tilde over (X)}(n;p), given by

$$ \{Q, R\} = \mathrm{QRD}(X), \qquad \begin{cases} R = \mathrm{chol}\{X^{H}X\} \\ Q = X R^{-1} \end{cases} \qquad \text{(Eq65)} $$

$$ \Rightarrow \quad \begin{cases} X = Q R \\ Q^{H} Q = I_{M} \end{cases} \qquad \text{(Eq66)} $$

for a general N×M matrix X with rank{X}=M≤N, where IM is the M×M identity matrix and chol{⋅} is the Cholesky decomposition yielding upper-triangular matrix R with real-positive diagonal values, the uncoupled multiport SCPU-BLS algorithm given in (Eq55) can be efficiently implemented by first computing the QRD of the common update-set data matrix,
{Q1,R11}=QRD(X1(n)),  (Eq67)
and then updating each port p using the recursion

$$ y_{0} \leftarrow X_{0}(n)\,w_{0}(p) \qquad \text{(Eq68)} $$

$$ r_{10} \leftarrow Q_{1}^{H}\,y_{0} \qquad \text{(Eq69)} $$

$$ u_{1} \leftarrow Q_{1}^{H}\,s(n;p) \qquad \text{(Eq70)} $$

$$ g_{0}(p) \leftarrow \frac{y_{0}^{H}\,s(n;p) - r_{10}^{H}\,u_{1}}{\lVert y_{0}\rVert_{2}^{2} - \lVert r_{10}\rVert_{2}^{2}} \qquad \text{(Eq71)} $$

$$ w_{1}(p) \leftarrow (1-\mu)\,w_{1}(p) + \mu\,R_{11}^{-1}\bigl(u_{1} - r_{10}\,g_{0}(p)\bigr), \quad 0 < \mu \le 1 \qquad \text{(Eq72)} $$

$$ g_{0}(p) \leftarrow (1-\mu) + \mu\,g_{0}(p), \quad 0 < \mu \le 1 \qquad \text{(Eq73)} $$
where

$$ \tilde{w}(p) = \begin{pmatrix} w_{1}(p) \\ g_{0}(p) \end{pmatrix}. $$
This recursion also admits unbiased quality statistic

$$ \tilde{\gamma}_{\max}(n;p) = \left(1 - \frac{M_{1}+1}{N}\right)\left(\frac{\tilde{\eta}}{1 - \tilde{\eta}}\right) - \frac{M_{1}+1}{N}, \qquad \tilde{\eta} = \frac{\lVert u_{1}\rVert_{2}^{2} + \lVert u_{0}\rVert_{2}^{2}}{\lVert s(n;p)\rVert_{2}^{2}} \qquad \text{(Eq74)} $$
for each port p, which estimates the relative power between the port p reference signal and the background clutter at the output of the port p linear combiner, also referred to as the signal-to-interference-and-noise ratio (SINR) of the combiner output signal.
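A hedged NumPy sketch of the recursion (Eq67)-(Eq73) follows; it assumes X1(n) has full column rank (so R11 is invertible), that the denominator in (Eq71) is nonzero, and that the update-set indices are shared across all ports, as in the text.

```python
# Hedged sketch of the QR-based uncoupled multiport SCPU-BLS update (Eq67)-(Eq73):
# the QRD of the shared update-set data X1(n) is computed once and reused per port.
import numpy as np

def scpu_bls_multiport_qrd(X, S, W, update_idx, mu=1.0):
    """X: N x M block, S: N x P references, W: current M x P complex weights, 0 < mu <= 1."""
    M, P = W.shape
    held_idx = np.setdiff1d(np.arange(M), update_idx)
    X1, X0 = X[:, update_idx], X[:, held_idx]
    Q1, R11 = np.linalg.qr(X1)                               # (Eq67) shared QRD of X1(n)
    W_new = W.copy()
    for p in range(P):
        w0 = W[held_idx, p]
        y0 = X0 @ w0                                         # (Eq68)
        r10 = Q1.conj().T @ y0                               # (Eq69)
        u1 = Q1.conj().T @ S[:, p]                           # (Eq70)
        g0 = (np.vdot(y0, S[:, p]) - np.vdot(r10, u1)) / (np.vdot(y0, y0) - np.vdot(r10, r10))  # (Eq71)
        w1 = (1 - mu) * W[update_idx, p] + mu * np.linalg.solve(R11, u1 - r10 * g0)             # (Eq72)
        g0 = (1 - mu) + mu * g0                              # (Eq73)
        W_new[update_idx, p] = w1
        W_new[held_idx, p] = g0 * w0                         # held-set weights scaled by g0(p)
    return W_new
```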

The SCPU method is also easily extended to partially blind methods in which the reference vector s(n) is partially known at the receive processor over adapt block n, e.g., the reference vector has an unknown carrier or timing offset relative to the sequence contained in the input data sequence, and to fully blind methods in which the reference vector is unknown but has some known, exploitable structure. Specific examples include:

    • Carrier-timing tracking SCPU-BLS algorithms, in which s(n) has an unknown timing and/or carrier offset, e.g., due to propagation delay, Doppler shift and carrier LO uncertainty between the input data and an original transmitted signal containing the reference signal, or a combined frequency shift due to timing and carrier offset if the input data is derived from an OFDM or OFDMA demodulation process. This algorithm replaces the nonblind weight adaptation algorithm given in (Eq39) with
      \tilde{w} \leftarrow (1-\mu)\,\tilde{w} + \mu\,\tilde{X}^{\dagger}(n)\bigl(s(\hat{n}_{off};n) \circ \delta(\hat{\omega}_{off})\bigr)   (Eq75)

      s(n_{off};n) = \bigl[s(nN + n_{sym} + n_{off})\bigr]_{n_{sym}=1}^{N}   (Eq76)

      \delta(\omega_{off}) = \bigl[e^{j\omega_{off} n_{sym}}\bigr]_{n_{sym}=1}^{N}   (Eq77)
    •  where {s(nsym)} is a component of the transmitted signal that is known over the adapt block except for a timing offset noff and a carrier offset ωoff, and where "∘" is the element-wise (Hadamard) matrix multiplication operation. The timing and carrier offset can be optimized over each adapt block by setting

\{\hat{\omega}_{off}(n), \hat{n}_{off}(n)\} = \arg\max_{\omega_{off},\, n_{off}} \eta(\omega_{off}, n_{off}; n),   (Eq78)

\eta(\omega_{off}, n_{off}; n) = \frac{\bigl\|\tilde{Q}^H(n)\bigl(s(n_{off};n) \circ \delta(\omega_{off})\bigr)\bigr\|_2^2}{\|s(n_{off};n)\|_2^2}   (Eq79)

= \frac{\bigl\|\sum_{n_{sym}=1}^{N} \tilde{q}^*(n_{sym})\, s(nN+n_{sym}+n_{off})\, e^{j\omega_{off} n_{sym}}\bigr\|_2^2}{\sum_{n_{sym}=1}^{N} \bigl|s(nN+n_{sym}+n_{off})\bigr|^2},   (Eq80)

    •  where {tilde over (Q)}=[{tilde over (q)}(1) . . . {tilde over (q)}(N)]T is the Q-component of the QRD of {tilde over (X)}(n). Equation (Eq78) can be efficiently implemented using fast Fourier transform (FFT) methods if the frequency offset ωoff is completely unknown (acquisition phase), and using Gauss-Newton or Newton methods if the frequency offset ωoff is approximately known (tracking phase); a numerical sketch of the FFT-based search appears after this list of examples.
    •  Equation (Eq78) also admits quality statistic

\tilde{\gamma}(\omega_{off}, n_{off}; n) = \left(1 - \frac{M_1+1}{N}\right)\left(\frac{\tilde{\eta}(\omega_{off}, n_{off}; n)}{1-\tilde{\eta}(\omega_{off}, n_{off}; n)}\right) - \frac{M_1+1}{N},   (Eq81)

    •  which estimates the SINR of the combiner output signal.
    • Property-mapping SCPU-BLS algorithms, in which s(n) is a member of a known property set. These algorithms replace the nonblind weight adaptation algorithm given in (Eq39) with property-mapping recursion

y(n) = \tilde{X}(n)\,\tilde{w}   (Eq82)

\hat{s}(n) = \arg\min_{s \in \mathcal{D}(n)} \|s - y(n)\|   (Eq83)

\tilde{w} \leftarrow (1-\mu)\,\tilde{w} + \mu\,\tilde{X}^{\dagger}(n)\,\hat{s}(n), \quad 0 < \mu \le 1   (Eq84)

    •  where 𝒟(n) is a desired signal set, potentially variable as a function of adapt block n, to which s(n) is known to belong. For example, the constant-modulus property set 𝒟(n)={z∈ℂN: |(z)n|=1} yields
      {circumflex over (s)}(n)=sgn{y(n)}  (Eq85)
    •  where sgn{⋅} is the element-wise complex sign function, sgn{z}=z/|z| applied to each element, resulting in an SCPU-BLS constant-modulus algorithm; a numerical sketch of the constant-modulus case appears after this list of examples. Other exemplary mappings include known-modulus mappings in which the elements of s(n) have known magnitude but unknown phase, and decision-direction mappings in which each element of s(n) belongs to a known set of finite values, possibly with an unknown carrier offset.
    •  In all cases, the property-mapping algorithm is applicable to cases in which s(n) does not perfectly possess the property used by the algorithm, but substantively conforms to that property, e.g., |s(nN+nsym)|≈1.
    • Dominant-mode prediction (DMP) algorithms, in which s(n) is known to be substantively present in a linear subspace with known or estimable structure, such that
      s(n)≈(Us(n)UsH(n))s(n)  (Eq86)
    •  for a known or postulated N×Ns(n) orthonormal basis Us(n), Ns(n)<N, and/or such that s(n) is known to be substantively absent from a linear subspace with known or estimable structure, such that
      U⊥H(n)s(n)≈0  (Eq87)
    •  for a known or postulated complementary N×N⊥(n) orthonormal basis U⊥(n), in which Ns(n)+N⊥(n)≤N. If only one subspace is available, one can be derived from the other, for example by deriving U⊥(n) from IN−(Us(n)UsH(n)), or vice versa.
    •  In this case, the enhanced weight update algorithm is given by

\tilde{w} = \arg\max_{w \in \mathbb{C}^{M_1+1}} \tilde{\gamma}(w; n),   (Eq88)

\tilde{\gamma}(w; n) = \frac{w^H\bigl(\tilde{X}_s^H(n)\,\tilde{X}_s(n)\bigr)w}{w^H\bigl(\tilde{X}_{\perp}^H(n)\,\tilde{X}_{\perp}(n)\bigr)w}, \qquad \begin{cases} \tilde{X}_s(n) = U_s^H(n)\,\tilde{X}(n) \\ \tilde{X}_{\perp}(n) = U_{\perp}^H(n)\,\tilde{X}(n) \end{cases}   (Eq89)

    •  The enhanced combiner weights {tilde over (w)}max that maximize (Eq89), and the maximal value of (Eq89), {tilde over (γ)}max, are equal to the dominant solution {{tilde over (γ)}1,{tilde over (w)}1} of the DMP eigenequation,
      {tilde over (γ)}m({tilde over (X)}⊥H(n){tilde over (X)}⊥(n)){tilde over (w)}m=({tilde over (X)}sH(n){tilde over (X)}s(n)){tilde over (w)}m, {tilde over (γ)}m≥{tilde over (γ)}m+1.  (Eq90)
    •  The dominant eigenvalue {tilde over (γ)}1 also provides an estimate of the SINR of the combiner output signal, and can be used both to detect the target signal and to search over postulated subspaces to find the subspace that most closely contains or rejects s(n); a numerical sketch of this dominant-mode computation appears after this list of examples.
    •  Example subspaces include:
      • Known or postulated time slots used by s(n), such that |s(nN+nsym)|<<1/N∥s(n)∥2 over some known or searchable subset of symbol indices within the adapt block. This generates SCPU time-gated DMP (TG-DMP) algorithms.
      • Known or postulated frequency channels used by s(n), such that

\frac{\Bigl|\sum_{n_{sym}=1}^{N-1} h(n_{sym})\, s(nN+n_{sym})\, e^{-j\omega n_{sym}}\Bigr|^2}{\Bigl(\sum_{n_{sym}=1}^{N-1} h^2(n_{sym})\Bigr)\Bigl(\sum_{n_{sym}=1}^{N-1} \bigl|h(n_{sym})\, s(nN+n_{sym})\bigr|^2\Bigr)} \approx 1   (Eq91)

    •  over a known or searchable subset of frequency offsets {ω}, where {h(nsym)}nsym=1N is a lowpass windowing function, e.g., a Gaussian or Hamming window. This generates SCPU frequency-gated DMP (FG-DMP) algorithms.
      • Known or postulated CDMA codes used by s(n), such that

\frac{\Bigl|\sum_{n_{sym}=1}^{N-1} c^*(n_{sym}-n_{off})\, s(nN+n_{sym})\, e^{-j\omega_{off} n_{sym}}\Bigr|^2}{\Bigl(\sum_{n_{sym}=1}^{N-1} \bigl|c(n_{sym}-n_{off})\bigr|^2\Bigr)\Bigl(\sum_{n_{sym}=1}^{N-1} \bigl|s(nN+n_{sym})\bigr|^2\Bigr)} \approx 1   (Eq92)

    •  over a known or searchable subset of carrier offsets {ωoff} and timing offsets {noff}, where {c(nsym)} is a known spreading code. This generates SCPU code-gated DMP (CG-DMP) algorithms.
      • Known or postulated restricted isometry properties (RIP) possessed by s(n), such that it occupies a sparse subset of a basis Us that is known (oracular basis), or that satisfies some sparsity property (general RIP). This generates adaptive decompression algorithms in compressed sensing applications.
    • Conjugate self-coherence restoral (C-SCORE) algorithms, in which s(n) is known to have substantive conjugate self-coherence at some known or estimable frequency offset ω, such that

\frac{\Bigl|\sum_{n_{sym}=1}^{N} s^2(nN+n_{sym})\, e^{-j\omega n_{sym}}\Bigr|}{\sum_{n_{sym}=1}^{N} \bigl|s(nN+n_{sym})\bigr|^2} \approx 1.   (Eq93)

    •  In this case, the enhanced weight update algorithm is given by

\tilde{w} = \arg\max_{w \in \mathbb{C}^{M_1+1}} \tilde{\rho}(w \mid \omega; n),   (Eq94)

\tilde{\rho}(w \mid \omega; n) = \frac{\bigl|w^H\bigl(\tilde{X}^H(n)\,\Delta(\omega)\,\tilde{X}^*(n)\bigr)w^*\bigr|}{w^H\bigl(\tilde{X}^H(n)\,\tilde{X}(n)\bigr)w}, \qquad \Delta(\omega) = \mathrm{diag}\{e^{j\omega n_{sym}}\}_{n_{sym}=1}^{N}   (Eq95)

    •  for a postulated twice-carrier offset ω. The enhanced combiner weights {tilde over (w)}max that maximize (Eq95), and the maximal value of (Eq95), {tilde over (ρ)}max(ω;n), are equal to the dominant solution {{tilde over (ρ)}1(ω),{tilde over (w)}1(ω)} of the C-SCORE pseudo-eigenequation,
      {tilde over (ρ)}m(ω)({tilde over (X)}H(n){tilde over (X)}(n)){tilde over (w)}m(ω)=({tilde over (X)}H(n)Δ(ω){tilde over (X)}*(n)){tilde over (w)}*m(ω), {tilde over (ρ)}m≥{tilde over (ρ)}m+1.  (Eq96)
    •  The SCPU C-SCORE algorithm is expected to have application to BPSK, MSK, and GMSK signals, such as the 1 Mbps (BPSK) 802.11 DSSS signal. The algorithm also extends to carrier-tracking methods in which an FFT-based search over the twice-carrier offset ω is employed; in this case, the line spectrum used to detect the SOIs is the dominant pseudoeigenmode {tilde over (ρ)}max(ω;n).
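
One possible (illustrative) reading of the acquisition-phase search over (Eq78)-(Eq80) is sketched below: for each candidate timing offset, the metric η is evaluated on a grid of carrier offsets with a zero-padded FFT. The names Qt (for {tilde over (Q)}(n)), s_blk, n_off_grid, and n_fft are assumptions of the sketch, as is the 0-based indexing of the reference sequence.

```python
import numpy as np

def carrier_timing_search(Qt, s_blk, n_off_grid, n_fft=1024):
    """Evaluate the acquisition metric eta(w_off, n_off; n) of (Eq78)-(Eq80)
    on an FFT grid of carrier offsets for each candidate timing offset.

    Qt         : (N, M1+1) Q-component of the QRD of X~(n)
    s_blk      : 1-D known reference sequence, long enough to slide by max(n_off_grid)
    n_off_grid : iterable of candidate (non-negative) integer timing offsets
    Returns (best_eta, best_n_off, best_w_off).
    """
    N = Qt.shape[0]
    best = (-np.inf, None, None)
    for n_off in n_off_grid:
        s_seg = s_blk[n_off:n_off + N]            # s(nN + n_sym + n_off), n_sym = 1..N
        denom = np.sum(np.abs(s_seg) ** 2)
        # a[k, :] = q~*(k) s(nN + k + n_off); the sum over k against e^{+j w k} is
        # computed as the conjugate of a zero-padded FFT of the conjugated sequence.
        a = Qt.conj() * s_seg[:, None]
        spec = np.conj(np.fft.fft(a.conj(), n=n_fft, axis=0))
        eta = np.sum(np.abs(spec) ** 2, axis=1) / denom   # (Eq79)/(Eq80) on the w grid
        k = int(np.argmax(eta))
        if eta[k] > best[0]:
            best = (eta[k], n_off, 2.0 * np.pi * k / n_fft)
    return best
```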
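
A minimal sketch of the property-mapping recursion (Eq82)-(Eq84), specialized to the constant-modulus set of (Eq85), is given below; the names Xt (for {tilde over (X)}(n)) and wt are assumptions, and a small floor guards the element-wise sign against zero-valued combiner outputs.

```python
import numpy as np

def scpu_bls_constant_modulus(Xt, wt, mu=1.0, eps=1e-12):
    """One block of the SCPU-BLS constant-modulus recursion (Eq82)-(Eq85).

    Xt : (N, M1+1) reduced-dimension data matrix X~(n)
    wt : (M1+1,)   enhanced combiner weights w~
    """
    y = Xt @ wt                               # (Eq82): combiner output over the block
    s_hat = y / np.maximum(np.abs(y), eps)    # (Eq85): project onto the unit-modulus set
    # (Eq84): blend toward the block least-squares fit to the projected reference
    wt_new = (1 - mu) * wt + mu * (np.linalg.pinv(Xt) @ s_hat)
    return wt_new, s_hat
```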
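
For the DMP update (Eq88)-(Eq90), the sketch below forms the two reduced-dimension Gram matrices and takes the dominant generalized eigenvector; the use of scipy.linalg.eigh, the names Us, Uperp, Xt, and the small diagonal load that keeps the denominator matrix invertible are assumptions of the sketch.

```python
import numpy as np
from scipy.linalg import eigh

def dmp_weights(Xt, Us, Uperp, load=1e-9):
    """Dominant-mode prediction update (Eq88)-(Eq90).

    Xt    : (N, M1+1)  reduced-dimension data matrix X~(n)
    Us    : (N, Ns)    orthonormal basis for the subspace believed to contain s(n)
    Uperp : (N, Nperp) complementary orthonormal basis
    Returns (dominant eigenvalue, dominant combiner weights); the eigenvalue is an
    estimate of the combiner-output SINR, per the discussion following (Eq90).
    """
    Xs = Us.conj().T @ Xt                     # X~_s(n)    in (Eq89)
    Xp = Uperp.conj().T @ Xt                  # X~_perp(n) in (Eq89)
    Rs = Xs.conj().T @ Xs                     # numerator Gram matrix
    Rp = Xp.conj().T @ Xp + load * np.eye(Xt.shape[1])   # denominator Gram matrix (loaded)
    # (Eq90): generalized eigenproblem Rs w = gamma Rp w; eigh returns ascending eigenvalues
    gammas, vecs = eigh(Rs, Rp)
    return gammas[-1], vecs[:, -1]
```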

Extensions of all of these algorithms to fully-coupled and uncoupled multiport SCPU methods are straightforward.

It should also be recognized that, while all of the techniques described here are defined over the complex field, such that w∈ℂM, they are equally applicable to combiners and optimization metrics defined over other fields, including the real field, e.g., w∈ℝM, and Galois fields usable in integer field codes. In each case, the subspace constraint
M0T(n)w′∝M0T(n)w=M0T(n)wg0, g0∈𝔽,  (Eq97)
where 𝔽 is the field in which each element of w is defined, results in a valid SCPU method. The method is also applicable to linear-conjugate-linear (LCL) methods

\underline{M}_0^T(n)\,\underline{w}' \propto \underline{M}_0^T(n)\,\underline{w} = \underline{M}_0^T(n)\,\underline{w}\,\underline{g}_0, \qquad \underline{w} = \tfrac{1}{2}\begin{pmatrix} w \\ w^* \end{pmatrix}, \quad \underline{M}(n) = \begin{bmatrix} M(n) & \\ & M(n) \end{bmatrix}, \quad \underline{g}_0 = \tfrac{1}{2}\begin{pmatrix} g_0 \\ g_0^* \end{pmatrix},   (Eq98)
which allows the SCPU method to be applied to optimization functions that are more complicated functions of complex variables. Moreover, the techniques are applicable to processors that implement nonlinear functions on the data path as well as the adapt path, if the original optimization constraint is a linear function of w. In a preferred embodiment, the step of performing a dimensionality reduction comprising a linear transformation of the processor parameters being adapted from M-dimensions to (M1+L)-dimensions in each adaptation event comprises applying a subspace constraint described in the form:
M0T(n)w′∝M0T(n)w  (Eq. 99)
=M0T(n)wg0.  (Eq. 100)

While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the embodiments illustrated.

Some of the above-described functions may be composed of instructions, or depend upon and use data, that are stored on storage media (e.g., computer-readable medium). The instructions and/or data may be retrieved and executed by the processor. Some examples of storage media are memory devices, tapes, disks, and the like. The instructions are operational when executed by the processor to direct the processor to operate in accord with the invention; and the data is used when it forms part of any instruction or result therefrom.

The terms “computer-readable storage medium” and “computer-readable storage media” as used herein refer to any medium or media that participate in providing instructions to a CPU for execution. Such media can take many forms, including, but not limited to, non-volatile (also known as ‘static’ or ‘long-term’) media, volatile media and transmission media. Non-volatile media include, for example, one or more optical or magnetic disks, such as a fixed disk, or a hard drive. Volatile media include dynamic memory, such as system RAM or transmission or bus ‘buffers’. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes.

“Memory”, as used herein when referencing to computers, is the functional hardware that for the period of use retains a specific structure which can be and is used by the computer to represent the coding, whether data or instruction, which the computer uses to perform its function. Memory thus can be volatile or static, and be any of a RAM, a PROM, an EPROM, an EEPROM, a FLASHEPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read data, instructions, or both.

“I/O”, or ‘input/output’, is any means whereby the computer can exchange information with the world external to the computer. This can include a wired, wireless, acoustic, infrared, or other communications link (including specifically voice or data telephony); a keyboard, tablet, camera, video input, audio input, pen, or other sensor; and a display (2D or 3D, plasma, LED, CRT, tactile, or audio). That which allows another device, or a human, to interact with and exchange data with, or control and command, a computer, is an I/O device, without which any computer (or human) is essentially in a solipsistic state.

The above description of the invention is illustrative and not restrictive. Many variations of the invention may become apparent to those of skill in the art upon review of this disclosure. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.

While the present invention has been described in connection with at least one preferred embodiment, these descriptions are not intended to limit the scope of the invention to the particular forms (whether elements of any device or architecture, or steps of any method) set forth herein. It will be further understood that the elements, or steps in methods, of the invention are not necessarily limited to the discrete elements or steps, or the precise connectivity of the elements or order of the steps described, particularly where elements or steps which are part of the prior art are not referenced (and are not claimed). To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art.

Claims

1. A method for digital signal processing on a device comprising at least one adaptive processor both employing large numbers of adaptation weights (‘M’) and implementing partial-update using adapt-path operations to tune said adaptive processor according to any adaptation criterion, to process a set of signals during and after tuning, said method comprising:

operating with lower update set sizes for each adaptation event having M-dimensions by: performing a dimensionality reduction comprising a linear transformation of any processor parameters being adapted from M-dimensions to (M1+L)-dimensions in said adaptation event, wherein M1 weights are adapted without constraints; and, M0=M−M1 weights are subjected to L soft constraints that force those M0 weights into an L-dimensional subspace spanned by those M0 weights; applying said dimensionality reduction to any input data; and, then adapting the reduced-dimensionality weights using exactly that same optimization strategy employed by the adaptive processor, except with said input data to which said dimensionality reduction has been applied;
thereby effecting in said digital signal processing on said adaptive processor any of the set of: reduction or elimination of misadjustment effects, reduction in complexity and therefore cost of processing, reducing the repetitive processing and comparison, improving efficiency, and providing an inherently more stable approach on a block-by-block basis without introducing explicit hard constraints that lead to jitter.

2. A method as in claim 1, wherein the step of performing a dimensionality reduction comprising a linear transformation of the processor parameters being adapted from M-dimensions to (M1+L)-dimensions in each adaptation event further comprises: w ← arg optw′ F(w′; n); and,

replacing any hard linear constraint M0(n)w′=M0(n)w that forces M0=M−M1 weights to remain at the same value between adaptation blocks with a softer subspace constraint M0T(n)w′∝M0T(n)w=M0T(n)wg0;
jointly adjusting the scalar held-set multiplier g0 and the update-set weights w1=M1T(n)w to optimize an unconstrained criterion F(w;n) over adapt block n; and,
efficiently computing a full output weight vector w=M1(n)w1+M0(n)w0g0
in the adaptation event.

3. A method as in claim 2 wherein the full output weight vector is efficiently computed using vector-scalar multiplies and multiply-free multiplexing (MUX) operations.

4. A method as in claim 2, wherein the step of jointly adjusting the scalar held-set multiplier g0 and the update-set weights w1=M1T(n)w to optimize an unconstrained criterion metric F(w;n) over adapt block n further comprises: {tilde over (w)}=(w1; g0), using optimization formula {tilde over (w)}←arg opt{tilde over (w)}′∈ℂM1+1 F({tilde over (w)}′; n)

using a (M1+1)×1 enhanced weight vector
over each data block to produce a full output weight vector w=M1(n)w1+M0(n)w0g0.

5. A method as in claim 4 wherein the full output weight vector is efficiently computed using vector-scalar multiplies and multiply-free multiplexing (MUX) operations.

6. A method as in claim 1 for partially blind partial update methods in which a reference vector s(n) is partially known at the receive processor over adapt block n, as the reference vector has any of an unknown carrier and timing offset relative to the sequence contained in the input data sequence.

7. A method as in claim 6 for any of timing and carrier tracking methods, wherein s(n) has an unknown offset between the input data and an original transmitted signal containing the reference signal, further comprising: {{circumflex over (ω)}off(n), {circumflex over (n)}off(n)}=arg maxωoff,noff η(ωoff,noff;n), η(ωoff,noff;n)=∥{tilde over (Q)}H(n)(s(noff;n)∘δ(ωoff))∥22/∥s(noff;n)∥22=∥Σnsym=1N {tilde over (q)}*(nsym)s(nN+nsym+noff)ejωoffnsym∥22/Σnsym=1N |s(nN+nsym+noff)|2,

replacing the nonblind weight adaptation algorithm {tilde over (w)}←(1−μ){tilde over (w)}+μ{tilde over (X)}†(n)s(n), 0<μ≤1 with {tilde over (w)}←(1−μ){tilde over (w)}+μ{tilde over (X)}†(n)(s({circumflex over (n)}off;n)∘δ({circumflex over (ω)}off)) s(noff;n)=[s(nN+nsym+noff)]nsym=1N δ(ωoff)=[ejωoffnsym]nsym=1N
where {s(nsym)} is a component of the transmitted signal that is known over the adapt block except for the unknown offset;
and,
optimizing the unknown offset over each adapt block by setting
where {tilde over (Q)}=[{tilde over (q)}(1)... {tilde over (q)}(N)]T is the Q-component of the QRD of {tilde over (X)}(n)
using fast Fourier transform (FFT) methods if the frequency offset ω is completely unknown (acquisition phases), and using Gauss-Newton or Newton methods if the frequency offset ω is known closely (tracking phases).

8. A method as in claim 1 for fully blind methods in which a reference vector s(n) is unknown but has some known, exploitable structure.

9. A method as in claim 1 for property-mapping methods in which a reference vector s(n) is a member of a known property set, said method further comprising: {tilde over (w)}←(1−μ){tilde over (w)}+μ{tilde over (X)}†(n)s(n), 0<μ≤1 with y(n)={tilde over (X)}(n){tilde over (w)}, {circumflex over (s)}(n)=arg mins∈𝒟(n) ∥s−y(n)∥, and {tilde over (w)}←(1−μ){tilde over (w)}+μ{tilde over (X)}†(n){circumflex over (s)}(n);

replacing the non-blind weight adaptation algorithm
where 𝒟(n) is a desired signal set, potentially variable as a function of adapt block n, that s(n) is known to belong to.

10. A method as in claim 1 for dominant-mode prediction (DMP) methods, in which a reference vector s(n) is known to be substantively present in a linear subspace with any of a known or estimable structure, said method further comprising: {tilde over (w)}=arg maxw∈ℂM1+1 {tilde over (γ)}(w;n), {tilde over (γ)}(w;n)=[wH({tilde over (X)}sH(n){tilde over (X)}s(n))w]/[wH({tilde over (X)}⊥H(n){tilde over (X)}⊥(n))w], where {tilde over (X)}s(n)=UsH(n){tilde over (X)}(n) and {tilde over (X)}⊥(n)=U⊥H(n){tilde over (X)}(n)

using an enhanced weight algorithm effecting
wherein the enhanced combiner weights {tilde over (w)}max and the maximal value of {tilde over (γ)}max, are equal to the dominant solution {{tilde over (γ)}1,{tilde over (w)}1} of the DMP eigenequation {tilde over (γ)}m({tilde over (X)}⊥H(n){tilde over (X)}⊥(n)){tilde over (w)}m=({tilde over (X)}sH(n){tilde over (X)}s(n)){tilde over (w)}m, {tilde over (γ)}m≥{tilde over (γ)}m+1
and the dominant eigenvalue {tilde over (γ)}1 also provides an estimate of the SINR of the combiner output signal,
such that the dominant eigenvalue {tilde over (γ)}1 also is usable both to detect the target signal, and to search over postulated subspaces to find the subspace that most closely contains or rejects s(n).

11. A method as in claim 1 for conjugate self-coherence restoral (C-SCORE) methods in which a reference vector s(n) is known to have substantive conjugate self-coherence at some known or estimable frequency offset ω, such that |Σnsym=1N s2(nN+nsym)e−jωnsym|/Σnsym=1N |s(nN−nsym)|2≈1, said method further comprising: {tilde over (w)}=arg maxw∈ℂM1+1 {tilde over (ρ)}(w|ω;n), {tilde over (ρ)}(w|ω;n)=|wH({tilde over (X)}H(n)Δ(ω){tilde over (X)}*(n))w*|/[wH({tilde over (X)}H(n){tilde over (X)}(n))w], Δ(ω)=diag{ejωnsym}nsym=1N

using for the enhanced weight algorithm for a postulated twice-carrier offset ω,
wherein the enhanced combiner weights {tilde over (w)}max and the maximal value {tilde over (ρ)}max(ω;n) are equal to the dominant solution {{tilde over (ρ)}1(ω),{tilde over (w)}1(ω)} of the C-SCORE pseudo-eigenequation, {tilde over (ρ)}m(ω)({tilde over (X)}H(n){tilde over (X)}(n)){tilde over (w)}m(ω)=({tilde over (X)}H(n)Δ(ω){tilde over (X)}*(n)){tilde over (w)}*m(ω), {tilde over (ρ)}m≥{tilde over (ρ)}m+1.

12. A method as in claim 1, wherein any adaptation criterion further comprise any of a signal estimation, parameter estimation, measured quality and any combination thereof.

13. A method for digital signal processing implementing partial-update methods (PUMs) in any adaptive processor that adjusts weights to optimize an adaptation criterion using any of a signal estimation and a parameter estimation algorithm when said adaptive processor employs large numbers of adaptation weights for any adaptation criterion of the set of signal estimation, parameter estimation, measured quality, and any combination thereof for “adapt-path” operations used to tune the adaptive processor, not “data-path” operations used by the adaptive processor during and after tuning, said method comprising:

for each adaptation event comprising a partial-update effected by the adaptive processor for an adapt path operation having M-dimensions, operating with lower update set sizes by: performing a dimensionality reduction comprising a linear transformation of the processor parameters being adapted from M-dimensions to (M1+L)-dimensions in said partial update, wherein M1 weights are adapted without constraints; and, M0=M−M1 weights are subjected to L soft constraints that force those M0 weights into an L-dimensional subspace spanned by those M0 weights; applying said dimensionality reduction to any input data using that linear transformation; and, then adapting the reduced-dimensionality weights using exactly that same optimization strategy employed by the adaptive processor, except with said input data to which said dimensionality reduction has been applied;
thereby effecting in said digital signal processing on said adaptive processor any of the set of: reduction or elimination of misadjustment effects, reduction in complexity and therefore cost of processing, reducing the repetitive processing and comparison, improving efficiency, and providing an inherently more stable approach on a block-by-block basis without introducing explicit hard constraints that lead to jitter.

14. A method as in claim 13, wherein the step of performing a dimensionality reduction comprising a linear transformation of the processor parameters being adapted from M-dimensions to (M1+L)-dimensions in each adaptation event further comprises: w ← arg optw′ F(w′; n)

replacing a hard linear constraint describable in the form M0(n)w′=M0(n)w with a softer subspace constraint described in the form M0T(n)w′∝M0T(n)w=M0T(n)wg0;
jointly adjusting the scalar held-set multiplier g0 and the update-set weights w1=M1T(n)w to optimize an unconstrained criterion metric F(w;n) over adapt block n, said metric determined by
subject to additional linear constraint (w′)m∈M0(n)=(w)m∈M0(n);
and,
using the resulting full output weight vector in the adaptation event.

15. A method as in claim 13 for partial-update affine projections, further comprising: {tilde over (w)}=(w1; 1);

separating w into update-set and held-set components w1 and w0 using multiply-free demultiplexing (DMX) operations;
for subspace-constrained partial-update affine-projection/normalized least-mean-squares (SCPU-AP/NLMS) and subspace-constrained partial-update block least-squares (SCPU-BLS) algorithms with μ<1, constructing a (M1+1)×1 dimensional weight vector
separating X(n) into update-set and held-set components X1(n) and X0(n) using multiply-free columnar DMX operations;
computing y0(n)=X0(n)w0;
specifically for any SCPU-AP/NLMS algorithm, further computing y1(n)=X1(n)w1 and y(n)=y1(n)+y0(n);
constructing a N×(M1+1) dimensional SCPU data matrix {tilde over (X)}(n)=[X1(n)y0(n)];
optimizing {tilde over (w)} using the original unconstrained algorithm, with dimensionality reduced from M to M1+1;
updating w1 and w0 using w1←[({tilde over (w)})m]m=1M1, w0←({tilde over (w)})M1+1w0; and, reconstructing the linear combiner weights using w=M1(n)w1+M0(n)w0.

16. A method as in claim 13 wherein the adaptation arises from and must apply over multiport digital signal processing hardware using any of an affine-projection and a block least-squares adaptation algorithm.

17. A method as in claim 16 wherein the multiport digital signal processing is uncoupled.

18. A method as in claim 16 wherein the multiport digital signal processing is fully-coupled.

19. A method for digital signal processing implementing partial-update methods on devices employing at least one adaptive processor that employs large numbers of adaptation weights for any adaptation criterion of the set of signal estimation, parameter estimation, measured quality, and any combination thereof for "adapt-path" operations used to tune the adaptive processor, not "data-path" operations used by the adaptive processor during and after tuning, said method comprising:

operating with lower update set sizes for each adaptation event having M-dimensions by: performing a dimensionality reduction comprising a linear transformation of the processor parameters being adapted from M-dimensions to (M1+L)-dimensions in that adaptation event wherein: M1 weights are adapted without constraints; and, M0=M−M1 weights are subjected to L soft constraints that force those M0 weights into an L-dimensional subspace spanned by those M0 weights; applying said dimensionality reduction to any input data using that linear transformation; and,
then adapting the reduced-dimensionality weights using substantively the same optimization strategy employed by the adaptive processor for the input data to which the same dimensionality reduction has been applied;
thereby effecting in said digital signal processing on said adaptive processor any of the set of: reduction or elimination of misadjustment effects, reduction in complexity and therefore cost of processing, reducing the repetitive processing and comparison, improving efficiency, and providing an inherently more stable approach on a block-by-block basis without introducing explicit hard constraints that lead to jitter.

20. A method for digital signal processing implementing partial-update methods on devices employing at least one adaptive processor that employs large numbers of adaptation weights for any adaptation criterion of the set of signal estimation, parameter estimation, measured quality, and any combination thereof for “adapt-path” operations used to tune the adaptive processor, not “data-path” operations used by the adaptive processor during and after tuning, when there exists any of hardware, software, or combined hardware and software differentiation in “adapt-path” operations used to tune the adaptive processor and “data-path” operations used by the adaptive processor during and after tuning, said method comprising:

operating with lower update set sizes for each adaptation event having M-dimensions, by: performing a dimensionality reduction comprising a linear transformation of the processor parameters being adapted from M-dimensions to (M1+L)-dimensions in that adaptation event wherein: M1 weights are adapted without constraints; and, M0=M−M1 weights are subjected to L soft constraints that force those M0 weights into an L-dimensional subspace spanned by those M0 weights; applying said dimensionality reduction to any input data using that linear transformation; and, then adapting the reduced-dimensionality weights using substantively that optimization strategy employed by the adaptive processor for the input data to which the same dimensionality reduction has been applied;
thereby effecting in said digital signal processing on said adaptive processor any of the set of: reduction or elimination of misadjustment effects, reduction in complexity and therefore cost of processing, reducing the repetitive processing and comparison, improving efficiency, and providing an inherently more stable approach on a block-by-block basis without introducing explicit hard constraints that lead to jitter.
References Cited
U.S. Patent Documents
4422175 December 20, 1983 Bingham
5477534 December 19, 1995 Kusano
9231561 January 5, 2016 Choi
20040015529 January 22, 2004 Tanrikulu
Other references
  • Godavarti et al., “Partial Update LMS Algorithms”, IEEE 2005, pp. 2382-2399.
  • J. Nagumo, A. Noda, “A Learning Method for System Identification,” IEEE Trans. Auto. Control, vol. AC-12, No. 3, pp. 282-287, Jun. 1967.
  • S. Douglas, "Simplified Stochastic Gradient Adaptive Filters using Partial Updating", Proc. Sixth IEEE Digital Signal Processing Wkshp, pp. 265-268, Oct. 1994.
  • S. Douglas, “Analysis and Implementation of the Max-NLMS Adaptive Filter”, Proc. 29th Asilomar Conf. Signals, Systems and Computers, vol. 1, pp. 659-663, Oct. 1995.
  • D. Jones, “A Normalized Constant Modulus Algorithm”, Proc. 29th Asilomar Conf. Signals, Systems and Computers, vol. 1, pp. 694-697, Oct. 1995.
  • S. Douglas, “Adaptive Filters Employing Partial Updates”, IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, vol. 44, No. 3, pp. 206-219, Mar. 1997.
  • J. Minglu, “Partial Updating RLS Algorithm”, Proc. 7th Int'l Conf. Signal Processing, vol. 1, pp. 392-395, Aug. 2004.
  • G. Deng, “Partial Update and Sparse Adaptive Filters”, IET Signal Processing, vol. 1, No. 1, pp. 9-17, Mar. 2007.
Patent History
Patent number: 9928212
Type: Grant
Filed: Nov 1, 2014
Date of Patent: Mar 27, 2018
Patent Publication Number: 20170255593
Inventor: Brian G. Agee (San Jose, CA)
Primary Examiner: Chuong D Ngo
Application Number: 14/121,895
Classifications
Current U.S. Class: With Control Of Equalizer And/or Delay Network (333/18)
International Classification: G06F 17/14 (20060101); H03H 21/00 (20060101); G06F 17/16 (20060101);