Similar Documents
20 similar documents retrieved.
1.
Importance sampling for characterizing STAP detectors   (Cited in total: 1; self-citations: 0; citations by others: 1)
This paper describes the development of adaptive importance sampling (IS) techniques for estimating false alarm probabilities of detectors that use space-time adaptive processing (STAP) algorithms. Fast simulation using IS methods has been notably successful in the study of conventional constant false alarm rate (CFAR) radar detectors, and in several other applications. The principal objectives here are to examine the viability of using these methods for STAP detectors, develop them into powerful analysis and design algorithms and, in the long term, use them for synthesizing novel detection structures. The adaptive matched filter (AMF) detector has been analyzed successfully using fast simulation. Of two biasing methods considered, one is implemented and shown to yield good results. The important problem of detector threshold determination is also addressed, with matching outcome. As an illustration of the power of these methods, two variants of the square-law AMF detector that are thought to be robust under heterogeneous clutter conditions have also been successfully investigated. These are the envelope-law and geometric-mean STAP detectors. Their CFAR property is established and performance evaluated. It turns out that the variants have detection performances better than those of the AMF detector for training data contaminated by interferers. In summary, the work reported here paves the way for development of advanced estimation techniques that can facilitate design of powerful and robust detection algorithms.
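A minimal sketch (not the paper's algorithm) of how an importance-sampling estimate of a false-alarm probability is formed: samples are drawn from a biased (variance-inflated) noise density, and each threshold exceedance is weighted by the likelihood ratio between the true and biasing densities. The quadratic detector statistic, the scaling factor `k`, and the threshold `eta` below are illustrative assumptions.

```python
import numpy as np

def is_pfa_estimate(n_samples, eta, k=2.0, dim=4, rng=None):
    """Importance-sampling estimate of P(||x||^2 > eta) for x ~ N(0, I).

    Samples are drawn from the biasing density N(0, k*I); each exceedance is
    weighted by the ratio of the true density to the biasing density.
    """
    rng = np.random.default_rng(rng)
    # Draw from the biasing (inflated-variance) density.
    x = rng.normal(scale=np.sqrt(k), size=(n_samples, dim))
    t = np.sum(x**2, axis=1)                      # detector statistic
    # Likelihood ratio w(x) = f_true(x) / f_bias(x) for the scaled Gaussian.
    log_w = -0.5 * t * (1.0 - 1.0 / k) + 0.5 * dim * np.log(k)
    w = np.exp(log_w)
    return np.mean((t > eta) * w)                 # unbiased P_fa estimate

# Example: a rare false-alarm probability estimated with comparatively few samples.
print(is_pfa_estimate(10_000, eta=40.0, k=3.0, rng=1))
```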

2.
To eliminate the influence of outliers and noise on the results of exterior (range) measurement data processing, a robust filtering algorithm based on the wavelet transform is presented, tailored to the practical requirements of such processing. A moving-median filter is first applied to reject outliers in the raw data; wavelet-coefficient denoising combined with empirical Wiener threshold filtering is then used to suppress the remaining noise. Simulations and practical engineering applications show that the algorithm effectively rejects outliers and suppresses noise while preserving characteristic segments and useful information, and exhibits good robustness.
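A minimal sketch of the general two-stage idea (moving-median outlier rejection followed by wavelet-threshold denoising). The window length, wavelet, and universal threshold are illustrative assumptions rather than the paper's parameters, and the PyWavelets package is assumed to be available.

```python
import numpy as np
from scipy.signal import medfilt
import pywt

def robust_wavelet_filter(x, kernel=7, wavelet="db4", level=4):
    """Reject outliers with a moving median, then denoise with wavelet thresholding."""
    med = medfilt(x, kernel_size=kernel)
    resid = x - med
    # Flag samples whose residual exceeds a robust (MAD-based) bound as outliers.
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    cleaned = np.where(np.abs(resid) > 3.0 * sigma, med, x)
    # Wavelet denoising with a soft universal threshold on the detail coefficients.
    coeffs = pywt.wavedec(cleaned, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(len(cleaned)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```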

3.
Robust adaptive matched filtering (AMF), whereby outlier data vectors are censored from the covariance matrix estimate, is considered in a maximum likelihood estimation (MLE) setting. It is known that outlier data vectors whose steering vector is highly correlated with the desired steering vector can significantly degrade the performance of AMF algorithms such as sample matrix inversion (SMI) or fast maximum likelihood (FML). Four new algorithms that censor outliers are presented, derived via approximation to the MLE solution. Two of the algorithms use the SMI and two use the FML to estimate the unknown underlying covariance matrix. Computer simulations demonstrate the relative effectiveness of the four algorithms versus each other and also versus the SMI and FML algorithms, both with and without outliers present. It is shown that one of the censoring algorithms, called the reiterative censored fast maximum likelihood (CFML) technique, is significantly superior to the other three censoring methods in stressful outlier scenarios.
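A hedged sketch of one common way to censor outlier training vectors before forming a sample covariance matrix: snapshots are ranked by their generalized inner product and the largest are discarded. This is a generic illustration of covariance censoring, not the specific CFML recursion of the paper.

```python
import numpy as np

def censored_scm(X, n_censor):
    """Sample covariance from training snapshots X (columns) after censoring.

    Snapshots are ranked by their generalized inner product x^H R^-1 x computed
    with an initial (uncensored, diagonally loaded) estimate; the n_censor
    largest are discarded before the final covariance is formed.
    """
    N, K = X.shape
    R0 = X @ X.conj().T / K + 1e-6 * np.eye(N)    # initial loaded estimate
    gip = np.real(np.einsum("ik,ik->k", X.conj(), np.linalg.solve(R0, X)))
    keep = np.argsort(gip)[: K - n_censor]        # drop the largest-GIP snapshots
    Xk = X[:, keep]
    return Xk @ Xk.conj().T / Xk.shape[1]
```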

4.
Because classical initial orbit determination methods break down for very short arcs, a Too-Short-Arc (TSA) orbit determination method based on particle swarm optimization is established. The problem is recast as two nested three-variable optimization problems with (a, e, M) as the variables to be optimized, which keeps the dimensionality low while decoupling the computed results from the observation data. Since the outlier-rejection techniques used in routine processing of real measurements are not applicable to particle swarm optimization, robust estimation is adopted: a least-median-of-squares criterion is used in the fitness function, yielding a robust short-arc method. The algorithm was implemented in MATLAB with default parameters for data verification. Numerical verification on real measurements shows that the method still produces valid results for arcs shorter than 30 s for near-circular-orbit targets, with an error of only 16 km for a 10 s arc. This accuracy meets the needs of subsequent processing, and the method is robust, with a high breakdown point.

5.
Deep learning-based methods have achieved remarkable success in object detection, but this success requires the availability of a large number of training images. Collecting sufficient training images is difficult when detecting damage to airplane engines. Directly augmenting images by rotation, flipping, and random cropping cannot further improve the generalization ability of existing deep models. We propose an interactive augmentation method for airplane engine damage images using a prior-guide...

6.
Reiterative median cascaded canceler for robust adaptive array processing   (Cited in total: 1; self-citations: 0; citations by others: 1)
A new robust adaptive processor based on reiterative application of the median cascaded canceler (MCC) is presented and called the reiterative median cascaded canceler (RMCC). It is shown that the RMCC processor is a robust replacement for the sample matrix inversion (SMI) adaptive processor and for its equivalent implementations. The MCC, though a robust adaptive processor, has a convergence rate that is dependent on the rank of the input interference-plus-noise covariance matrix for a given number of adaptive degrees of freedom (DOF), N. In contrast, the RMCC, using the same training data as the MCC, exhibits the highly desirable combination of: 1) convergence-robustness to outliers/targets in adaptive weight training data, like the MCC, and 2) fast convergence performance that is independent of the input interference-plus-noise covariance matrix, unlike the MCC. For a number of representative examples, the RMCC is shown to converge using ~ 2.8N samples for any interference rank value as compared with ~ 2N samples for the SMI algorithm. However, the SMI algorithm requires considerably more samples to converge in the presence of outliers/targets, whereas the RMCC does not. Both simulated data and measured airborne radar data from the multichannel airborne radar measurements (MCARM) space-time adaptive processing (STAP) database are used to illustrate performance improvements over SMI methods.

7.
宫晓琳  房建成 《航空学报》2009,30(12):2348-2353
To address the problem that Global Positioning System (GPS) outliers degrade filtering accuracy and stability in Position and Orientation System (POS) applications, an outlier-resistant Kalman filtering (KF) method based on innovation orthogonality is applied to POS data processing. The method first decides whether outliers are present in the GPS position and velocity data by checking whether the orthogonality of the KF innovation sequence has been lost, and then uses an activation function to impose a weighted limit on measurements containing outliers, so that the corrected innovation sequence retains its orthogonality; in this way GPS outliers are identified and corrected. Vehicle test results show that the method effectively identifies and suppresses the adverse effect of GPS outliers on filtering accuracy and stability, improving position and velocity accuracy at GPS outlier epochs by one to two orders of magnitude over the standard KF.
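A hedged sketch of the general idea of innovation-based outlier handling in a Kalman filter update: the normalized innovation is tested against a chi-square-like gate, and flagged measurements are down-weighted (here by inflating the measurement covariance) rather than used at full weight. The gate value and the inverse-proportional weighting are illustrative assumptions, not the specific activation function of the paper.

```python
import numpy as np

def robust_kf_update(x, P, z, H, R, gate=9.21):
    """Kalman measurement update with innovation-based outlier down-weighting."""
    v = z - H @ x                                  # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    d2 = float(v @ np.linalg.solve(S, v))          # normalized innovation squared
    if d2 > gate:
        # Suspected outlier: inflate R so the measurement is heavily de-weighted.
        R = R * (d2 / gate)
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ v
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```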

8.
As data dimensionality keeps growing in the big-data era, dimensionality reduction becomes increasingly important. The classical principal component analysis (PCA) model has proven to be an effective dimensionality-reduction method, but it performs poorly on nonlinear data and on data containing noise and outliers. This paper therefore proposes a robust probabilistic kernel PCA model that combines the kernel method with the maximum-likelihood framework of the Gaussian latent-variable model and adopts the multivariate t distribution as the prior, so as to address all three weaknesses of PCA simultaneously. A mixture of robust probabilistic kernel PCA models is further proposed, allowing the model to be applied directly to dimensionality reduction and cluster analysis of mixed nonlinear data. Experiments on several datasets show that, compared with the standard mixture of probabilistic kernel PCA models, the proposed model achieves higher clustering accuracy.

9.
Median cascaded canceller for robust adaptive array processing   (Cited in total: 2; self-citations: 0; citations by others: 2)
A median cascaded canceller (MCC) is introduced as a robust multichannel adaptive array processor. Compared with sample matrix inversion (SMI) methods, it is shown to significantly reduce the deleterious effects of impulsive noise spikes (outliers) on the convergence of performance metrics such as (normalized) output residue power and signal-to-interference-plus-noise ratio (SINR). For the case of no outliers, the MCC convergence performance remains commensurate with SMI methods for several practical interference scenarios. It is shown that the MCC offers natural protection against desired signal (target) cancellation when weight training data contains strong target components. In addition, results are shown for a high-fidelity, simulated, barrage jamming and nonhomogeneous clutter environment. Here the MCC is used in a space-time adaptive processing (STAP) configuration for airborne radar interference mitigation. Results indicate the MCC produces a marked SINR performance improvement over SMI methods.
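A heavily hedged sketch of the core substitution behind a median-based canceler: in each two-input canceler of a Gram-Schmidt-style cascade, the sample averages used to estimate the cancellation weight are replaced by sample medians (applied separately to real and imaginary parts), which limits the influence of impulsive outliers. This is an illustrative single stage under that assumption, not the full MCC structure of the paper.

```python
import numpy as np

def median_canceler_stage(d, x):
    """One two-input canceler: subtract from d the component correlated with x.

    The usual sample-mean weight E[d x*]/E[|x|^2] is replaced by median-based
    estimates, so isolated spikes in the training data barely move the weight.
    """
    num = d * np.conj(x)
    den = np.median(np.abs(x) ** 2)
    w = (np.median(num.real) + 1j * np.median(num.imag)) / den
    return d - w * x   # residual passed to the next stage of the cascade
```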

10.
A multi-station data fusion algorithm based on fuzzy closeness degree is proposed. Using the concept of fuzzy closeness together with matrix eigenvector theory, the algorithm determines the weight of each measurement device in the fusion, makes full use of the measurement data, and improves target-tracking accuracy. Compared with traditional methods, it is computationally simple and easy to implement in engineering practice.

11.
The MAX family of constant-false-alarm-rate (CFAR) detectors is introduced as a generalization of the greatest-of CFAR (GO-CFAR) or MX mean-level detector (MX-MLD). Members of the MAX family use local estimators based on order statistics and generate both a near-range and a far-range noise-level estimate. Local estimates are always combined through a maximum operation; this ensures false-alarm control at clutter edges. At the same time, order-statistic-based estimators result in a high-resolution detector. A complete detection analysis is provided for Swerling II (SWII) targets and a reference channel contaminated by large outliers. Results are presented for the MX censored MLD (MX-CMLD) operating in clutter. The MX order statistic detector (MX-OSD), based on only a single order statistic per window, is analyzed, and curves showing the required threshold, CFAR loss, optimum censoring point, and signal-to-noise ratio (SNR) loss in the presence of outliers are given. Simulations are used to compare the dynamic responses of various MX-OSD detectors in a clutter and a multiple-target environment.
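A minimal sketch of a greatest-of CFAR detector that uses an order statistic from each half of the reference window (the general MX-OSD idea). The window size, order index, and scale factor are illustrative assumptions, not the values analyzed in the paper.

```python
import numpy as np

def os_go_cfar(x, cell, half=16, guard=2, k=12, alpha=8.0):
    """Greatest-of order-statistic CFAR test for the cell under test x[cell].

    The k-th smallest sample of the leading and of the lagging reference
    window serves as a local noise estimate; the larger of the two sets the
    threshold, which controls false alarms at clutter edges.
    """
    lead = x[cell - guard - half : cell - guard]
    lag = x[cell + guard + 1 : cell + guard + 1 + half]
    z = max(np.sort(lead)[k - 1], np.sort(lag)[k - 1])   # greatest-of combination
    return x[cell] > alpha * z

# Example: square-law noise samples with one strong cell under test.
rng = np.random.default_rng(0)
x = rng.exponential(1.0, 200)
x[100] = 60.0
print(os_go_cfar(x, 100))
```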

12.
Methods for identifying and rejecting outliers in geomagnetic measurements are studied in depth. The basic principles of outlier identification and rejection are analyzed, and different rejection methods are compared using models of both isolated and patch-type (clustered) outliers. Using measured geomagnetic field data, the parameters of the rejection methods are optimized and the outlier-repair performance is verified from several perspectives. Simulation results show that, when the observations are total-intensity geomagnetic measurements, the optimized least-squares B-spline approximation identifies outliers effectively and repairs them well.
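A minimal sketch of outlier identification by least-squares B-spline approximation: a smoothing spline is fitted to the series, samples whose residuals exceed a robust bound are flagged, and flagged values are replaced by the spline value. The smoothing factor and the 3-sigma rule are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def bspline_outlier_repair(t, y, smooth=None, nsig=3.0):
    """Fit a least-squares B-spline, flag large residuals, and repair them."""
    tck = splrep(t, y, s=smooth if smooth is not None else len(y))
    fit = splev(t, tck)
    resid = y - fit
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust scale
    outliers = np.abs(resid) > nsig * sigma
    repaired = np.where(outliers, fit, y)
    return repaired, outliers
```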

13.
《中国航空学报》2023,36(8):269-283
Most current object detection algorithms use models pretrained on ImageNet and then fine-tuned, which achieves good performance for general object detectors. However, in the field of remote sensing image object detection, because the data used for pretraining differ significantly from remote sensing data, it is meaningful to explore a train-from-scratch technique for remote sensing images. This paper proposes an object detection framework trained from scratch, SRS-Net, and describes the design of a densely connected backbone network that provides integrated hidden-layer supervision for the convolution module. Two further improvement principles are then proposed: studying the role of normalization in the network structure, and improving data augmentation methods for remote sensing images. To evaluate the proposed framework, we performed many ablation experiments on the DIOR, DOTA, and AS datasets. The results show that the improved backbone network, the normalization method, and the training-data augmentation strategy each increase the performance of the object detection network trained from scratch. These principles compensate for the lack of pretrained models. Furthermore, we found that SRS-Net achieves performance similar to or slightly better than baseline methods and surpasses most advanced general detectors.

14.
A design method is proposed for a class of nonparametric truncated sequential detectors. These detectors test nonparametric statistics against two parallel linear boundaries with an abrupt truncation at some sample size. The proposed method obtains the asymptotic relative efficiencies (ARE) of these tests with respect to their corresponding fixed-sample-size (FSS) tests in terms of some parameters of the tests. These parameters are then chosen to optimize the ARE. This (asymptotically) optimal set of parameters is used to design the thresholds of the sequential tests. Numerical results are obtained and design examples are presented, using the sum of the signs of the observations as the test statistic. The method can be used for nonparametric sequential detectors and for robust and parametric sequential detectors as well.
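A hedged sketch of a truncated sequential sign test of the kind described: the cumulative sum of observation signs is compared against two parallel linear boundaries, and the test is forced to a decision at the truncation sample. The slope, offset, and truncation point are illustrative free parameters (the quantities a design method would optimize), not values taken from the paper.

```python
import numpy as np

def truncated_sequential_sign_test(x, slope=0.3, offset=5.0, n_max=100):
    """Sequential sign test against two parallel linear boundaries.

    Returns (+1 "signal", -1 "noise", sample index). If neither boundary is
    crossed by sample n_max, the test is truncated and decided by comparing
    the statistic with the midline.
    """
    s = 0.0
    for n, xn in enumerate(x[:n_max], start=1):
        s += np.sign(xn)                      # nonparametric test statistic
        if s >= slope * n + offset:           # upper boundary crossed
            return +1, n
        if s <= slope * n - offset:           # lower boundary crossed
            return -1, n
    return (+1 if s >= slope * n else -1), n  # abrupt truncation
```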

15.
In engineering applications, measurement anomalies and time-varying measurement-noise statistics are the main causes of oscillation or even divergence of the standard Kalman filter. The classical robust Sage-Husa adaptive filter resists isolated measurement anomalies and improves filtering by estimating the measurement-noise statistics online, but it performs poorly when consecutive anomalies occur. To overcome these shortcomings, an improved filtering method is proposed. When a measurement anomaly is detected, the improved algorithm replaces the a posteriori estimated covariance with the larger-norm a priori predicted covariance, which amplifies the estimated measurement-noise variance, reduces the weight of the anomalous measurement, and improves filtering accuracy. A new weight function constructed from the IGG scheme suppresses the influence of anomalies while adjusting the estimated covariance, avoiding the filter divergence caused by repeatedly zeroing the innovation during consecutive anomalies. A dual detection strategy, in which the standard Kalman-filter innovation assists anomaly detection, prevents the rise in missed detections that would otherwise result from detection thresholds shifting as the measurement-noise covariance is adjusted. Simulations show that, compared with the conventional robust adaptive filter, the proposed scheme suppresses the influence of anomalous measurements more effectively.
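A minimal sketch of an IGG-III-style three-segment weight function of the kind such schemes build on: small normalized residuals keep full weight, moderate ones are smoothly down-weighted, and large ones are rejected. The breakpoints k0 and k1 are illustrative assumptions.

```python
import numpy as np

def igg_weight(u, k0=1.5, k1=4.0):
    """IGG-III style weight for a normalized residual u (scalar or array)."""
    u = np.abs(np.asarray(u, dtype=float))
    w = np.ones_like(u)
    mid = (u > k0) & (u <= k1)
    # Down-weight moderate residuals; reject (zero weight) gross ones.
    w[mid] = (k0 / u[mid]) * ((k1 - u[mid]) / (k1 - k0)) ** 2
    w[u > k1] = 0.0
    return w
```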

16.
In miss-distance data processing, outliers degrade the accuracy with which the Doppler frequency can be estimated by nonlinear least squares. This paper exploits the different behavior of outliers and of the Doppler frequency in the wavelet domain: different thresholds are applied to the coefficients of different data segments, so that outliers are removed and the Doppler frequency is recovered. Test results show that the wavelet transform removes the influence of both random measurement errors and outliers on the Doppler frequency, without any adverse effect on the subsequent estimation of miss-distance parameters.

17.
Based on the idea of merging cluster centers, all samples are taken as initial cluster centers, a deviation-based membership degree is used when computing the cluster centers, and the maximum between-cluster distance is used as the criterion for merging centers; in this way the number of clusters and the final cluster centers are determined and the clustering result obtained. Experiments show that the resulting clusters reflect the true structure of the samples, are independent of the initial ordering of the samples, and exhibit a degree of robustness to noise.

18.
Robust Preprocessing for Kalman Filtering of Glint Noise   (Cited in total: 1; self-citations: 0; citations by others: 1)
The non-Gaussian character of glint noise is demonstrated by exploratory data analysis. This non-Gaussian behavior is characterized by outliers in the form of glint spikes. Since glint noise is processed by an angle-tracking Kalman filter, and since the latter is quite nonrobust, strategies are proposed to minimize the effect of these glint spikes. One of the strategies, which involves robust preprocessing of the data, is pursued in detail. Finally, some results of a planar missile simulation are presented that clearly demonstrate the merits of the robust preprocessing strategy.

19.
User-level reliability monitoring in urban personal satellite-navigation   (Cited in total: 1; self-citations: 0; citations by others: 1)
Monitoring the reliability of the obtained user position is of great importance, especially when using the global positioning system (GPS) as a standalone system. In the work presented here, we discuss reliability testing, reliability enhancement, and quality control for global navigation satellite system (GNSS) positioning. Reliability testing usually relies on statistical tests for receiver autonomous integrity monitoring (RAIM) and fault detection and exclusion (FDE). Here it is extended by including an assessment of the redundancy and the geometry of the obtained user position solution. The reliability enhancement discussed here includes rejection of possible outliers and the use of a robust estimator, namely a modified Danish method. We draw special attention to navigation applications in degraded signal environments such as indoors, where multiple errors typically occur simultaneously. The results of applying the discussed methods to high-sensitivity GPS data from an indoor experiment demonstrate that weighted estimation, FDE, and quality control yield a significant improvement in reliability and accuracy. The accuracy actually obtained was 40% better than with equal weights and no FDE; the rms value of horizontal errors was reduced from 15 m to 9 m, and the maximum horizontal errors were greatly reduced.
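A hedged sketch of Danish-method-style iterative reweighting for least-squares positioning: after each weighted solution, observations whose standardized residuals exceed a bound have their weights attenuated exponentially, and the solution is recomputed. The attenuation rule and bound are illustrative; they are not the specific modification used in the paper.

```python
import numpy as np

def danish_reweighted_ls(A, y, sigma, c=2.0, iters=5):
    """Iteratively reweighted least squares with Danish-style down-weighting.

    A: design matrix, y: observations, sigma: a-priori std of each observation.
    Weights of observations with |standardized residual| > c decay exponentially.
    """
    w = 1.0 / sigma**2
    for _ in range(iters):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)   # weighted LS solution
        v = (y - A @ x) / sigma                          # standardized residuals
        bad = np.abs(v) > c
        w = (1.0 / sigma**2) * np.where(bad, np.exp(-(np.abs(v) / c) ** 2), 1.0)
    return x, w
```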

20.
Many existing aircraft engine fault detection methods are highly dependent on performance deviation data provided by the original equipment manufacturer. To improve independent engine fault detection, Aircraft Communications Addressing and Reporting System (ACARS) data can be used. However, owing to the high dimensionality, complex correlations between parameters, and large noise content of ACARS data, it is difficult for existing methods to detect faults effectively with it. To solve this problem, a novel engine fault detection method based on original ACARS data is proposed. First, inspired by computer vision methods, all variables were divided into separate groups according to their correlations. Then, an improved convolutional denoising autoencoder was used to extract the features of each group. Finally, all of the extracted features were fused to form feature vectors, from which fault samples could be identified. Experiments were conducted to validate the effectiveness and efficiency of our method and other competing methods using real ACARS data as the data source. The results reveal the good performance of our method with regard to comprehensive fault detection and robustness. Additionally, the computational and time costs of our method are shown to be relatively low.
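A minimal PyTorch sketch of a 1-D convolutional denoising autoencoder used as a per-group feature extractor: noise is added to the input, the network is trained to reconstruct the clean signal, and the bottleneck activations serve as the group's features. The layer sizes, noise level, and the idea of concatenating per-group features are illustrative assumptions, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class ConvDenoisingAE(nn.Module):
    """1-D convolutional denoising autoencoder for one group of ACARS variables."""

    def __init__(self, n_channels, feat_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(feat_dim, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, n_channels, kernel_size=5, padding=2),
        )

    def forward(self, x, noise_std=0.1):
        corrupted = x + noise_std * torch.randn_like(x)   # denoising corruption
        features = self.encoder(corrupted)
        return self.decoder(features), features

# Training step sketch: reconstruct the clean signal from the corrupted one.
model = ConvDenoisingAE(n_channels=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(4, 8, 64)                                 # (batch, variables, time)
recon, feats = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
opt.step()
# Features from all groups would then be pooled and concatenated for fault detection.
```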

