Multimodal deformable registration based on unsupervised learning

Citation: MA Tengyu, LI Zi, LIU Risheng, FAN Xin, LUO Zhongxuan. Multimodal deformable registration based on unsupervised learning[J]. Journal of Beijing University of Aeronautics and Astronautics, 2021, 47(3): 658-664. DOI: 10.13700/j.bh.1001-5965.2020.0449
Authors: MA Tengyu, LI Zi, LIU Risheng, FAN Xin, LUO Zhongxuan
Affiliation: International School of Information Science & Engineering, Dalian University of Technology, Dalian 116000, China
Funding: National Natural Science Foundation of China; Liaoning Revitalization Talents Program
Received: 2020-08-24

Abstract: Multimodal deformable registration solves for a dense spatial transformation that aligns images of two different modalities, and it is a key problem in many medical image analysis applications. Traditional methods register each image pair by solving an optimization problem and usually achieve excellent accuracy, but their computational cost is high and their running time is long. Deep learning methods instead train a network to perform the registration, which greatly reduces running time, and such learning-based methods work well for single-modality registration. In the multimodal setting, however, the intensity distributions of the different modalities are unknown and complex, and most existing methods rely heavily on labeled data, so they cannot fully solve this problem. This paper proposes a deep multimodal deformable registration framework based on unsupervised learning. The framework consists of feature learning based on a matching measure and deformation field learning based on maximum a posteriori estimation, and it is trained without labels by means of a spatial transformation function and a differentiable mutual information loss. On 3D registration tasks across MRI T1, MRI T2 and CT, the proposed method is compared with state-of-the-art multimodal registration methods, and its performance is further demonstrated on recent COVID-19 CT data. Extensive results show that the proposed method is competitive in registration accuracy while greatly reducing computation time.

Keywords: deep learning; unsupervised; medical image registration; multimodal; computer vision
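
The unsupervised training described in the abstract rests on two differentiable components: a spatial transformation (warping) function and a mutual information loss. Below is a minimal, hypothetical PyTorch sketch of these two components, not the authors' released implementation; the displacement-field network is left as a placeholder, and the tensor shapes, histogram bin count and Parzen-window width are illustrative assumptions. A smoothness penalty on the flow, common in deformable registration, is omitted for brevity.

import torch
import torch.nn.functional as F

def warp(moving, flow):
    # Differentiable 3D spatial transformation: resample `moving` at the
    # identity grid shifted by `flow` (displacements in voxels).
    # moving: (N, 1, D, H, W); flow: (N, 3, D, H, W) with channels (dx, dy, dz).
    N, _, D, H, W = moving.shape
    device, dtype = moving.device, moving.dtype
    zz, yy, xx = torch.meshgrid(
        torch.arange(D, device=device, dtype=dtype),
        torch.arange(H, device=device, dtype=dtype),
        torch.arange(W, device=device, dtype=dtype),
        indexing="ij")
    x = 2.0 * (xx + flow[:, 0]) / (W - 1) - 1.0   # normalise to [-1, 1]
    y = 2.0 * (yy + flow[:, 1]) / (H - 1) - 1.0
    z = 2.0 * (zz + flow[:, 2]) / (D - 1) - 1.0
    grid = torch.stack((x, y, z), dim=-1)          # (N, D, H, W, 3)
    return F.grid_sample(moving, grid, align_corners=True)

def neg_mutual_information(fixed, warped, bins=32, sigma=0.05, eps=1e-8):
    # Differentiable negative mutual information using a Gaussian Parzen
    # window; intensities are assumed to be pre-scaled to [0, 1].
    N = fixed.shape[0]
    f = fixed.reshape(N, -1, 1)                            # (N, V, 1)
    m = warped.reshape(N, -1, 1)
    centers = torch.linspace(0.0, 1.0, bins, device=fixed.device).view(1, 1, bins)
    wf = torch.exp(-0.5 * ((f - centers) / sigma) ** 2)    # soft histogram weights
    wm = torch.exp(-0.5 * ((m - centers) / sigma) ** 2)
    wf = wf / (wf.sum(dim=-1, keepdim=True) + eps)
    wm = wm / (wm.sum(dim=-1, keepdim=True) + eps)
    p_joint = torch.bmm(wf.transpose(1, 2), wm) / f.shape[1]   # (N, bins, bins)
    p_f = p_joint.sum(dim=2, keepdim=True)
    p_m = p_joint.sum(dim=1, keepdim=True)
    mi = (p_joint * (torch.log(p_joint + eps) - torch.log(p_f * p_m + eps))).sum(dim=(1, 2))
    return -mi.mean()

# Usage sketch: `reg_net` stands for any 3D CNN (e.g., a U-Net) that maps the
# concatenated fixed/moving pair to a displacement field; it is a placeholder.
#   flow = reg_net(torch.cat([fixed, moving], dim=1))   # (N, 3, D, H, W)
#   loss = neg_mutual_information(fixed, warp(moving, flow))
#   loss.backward()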