Binary convolutional neural network: Review
Fund projects: Joint Fund for Enterprise Innovation and Development of the National Natural Science Foundation of China (U20B2042); National Natural Science Foundation of China (62076019); National Key R&D Program of China, "New Generation Artificial Intelligence (2030)" Special Project (2020AAA0108200)

Received: 2020-07-07
Revised: 2020-08-03

Binary convolutional neural network: Review
DING Wenrui, LIU Chunlei, LI Yue, ZHANG Baochang. Binary convolutional neural network: Review[J]. Acta Aeronautica et Astronautica Sinica, 2021, 42(6): 24504-024504.
Authors:DING Wenrui  LIU Chunlei  LI Yue  ZHANG Baochang
Institution:1. Unmanned System Research Institute, Beihang University, Beijing 100083, China;2. School of Electronic and Information Engineering, Beihang University, Beijing 100083, China;3. School of Automation Science and Electrical Engineering, Beihang University, Beijing 100083, China
Abstract: In recent years, Binary Convolutional Neural Networks (BNNs) have attracted much attention owing to their low storage cost and high computational efficiency. However, the mismatch between the binary quantization of the forward pass and the real-valued gradients of the backward pass results in a large performance gap between a BNN and a full-precision convolutional neural network of the same structure, hindering the deployment of BNNs on resource-constrained platforms. Researchers have proposed a series of network designs and training methods to reduce the performance loss incurred during binarization, thereby promoting the application of BNNs to embedded portable devices. This paper presents a comprehensive review of BNNs, mainly from the perspectives of improving network representational capability and fully exploiting network training potential. Specifically, improving representational capability covers the design of binary quantization methods and network structures, while exploiting training potential involves loss function design and training strategies. Finally, we summarize and analyze the performance of BNNs on different tasks and hardware platforms, and discuss the challenges of future research.
Keywords:binary convolutional neural networks  full-precision convolutional neural networks  binarization  quantization  model compression  lightweight  deep learning  
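The forward/backward mismatch the abstract refers to arises because the sign function used to binarize weights and activations has a zero gradient almost everywhere, so BNN training commonly substitutes a straight-through estimator (STE) in the backward pass. A minimal NumPy sketch of this idea (illustrative names only, not code from any specific method surveyed in the paper):

```python
import numpy as np

def binarize_forward(x):
    """Forward pass: quantize real values to {-1, +1} with the sign function."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_backward(x, grad_out):
    """Backward pass via the 'hard tanh' straight-through estimator:
    pass the incoming gradient through unchanged, but zero it where
    |x| > 1, since sign() itself is non-differentiable."""
    return grad_out * (np.abs(x) <= 1.0)

x = np.array([-1.7, -0.3, 0.0, 0.4, 2.1])
g = np.ones_like(x)          # incoming gradient from the loss
print(binarize_forward(x))   # [-1. -1.  1.  1.  1.]
print(ste_backward(x, g))    # [0. 1. 1. 1. 0.]
```

The clipping window (|x| <= 1) is one common STE variant; much of the quantization-method design work the review surveys amounts to replacing this crude gradient approximation with closer ones.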