Hardware neural networks based on memristive synaptic devices are an important direction in neuromorphic computing and a strong candidate technology for moving beyond the traditional von Neumann architecture in the post-Moore era. This paper reviews recent progress on memristive hardware neural networks in China and abroad, detailing, from the two perspectives of device development and network implementation, the role that the memristor, an emerging information device, plays in neuromorphic computing, and discusses the key open problems and technical challenges that remain. Memristors offer a feasible route past these technical barriers, toward compute-in-memory architectures and performance scaling beyond Moore's law.
Introduction
Against the backdrop of today's explosive growth in data volume, conventional computing architectures have run into the von Neumann bottleneck, transistor scaling is approaching its physical limits, and Moore's law has become difficult to sustain; together these have become obstacles that are hard to overcome in further improving the performance of computing systems [1-4]. The concept of neuromorphic computing offers a promising route to a breakthrough: the human brain processes information with a complexity that even the most advanced supercomputers cannot match, and the neuromorphic chips reported to date achieve markedly higher computational capability at far smaller volume and energy cost. The development of neuromorphic architectures is therefore receiving intense attention in both the software and hardware communities and is expected to supersede current computing system architectures.
1. Neuromorphic computing and memristive devices
1.1 Neuromorphic computing





2. Current status of technology development
3. Neural networks based on memristive devices
The development of artificial neural networks dates back to 1943, when McCulloch and Pitts [29] proposed the M-P neuron, the first model to describe the brain's information processing mathematically. In 1949, Hebb [30] hypothesized that synaptic connections are modifiable, which spurred research on learning algorithms for neural networks. Then in 1957, Rosenblatt [31] proposed the perceptron, regarded as the first relatively complete artificial neural network and the first to bring neural network research into practical engineering. From that point on, research on artificial neural networks entered its first boom.
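To make these early models concrete: the M-P neuron computes a hard-thresholded weighted sum, y = f(Σ w_i·x_i − θ), with fixed weights, whereas the perceptron additionally learns its weights from labeled examples. The following Python sketch is purely illustrative and is not taken from the cited works; the learning rate, epoch count, and AND-gate training data are hypothetical choices.

import numpy as np

def mp_neuron(x, w, theta):
    # M-P neuron: weighted sum followed by a hard threshold.
    return 1 if np.dot(w, x) >= theta else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    # Rosenblatt's update rule: w += lr * (target - output) * x.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            out = 1 if np.dot(w, xi) + b >= 0 else 0
            w += lr * (ti - out) * xi
            b += lr * (ti - out)
    return w, b

# Hypothetical example: the linearly separable AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)  # one separating hyperplane among many possible ones

Since the update only corrects misclassified samples, training converges here because AND is linearly separable. This same weighted-sum structure is what memristor crossbar arrays later implement directly in the analog domain.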




Fig. 6 Face recognition task implemented in a 1T1R array
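Only the caption of the original figure is reproduced here. Conceptually, a 1T1R array accelerates such a classifier by performing its vector-matrix multiplications in the analog domain: input voltages drive the rows, each cell contributes a current I = G·V by Ohm's law, and the column wires sum those currents by Kirchhoff's current law. Below is a minimal idealized sketch of that dataflow, assuming a differential two-cells-per-weight mapping and ignoring non-idealities (wire resistance, device variation, nonlinearity); the conductance window and array sizes are hypothetical.

import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # hypothetical RRAM conductance window, in siemens

def weights_to_conductances(W):
    # Differential mapping: a signed weight W is proportional to (G+ - G-).
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + scale * np.clip(W, 0, None)
    G_neg = G_MIN + scale * np.clip(-W, 0, None)
    return G_pos, G_neg

def crossbar_mvm(V, G_pos, G_neg):
    # Ideal crossbar: each column current is the sum of G*V along the column.
    return V @ G_pos - V @ G_neg

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))    # hypothetical trained weight matrix
V = rng.uniform(0, 0.2, size=8)    # one input pattern, encoded as read voltages
I = crossbar_mvm(V, *weights_to_conductances(W))
print(I)  # column currents, proportional to V @ W, computed in memory

The access transistor in each 1T1R cell does not change this ideal picture; it gates which cells are selected and suppresses sneak-path currents during programming and read.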





Fig. 9 Convolutional neural network implemented with HfO2 memristors
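Again, only the caption survives from the original figure. In demonstrations of this kind, each convolution kernel is typically flattened into one column of the memristor array, so that every sliding-window position of the input becomes a single analog vector-matrix multiplication. A simplified software sketch of that mapping follows (ideal devices assumed; the Sobel kernels and image size are hypothetical stand-ins, not the paper's network):

import numpy as np

def conv2d_via_crossbar(image, kernels):
    # Each kernel is flattened into one column of G (as if stored as
    # conductances); each image patch becomes one input voltage vector.
    n_k, kh, kw = kernels.shape
    H, W = image.shape
    G = kernels.reshape(n_k, -1).T                       # shape (kh*kw, n_k)
    out = np.empty((H - kh + 1, W - kw + 1, n_k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw].reshape(-1)
            out[i, j] = patch @ G                        # one analog MVM
    return out

# Hypothetical 3x3 kernels (Sobel x and y) applied to a random 8x8 "image".
kernels = np.array([[[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],
                    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]])
image = np.random.default_rng(1).random((8, 8))
print(conv2d_via_crossbar(image, kernels).shape)  # (6, 6, 2)

Because the kernel weights stay fixed in the array while only the input patches change, no weight data move during inference, which is the compute-in-memory advantage such HfO2 demonstrations exploit.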









4. Summary
References
[1] MOORE G. Cramming more components onto integrated circuits[J]. Electronics, 1965, 38(8): 114-117.
[2] HOLT W M. 1.1 Moore's law: a path going forward[C]// 2016 IEEE International Solid-State Circuits Conference (ISSCC). IEEE, 2016: 8-13.
[3] JONES V F R. A polynomial invariant for knots via von Neumann algebras[M]. New Developments in the Theory of Knots, 1985.
[4] BACKUS J W. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs[J]. Communications of the ACM, 1978, 21(8): 613-641.
[5] CHUA L. Memristor-the missing circuit element[J]. IEEE Transactions on Circuit Theory, 1971, 18(5): 507-519.
[6] STRUKOV D B, SNIDER G S, STEWART D R, et al. The missing memristor found[J]. Nature, 2008, 453(7191): 80-83.
[7] JO S H, CHANG T, EBONG I, et al. Nanoscale memristor device as synapse in neuromorphic systems[J]. Nano Letters, 2010, 10(4): 1297-1301.
[8] CHEN J, LIN C Y, LI Y, et al. LiSiOx-based analog memristive synapse for neuromorphic computing[J]. IEEE Electron Device Letters, 2019, 40(4): 542-545.
[9] AMBROGIO S, NARAYANAN P, TSAI H, et al. Equivalent-accuracy accelerated neural-network training using analogue memory[J]. Nature, 2018, 558(7708): 60-67.
[10] JERRY M, CHEN P Y, ZHANG J, et al. Ferroelectric FET analog synapse for acceleration of deep neural network training[C]// 2017 IEEE International Electron Devices Meeting (IEDM). IEEE, 2017: 6.2.1-6.2.4.
[11] WU M H, HONG M C, CHANG C C, et al. Extremely compact integrate-and-fire STT-MRAM neuron: a pathway toward all-spin artificial deep neural network[C]// 2019 Symposium on VLSI Technology. IEEE, 2019: T34-T35.
[12] LI Y, ZHONG Y, ZHANG J, et al. Activity-dependent synaptic plasticity of a chalcogenide electronic synapse for neuromorphic systems[J]. Scientific Reports, 2014, 4: 4906.
[13] CHEN P Y, PENG X, YU S. NeuroSim: a circuit-level macro model for benchmarking neuro-inspired architectures in online learning[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018, 37(12): 3067-3080.
[14] DEGUCHI Y, MAEDA K, SUZUKI S, et al. Error-reduction controller techniques of TaOx-based ReRAM for deep neural networks to extend data-retention lifetime by over 1700x[C]// 2018 IEEE International Memory Workshop (IMW). IEEE, 2018: 1-4.
[15] DIEHL P U, COOK M. Unsupervised learning of digit recognition using spike-timing-dependent plasticity[J]. Frontiers in Computational Neuroscience, 2015, 9: 99.
[16] GOKMEN T, ONEN M, HAENSCH W. Training deep convolutional neural networks with resistive cross-point devices[J]. Frontiers in Neuroscience, 2017, 11: 538.
[17] GOKMEN T, VLASOV Y. Acceleration of deep neural network training with resistive cross-point devices: design considerations[J]. Frontiers in Neuroscience, 2016, 10: 333.
[18] ESSER S K, MEROLLA P A, ARTHUR J V, et al. Convolutional networks for fast, energy-efficient neuromorphic computing[J]. Proceedings of the National Academy of Sciences, 2016, 113(41): 11441-11446.
[19] CHEN L, LI J, CHEN Y, et al. Accelerator-friendly neural-network training: learning variations and defects in RRAM crossbar[C]// 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2017: 19-24.
[20] CHANG C C, LIU J C, SHEN Y L, et al. Challenges and opportunities toward online training acceleration using RRAM-based hardware neural network[C]// 2017 IEEE International Electron Devices Meeting (IEDM). IEEE, 2017: 11.6.1-11.6.4.
[21] TRUONG S N, MIN K S. New memristor-based crossbar array architecture with 50-% area reduction and 48-% power saving for matrix-vector multiplication of analog neuromorphic computing[J]. Journal of Semiconductor Technology and Science, 2014, 14(3): 356-363.
[22] CHOI S, SHIN J H, LEE J, et al. Experimental demonstration of feature extraction and dimensionality reduction using memristor networks[J]. Nano Letters, 2017, 17(5): 3113-3118.
[23] HU M, GRAVES C E, LI C, et al. Memristor-based analog computation and neural network classification with a dot product engine[J]. Advanced Materials, 2018, 30(9): 1705914.
[24] NURSE E, MASHFORD B S, YEPES A J, et al. Decoding EEG and LFP signals using deep learning: heading TrueNorth[C]// Proceedings of the ACM International Conference on Computing Frontiers. ACM, 2016: 259-266.
[25] LUO T, LIU S, LI L, et al. DaDianNao: a neural network supercomputer[J]. IEEE Transactions on Computers, 2016, 66(1): 73-88.
[26] PEI J, DENG L, SONG S, et al. Towards artificial general intelligence with hybrid Tianjic chip architecture[J]. Nature, 2019, 572: 106-111.
[27] CAI F, CORRELL J, LEE S H, et al. A fully integrated reprogrammable memristor-CMOS system for efficient multiply-accumulate operations[J]. Nature Electronics, 2019, 2(7): 290-299.
[28] CHEN W H, DOU C, LI K X, et al. CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors[J]. Nature Electronics, 2019, 2: 1-9.
[29] MCCULLOCH W S, PITTS W. A logical calculus of the ideas immanent in nervous activity[J]. The Bulletin of Mathematical Biophysics, 1943, 5(4): 115-133.
[30] HEBB D O. The organization of behavior: a neuropsychological theory[M]. New York: Wiley, 1949.
[31] ROSENBLATT F. The perceptron: a probabilistic model for information storage and organization in the brain[J]. Psychological Review, 1958, 65(6): 386-408.
[32] WANG Z R, JOSHI S, SAVEL'EV S, et al. Fully memristive neural networks for pattern classification with unsupervised learning[J]. Nature Electronics, 2018, 1(2): 137-145.
[33] CAI F, CORRELL J, LEE S H, et al. A fully integrated reprogrammable memristor-CMOS system for efficient multiply-accumulate operations[J]. Nature Electronics, 2019, 2(7): 290-299.
[34] IELMINI D, AMBROGIO S, MILO V, et al. Neuromorphic computing with hybrid memristive/CMOS synapses for real-time learning[C]// 2016 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2016: 1386-1389.
[35] CHEN P Y, YU S. Partition SRAM and RRAM based synaptic arrays for neuro-inspired computing[C]// 2016 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2016: 2310-2313.
[36] KIM S G, HAN J S, KIM H, et al. Recent advances in memristive materials for artificial synapses[J]. Advanced Materials Technologies, 2018, 3(12): 1800457.
[37] TSAI H, AMBROGIO S, NARAYANAN P, et al. Recent progress in analog memory-based accelerators for deep learning[J]. Journal of Physics D: Applied Physics, 2018, 51(28): 283001.
[38] SUNG C, HWANG H, YOO I K. Perspective: a review on memristive hardware for neuromorphic computation[J]. Journal of Applied Physics, 2018, 124(15): 151903.
[39] CRISTIANO G, GIORDANO M, AMBROGIO S, et al. Perspective on training fully connected networks with resistive memories: device requirements for multiple conductances of varying significance[J]. Journal of Applied Physics, 2018, 124(15): 151901.
[40] LI C, BELKIN D, LI Y, et al. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks[J]. Nature Communications, 2018, 9(1): 2385.
[41] YAO P, WU H, GAO B, et al. Face classification using electronic synapses[J]. Nature Communications, 2017, 8: 15199.
[42] BAYAT F M, PREZIOSO M, CHAKRABARTI B, et al. Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits[J]. Nature Communications, 2018, 9(1): 2331.
[43] KWAK M, PARK J, WOO J, et al. Implementation of convolutional kernel function using 3-D TiOx resistive switching devices for image processing[J]. IEEE Transactions on Electron Devices, 2018, 65(10): 4716-4718.
[44] YAKOPCIC C, ALOM M Z, TAHA T M. Memristor crossbar deep network implementation based on a convolutional neural network[C]// 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016: 963-970.
[45] GARBIN D, VIANELLO E, BICHLER O, et al. HfO2-based OxRAM devices as synapses for convolutional neural networks[J]. IEEE Transactions on Electron Devices, 2015, 62(8): 2494-2501.
[46] GOKMEN T, ONEN M, HAENSCH W, et al. Training deep convolutional neural networks with resistive cross-point devices[J]. Frontiers in Neuroscience, 2017, 11: 538.
[47] ZHOU Z, HUANG P, XIANG Y C, et al. A new hardware implementation approach of BNNs based on nonlinear 2T2R synaptic cell[C]// 2018 IEEE International Electron Devices Meeting (IEDM). IEEE, 2018: 20.7.1-20.7.4.
[48] SUN X, YIN S, PENG X, et al. XNOR-RRAM: a scalable and parallel resistive synaptic architecture for binary neural networks[C]// 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2018: 1423-1428.
[49] LI C, WANG Z, RAO M, et al. Long short-term memory networks in memristor crossbar arrays[J]. Nature Machine Intelligence, 2019, 1(1): 49-57.
Citation: CHEN Jia, PAN Wenqian, QIN Yifan, et al. Research of neural network based on memristor[J]. Micro/nano Electronics and Intelligent Manufacturing, 2019, 1(4): 24-38.

