PG-RNN: a password-guessing model based on recurrent neural networks (CAAI Transactions on Intelligent Systems)

 TENG Nanjun,LU Huaxiang,JIN Min,et al.PG-RNN: a password-guessing model based on recurrent neural networks[J].CAAI Transactions on Intelligent Systems,2018,13(06):889-896.[doi:10.11992/tis.201712006]





PG-RNN: a password-guessing model based on recurrent neural networks
TENG Nanjun1,2, LU Huaxiang1,3,4, JIN Min1, YE Junbin1,2, LI Zhiyuan1,2
1. Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China;
2. University of Chinese Academy of Sciences, Beijing 100089, China;
3. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China;
4. Semiconductor Neural Network Intelligent Perception and Computing Technology Beijing Key Lab, Beijing 100083, China
Keywords: password generation; deep learning; recurrent neural networks; Markov; password guessing
Passwords remain the most popular form of user identity authentication. Because obtaining large-scale real plaintext passwords is very difficult, generating large-scale password sets with password-guessing techniques is a principal method of studying password security: such sets can be used to evaluate the efficiency of password-guessing algorithms and to detect defects in existing user-password protection mechanisms. In this paper, we propose PG-RNN, a password-guessing model based on recurrent neural networks. Unlike traditional password-generation methods built on manually designed rules, a recurrent neural network automatically learns the distribution characteristics and character-level patterns of the password set itself. Consequently, an RNN trained on a leaked set of real user passwords can generate passwords that closely match the real data in the training set, avoiding the limitations of hand-crafted rules for password guessing. Experimental results show that the passwords generated by PG-RNN follow the character-structure and length distributions of the original training data more closely than those generated by a Markov model, and on real-password matching the proposed PG-RNN model outperforms PassGAN, a strong model based on generative adversarial networks, by more than 1.2%.
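The Markov baseline that the abstract compares against can be illustrated with a minimal sketch (toy passwords and function names are hypothetical, not the paper's code): an order-1 character-level Markov chain counts character-to-character transitions in a training password list, then samples new candidate passwords from those transition counts.

```python
# Minimal sketch of an order-1 character-level Markov password generator,
# the kind of baseline PG-RNN is evaluated against. Toy data; illustrative only.
import random
from collections import defaultdict

START, END = "\x02", "\x03"  # sentinel symbols marking password boundaries

def train_markov(passwords):
    """Count character-to-character transitions, including start/end markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        chars = [START] + list(pw) + [END]
        for prev, nxt in zip(chars, chars[1:]):
            counts[prev][nxt] += 1
    return counts

def sample_password(counts, rng, max_len=16):
    """Walk the chain from START, choosing each next character in proportion
    to its observed transition count, until END (or max_len) is reached."""
    out, state = [], START
    while len(out) < max_len:
        nxts = counts[state]
        chars = list(nxts)
        weights = [nxts[c] for c in chars]
        state = rng.choices(chars, weights=weights)[0]
        if state == END:
            break
        out.append(state)
    return "".join(out)

training = ["password", "pass1234", "word2018", "123456"]  # hypothetical toy set
model = train_markov(training)
rng = random.Random(0)
guesses = [sample_password(model, rng) for _ in range(5)]
print(guesses)
```

A fixed-order chain like this only conditions on the previous character; the RNN approach described in the abstract instead conditions each character on the entire prefix generated so far, which is what lets it capture longer-range structure in real password sets.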


[1] CASTELLUCCIA C, DÜRMUTH M, PERITO D. Adaptive password-strength meters from Markov models[C]//Proceedings of the 19th Network and Distributed System Security Symposium. San Diego, USA, 2012.
[2] HASHCAT[EB/OL].[2017-10-12]. https://hashcat.net.
[3] John the Ripper password cracker[EB/OL].[2017-10-15]. http://www.openwall.com/john/.
[4] WEIR M, AGGARWAL S, DE MEDEIROS B, et al. Password cracking using probabilistic context-free grammars[C]//Proceedings of the 30th IEEE Symposium on Security and Privacy. Berkeley, USA, 2009:391-405.
[5] 韩伟力, 袁琅, 李思斯, 等. 一种基于样本的模拟口令集生成算法[J]. 计算机学报, 2017, 40(5):1151-1167 HAN Weili, YUAN Lang, LI Sisi, et al. An efficient algorithm to generate password sets based on samples[J]. Chinese journal of computers, 2017, 40(5):1151-1167
[6] MA J, YANG Weining, LUO Min, et al. A study of probabilistic password models[C]//Proceedings of 2014 IEEE Symposium on Security and Privacy. San Jose, USA, 2014:689-704.
[7] AMICO M D, MICHIARDI P, ROUDIER Y, et al. Password strength:an empirical analysis[C]//Proceedings of 2010 IEEE INFOCOM. San Diego, USA, 2010:1-9.
[8] GRAVES A. Generating sequences with recurrent neural networks[J]. arXiv:1308.0850, 2013.
[9] SUTSKEVER I, MARTENS J, HINTON G E, et al. Generating text with recurrent neural networks[C]//Proceedings of the 28th International Conference on Machine Learning. Bellevue, USA, 2011:1017-1024.
[10] Using neural networks for password cracking[EB/OL].[2017-10-15]. https://0day.work/using-neural-networks-for-password-cracking/.
[11] HITAJ B, GASTI P, ATENIESE G, et al. PassGAN:a deep learning approach for password guessing[J]. arXiv:1709.00440, 2017.
[12] GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada, 2014:2672-2680.
[13] MELICHER W, UR B, SEGRETI S M, et al. Fast, lean, and accurate:modeling password guessability using neural networks[C]//Proceedings of the 25th USENIX Security Symposium. Austin, USA, 2016:175-191.
[14] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural computation, 1997, 9(8):1735-1780.
[15] CHUNG J, GULCEHRE C, CHO K, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[J]. arXiv:1412.3555, 2014.
[16] KOLEN J, KREMER S. Gradient flow in recurrent nets:the difficulty of learning long-term dependencies[M].[S.l.]:Wiley-IEEE Press, 2001.
[17] BENGIO Y, SIMARD P, FRASCONI P. Learning long-term dependencies with gradient descent is difficult[J]. IEEE transactions on neural networks, 1994, 5(2):157-166.
[18] ROCKYOU[EB/OL].[2017-10-13]. http://downloads.skullsecurity.org/passwords/rockyou.txt.bz2.
[19] YAHOO. Hackers expose 453,000 credentials allegedly taken from Yahoo service (Updated)[EB/OL].[2012-07-12]. http://arstechnica.com/security/2012/07/yahoo-service-hacked/.
[20] MYSPACE. Information of 427 million MySpace accounts leaked, selling as a package at the price of 2800 dollars in black market[EB/OL].[2016-06-08]. https://www.wosign.com/english/News/myspace.html.


 ZHANG Yuanyuan,HUO Jing,YANG Wanqi,et al.A deep belief network-based heterogeneous face verification method for the second-generation identity card[J].CAAI Transactions on Intelligent Systems,2015,10(06):193.[doi:10.3969/j.issn.1673-4785.201405060]
 DING Ke,TAN Ying.A review on general purpose computing on GPUs and its applications in computational intelligence[J].CAAI Transactions on Intelligent Systems,2015,10(06):1.[doi:10.3969/j.issn.1673-4785.201403072]
 MA Xiao,ZHANG Fandong,FENG Jufu.Sparse representation via deep learning features based face recognition method[J].CAAI Transactions on Intelligent Systems,2016,11(06):279.[doi:10.11992/tis.201603026]
 LIU Shuaishi,CHENG Xi,GUO Wenyan,et al.Progress report on new research in deep learning[J].CAAI Transactions on Intelligent Systems,2016,11(06):567.[doi:10.11992/tis.201511028]
 MA Shilong,WUNIRI Qiqige,LI Xiaoping.Deep learning with big data: state of the art and development[J].CAAI Transactions on Intelligent Systems,2016,11(06):728.[doi:10.11992/tis.201611021]
 WANG Yajie,QIU Hongkun,WU Yanyan,et al.Research and development of computer games[J].CAAI Transactions on Intelligent Systems,2016,11(06):788.[doi:10.11992/tis.201609006]
 HUANG Xinhan.A3I: the star of science and technology for the 21st century[J].CAAI Transactions on Intelligent Systems,2016,11(06):835.[doi:10.11992/tis.201605022]
 SONG Wanru,ZHAO Qingqing,CHEN Changhong,et al.Survey on pedestrian re-identification research[J].CAAI Transactions on Intelligent Systems,2017,12(06):770.[doi:10.11992/tis.201706084]
 YANG Mengduo,LUAN Yonghong,LIU Wenjun,et al.Feature transfer algorithm based on an auto-encoder[J].CAAI Transactions on Intelligent Systems,2017,12(06):894.[doi:10.11992/tis.201706037]
 WANG Kejun,ZHAO Yandong,XING Xianglei.Deep learning in driverless vehicles[J].CAAI Transactions on Intelligent Systems,2018,13(06):55.[doi:10.11992/tis.201609029]


Last Update: 2018-12-25