CAO Ronghui, TANG Zhuo, ZUO Zhiwei, et al. Key technologies and applications of distributed parallel computing for machine learning[J]. CAAI Transactions on Intelligent Systems, 2021, 16(5): 919–930. [doi:10.11992/tis.202108010]

Key technologies and applications of distributed parallel computing for machine learning

References:
[1] LU Chienping. Native supercomputing and the revival of Moore’s law[J]. APSIPA transactions on signal and information processing, 2017, 6: 1–17.
[2] DOHERR D. Supercomputing of tomorrow: artificial intelligence in a smarter world[C]//International New York Conference on Social Sciences. New York, USA, 2017: 1–4.
[3] SHUKUR H, ZEEBAREE S R M, AHMED A J, et al. A state of art survey for concurrent computation and clustering of parallel computing for distributed systems[J]. Journal of applied science and technology trends, 2020, 1(4): 148–154.
[4] CICIRELLI F, GIORDANO A, MASTROIANNI C. Analysis of global and local synchronization in parallel computing[J]. IEEE transactions on parallel and distributed systems, 2020, 32(5): 988–1000.
[5] LU Yuqian, XU Xun, WANG Lihui. Smart manufacturing process and system automation–a critical review of the standards and envisioned scenarios[J]. Journal of manufacturing systems, 2020, 56: 312–325.
[6] LIU Qiang, LENG Jiewu, YAN Douxi, et al. Digital twin-based designing of the configuration, motion, control, and optimization model of a flow-type smart manufacturing system[J]. Journal of manufacturing systems, 2021, 58: 52–64.
[7] KIRIMTAT A, KREJCAR O, KERTESZ A, et al. Future trends and current state of smart city concepts:a survey[J]. IEEE access, 2020, 8: 86448–86467.
[8] LI Kenli, LIU Chubo, LI Keqin, et al. A framework of price bidding configurations for resource usage in cloud computing[J]. IEEE transactions on parallel and distributed systems, 2016, 27(8): 2168–2181.
[9] ZHONG Jianlong, HE Bingsheng. Medusa: simplified graph processing on GPUs[J]. IEEE transactions on parallel and distributed systems, 2014, 25(6): 1543–1552.
[10] WU Ren, ZHANG Bin, HSU M. GPU-accelerated large scale analytics[EB/OL]. (2009-03-06). http://www.hpl.hp.com/techreports/2009/HPL-2009-38.pdf.
[11] PONCE S P. Towards algorithm transformation for temporal data mining on GPU[D]. Virginia, USA: Virginia Polytechnic Institute and State University, 2009: 805–816.
[12] RAHUL K, BANYAL R K, GOSWAMI P. Analysis and processing aspects of data in big data applications[J]. Journal of discrete mathematical sciences and cryptography, 2020, 23(2): 385–393.
[13] WOLFF J G. The potential of the SP system in machine learning and data analysis for image processing[J]. Big data and cognitive computing, 2021, 5(1): 1–15.
[14] ZHANG Yongpeng, MUELLER F, CUI Xiaohui, et al. GPU-accelerated text mining[C]//Workshop on Exploiting Parallelism Using GPUs and Other Hardware-assisted Methods. Seattle, USA, 2009: 1–6.
[15] SCHATZ M C, TRAPNELL C. Fast exact string matching on the GPU[J]. Center for bioinformatics and computational biology, 2013: 1–6.
[16] SCHATZ M C, TRAPNELL C, DELCHER A L, et al. High-throughput sequence alignment using Graphics Processing Units[J]. BMC bioinformatics, 2007, 8(1): 1–10.
[17] HE Bingsheng, FANG Wenbin, LUO Qiong, et al. Mars: a MapReduce framework on graphics processors[C]//Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques. Toronto, Canada, 2008: 260–269.
[18] MOOLEY A, MURTHY K, SINGH H. DisMaRC: a distributed MapReduce framework on CUDA[EB/OL]. (2019). http://www.cs.utexas.edu/~karthikm/dismarc.pdf.
[19] KAGERMANN H, WAHLSTER W, HELBIG J. Recommendations for implementing the strategic initiative INDUSTRIE 4.0: securing the future of German manufacturing industry[J]. Final report of the industrie, 2013, 4: 213–220.
[20] SUN Jiaguang. Industrial big data[J]. Software and integrated circuit, 2016(8): 22–23.
[21] WANG D, LIU Jun, SRINIVASAN R. Data-driven soft sensor approach for quality prediction in a refining process[J]. IEEE transactions on industrial informatics, 2010, 6(1): 11–17.
[22] WAN Jiafu, TANG Shenglong, LI Di, et al. A manufacturing big data solution for active preventive maintenance[J]. IEEE transactions on industrial informatics, 2017, 13(4): 2039–2047.
[23] HONG Sumin, CHOI W, JEONG W K. GPU in-memory processing using spark for iterative computation[C]//2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing. Madrid, Spain, 2017: 31–41.
[24] JI Feng, MA Xiaosong. Using shared memory to accelerate MapReduce on graphics processing units[C]//International Parallel & Distributed Processing Symposium. Anchorage, USA, 2011: 805–816.
[25] FANG Wenbin, HE Bingsheng, LUO Qiong, et al. Mars: accelerating MapReduce with graphics processors[J]. IEEE transactions on parallel and distributed systems, 2011, 22(4): 608–620.
[26] STUART J A, OWENS J D. Multi-GPU MapReduce on GPU clusters[C]//2011 IEEE International Parallel & Distributed Processing Symposium. Anchorage, USA, 2011: 1068–1079.
[27] ABBASI A, KHUNJUSH F, AZIMI R. A preliminary study of incorporating GPUs in the Hadoop framework[C]//The 16th CSI International Symposium on Computer Architecture and Digital Systems. Shiraz, Iran, 2012: 178–185.
[28] GROSSMAN M, BRETERNITZ M, SARKAR V. HadoopCL: MapReduce on distributed heterogeneous platforms through seamless integration of Hadoop and OpenCL[C]//IEEE International Symposium on Parallel and Distributed Processing, Workshops and PhD Forum. Cambridge, USA, 2013: 1918–1927.
[29] ZHU Jie, LI Juanjuan, HARDESTY E, et al. GPU-in-Hadoop: enabling MapReduce across distributed heterogeneous platforms[C]//2014 IEEE/ACIS 13th International Conference on Computer and Information Science (ICIS). Taiyuan, China, 2014: 321–326.
[30] LI Peilong, LUO Yan, ZHANG Ning, et al. HeteroSpark: a heterogeneous CPU/GPU Spark platform for machine learning algorithms[C]//IEEE International Conference on Networking, Architecture and Storage (NAS). Boston, USA, 2015: 347–348.
[31] MANZI D, TOMPKINS D. Exploring GPU acceleration of Apache Spark[C]//IEEE International Conference on Cloud Engineering (IC2E). Berlin, Germany, 2016: 222–223.
[32] CHOI W, HONG Sumin, JEONG W K. Vispark: GPU-accelerated distributed visual computing using spark[J]. SIAM journal on scientific computing, 2016, 38(5): S700–S719.
[33] ELTEIR M, LIN Heshan, FENG Wuchun, et al. StreamMR: an optimized MapReduce framework for AMD GPUs[C]//17th International Conference on Parallel and Distributed Systems. Tainan, China, 2011: 364–371.
[34] EWEN S, TZOUMAS K, KAUFMANN M, et al. Spinning fast iterative data flows[J]. Proceedings of the VLDB endowment, 2012, 5(11): 1268–1279.
[35] KIM K S, CHOI Y S. Incremental iteration method for fast PageRank computation[C]//Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication. Bali, Indonesia, 2015: 80.
[36] LOW Y, BICKSON D, GONZALEZ J, et al. Distributed GraphLab: a framework for machine learning and data mining in the cloud[J]. Proceedings of the VLDB endowment, 2012, 5(8): 716–727.
[37] XU Yu, KOSTAMAA P. Efficient outer join data skew handling in parallel DBMS[J]. Proceedings of the VLDB endowment, 2009, 2(2): 1390–1396.
[38] TAN Jian, MENG Shicong, MENG Xiaoqiao, et al. Improving reducetask data locality for sequential MapReduce jobs[C]//Proceedings IEEE INFOCOM. Turin, Italy, 2013: 1627–1635.
[39] KWON Y C, BALAZINSKA M, HOWE B, et al. SkewTune: mitigating skew in mapreduce applications[C]//Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. Scottsdale, USA, 2012: 25–36.
[40] CHEN Qi, LIU Cheng, XIAO Zhen. Improving MapReduce performance using smart speculative execution strategy[J]. IEEE transactions on computers, 2014, 63(4): 954–967.
[41] CHEN Qi, YAO Jinyu, XIAO Zhen. LIBRA: lightweight data skew mitigation in MapReduce[J]. IEEE transactions on parallel and distributed systems, 2015, 26(9): 2520–2533.
[42] LE Yanfang, LIU Jiangchuan, ERGüN F, et al. Online load balancing for MapReduce with skewed data input[C]//IEEE Conference on Computer Communications. Toronto, Canada, 2014: 2004–2012.
[43] RAMAKRISHNAN S R, SWART G, URMANOV A. Balancing reducer skew in MapReduce workloads using progressive sampling[C]//Proceedings of the Third ACM Symposium on Cloud Computing. San Jose, USA, 2012: 621–633.
[44] GIRSHICK R. Fast R-CNN[C]//2015 IEEE International Conference on Computer Vision. Santiago, Chile, 2015: 1440–1448.
[45] NARODYTSKA N, KASIVISWANATHAN S. Simple black-box adversarial attacks on deep neural networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA, 2017: 1310–1318.
[46] PAPERNOT N, MCDANIEL P, WU Xi, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]//2016 IEEE Symposium on Security and Privacy. San Jose, USA, 2016: 582–597.

Copyright © CAAI Transactions on Intelligent Systems