

Deng Z., Xu C., Ci Q., Faraboschi P., Reduced-precision memory value approximation for deep learning. [Online]. Available: http://www.
Liu J., Jaiyen B., Veras R., Mutlu O., 2012, RAIDR: Retention-aware intelligent DRAM refresh, in Proc. 39th Annu. Int. Symp. Comput. Archit., pp. 1-12.
Bhati I., Chishti Z., Lu S., Jacob B., 2015, Flexible auto-refresh: Enabling scalable and energy-efficient DRAM refresh reductions, in Proc. 42nd Annu. Int. Symp. Comput. Archit., pp. 235-246.
Raha A., Sutar S., Jayakumar H., Raghunathan V., July 2017, Quality configurable approximate DRAM, IEEE Trans. Comput., Vol. 66, No. 7, pp. 1172-1187.
Liu S., Pattabiraman K., Moscibroda T., Zorn B. G., 2011, Flikker: Saving DRAM refresh-power through critical data partitioning, in Proc. 16th Int. Conf. Archit. Support Program. Languages Operating Syst., pp. 213-224.
Lucas J., Alvarez-Mesa M., Andersch M., Juurlink B., 2014, Sparkk: Quality-scalable approximate storage in DRAM, in Proc. Memory Forum.
Nguyen D. T., Kim H., Lee H.-J., Chang I.-J., 2018, An approximate memory architecture for a reduction of refresh power consumption in deep learning applications, in Proc. IEEE Int. Symp. Circuits Syst., pp. 1-5.
Liu J., Jaiyen B., Kim Y., Wilkerson C., Mutlu O., 2013, An experimental study of data retention behavior in modern DRAM devices: Implications for retention time profiling mechanisms, in Proc. 40th Int. Symp. Comput. Archit., pp. 60-71.
1Gb Mobile LPDDR, Micron Technology. [Online].
Courbariaux M., David J., Bengio Y., Training deep neural networks with low precision multiplications. [Online]. Available: arXiv:1412.7024.
Gupta S., Agrawal A., Gopalakrishnan K., Narayanan P., 2015, Deep learning with limited numerical precision, in Proc. Int. Conf. Mach. Learn., pp. 1737-1746.
Whitehead N., Fit-Florea A., Precision & performance: Floating point and IEEE 754 compliance for NVIDIA GPUs. [Online].
Sampson A., 2011, EnerJ: Approximate data types for safe and general low-power computation, in Proc. 32nd ACM SIGPLAN Conf. Program. Language Des. Implementation, pp. 164-174.
Luk C.-K., 2005, Pin: Building customized program analysis tools with dynamic instrumentation, in Proc. ACM SIGPLAN Conf. Program. Language Des. Implementation, pp. 190-200.
Simonyan K., Zisserman A., Very deep convolutional networks for large-scale image recognition. [Online]. Available: arXiv:1409.1556.
Szegedy C., 2015, Going deeper with convolutions, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1-9.
Bick K., Nguyen D. T., Lee H.-J., Kim H., 2018, Fast and accurate memory simulation by integrating DRAMSim2 into McSimA+, MDPI Electronics, Vol. 7, No. 8, p. 152.
Nguyen D. T., Nguyen H. H., Kim H., Lee H.-J., May 2020, An approximate memory architecture for energy saving in deep learning applications, IEEE Trans. Circuits Syst. I Reg. Papers, Vol. 67, No. 5, pp. 1588-1601.
Calculating Memory System Power for DDR3, 2007, Micron Technology.
Jia Y., 2014, Caffe: Convolutional architecture for fast feature embedding, in Proc. 22nd ACM Int. Conf. Multimed., pp. 675-678.
Krizhevsky A., Sutskever I., Hinton G. E., 2012, ImageNet classification with deep convolutional neural networks, in Proc. Adv. Neural Inf. Process. Syst., Vol. 1, pp. 1097-1105.
Jung M., 2017, A platform to analyze DDR3 DRAM's power and retention time, IEEE Des. Test, pp. 52-59.
Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S., Huang Z., Karpathy A., Khosla A., Bernstein M., Berg A., Fei-Fei L., 2015, ImageNet large scale visual recognition challenge, Int. J. Comput. Vis., pp. 211-252.
Han S., Pool J., Tran J., Dally W. J., Learning both weights and connections for efficient neural network. [Online]. Available: 02626.
Nguyen D. T., Nguyen T. N., Kim H., 2019, A high-throughput and power-efficient FPGA implementation of YOLO CNN for object detection, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., Vol. 27, No. 8, pp. 1861-1873.
Lin D. D., Talathi S. S., Annapureddy V. S., 2016, Fixed point quantization of deep convolutional networks, in Proc. Int. Conf. Mach. Learn. (ICML), pp. 2849-2858.
Rastegari M., Ordonez V., Redmon J., Farhadi A., XNOR-Net: ImageNet classification using binary convolutional neural networks. [Online].
Nguyen D. T., Lee H.-J., Kim H., Chang I.-J., 2020, An approximate DRAM design with a flexible refresh scheme for low power deep learning applications, in Int. Conf. on Electronics, Information, and Communication.