
References

[1] A. Biswas and A. P. Chandrakasan, ``Conv-RAM: An energy-efficient SRAM with embedded convolution computation for low-power CNN based machine learning applications,'' Proc. of 2018 IEEE International Solid-State Circuits Conference (ISSCC), pp. 488-490, 2018.

[2] X. Si, Y.-N. Tu, W.-H. Huang, J.-W. Su, P.-J. Lu, and J.-H. Wang, ``15.5 A 28nm 64Kb 6T SRAM computing-in-memory macro with 8b MAC operation for AI edge chips,'' Proc. of 2020 IEEE International Solid-State Circuits Conference (ISSCC), pp. 246-248, 2020.

[3] M. Kang, S. K. Gonugondla, A. Patil, and N. R. Shanbhag, ``A multi-functional in-memory inference processor using a standard 6T SRAM array,'' IEEE Journal of Solid-State Circuits, vol. 53, no. 2, pp. 642-655, Feb. 2018.

[4] H. Jia, M. Ozatay, Y. Tang, H. Valavi, R. Pathak, J. Lee, and N. Verma, ``15.1 A programmable neural-network inference accelerator based on scalable in-memory computing,'' Proc. of 2021 IEEE International Solid-State Circuits Conference (ISSCC), pp. 236-238, 2021.

[5] J. Yue, X. Feng, Y. He, Y. Huang, Y. Wang, and Z. Yuan, ``15.2 A 2.75-to-75.9TOPS/W computing-in-memory NN processor supporting set-associate block-wise zero skipping and ping-pong CIM with simultaneous computation and weight updating,'' Proc. of 2021 IEEE International Solid-State Circuits Conference (ISSCC), pp. 238-240, 2021.

[6] R. Guo, Z. Yue, X. Si, T. Hu, H. Li, and L. Tang, ``15.4 A 5.99-to-691.1TOPS/W tensor-train in-memory-computing processor using bit-level-sparsity-based optimization and variable-precision quantization,'' Proc. of 2021 IEEE International Solid-State Circuits Conference (ISSCC), pp. 242-244, 2021.

[7] E. Park, S. Yoo, and P. Vajda, ``Value-aware quantization for training and inference of neural networks,'' Proc. of the European Conference on Computer Vision (ECCV), pp. 580-595, 2018.

[8] A. Krizhevsky and G. Hinton, ``Learning multiple layers of features from tiny images,'' Technical Report, University of Toronto, Toronto, ON, USA, vol. 1, no. 4, p. 7, 2009.

[9] J.-H. Kim, J. Lee, J. Lee, J. Heo, and J.-Y. Kim, ``Z-PIM: A sparsity-aware processing-in-memory architecture with fully variable weight bit-precision for energy-efficient deep neural networks,'' IEEE Journal of Solid-State Circuits, vol. 56, no. 4, pp. 1093-1104, Apr. 2021.

[10] J. Lee, J. Kim, W. Jo, S. Kim, S. Kim, and H.-J. Yoo, ``ECIM: Exponent computing in memory for an energy efficient heterogeneous floating-point DNN training processor,'' IEEE Micro, vol. 42, no. 1, pp. 99-107, 2022.

[11] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and F.-F. Li, ``ImageNet: A large-scale hierarchical image database,'' Proc. of 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, 2009.