
(SK hynix, Icheon, 17336, Korea)



Keywords: WLP, TSV, HBM, deep learning, segmentation, chemical vapor deposition film

I. INTRODUCTION

The SiN/SiO$_{2}$ film deposition process, shown in Fig. 1, creates a passivation film that protects the device after Through Silicon Via (TSV) protrusion in the WLPKG TSV Si dry etch process. To determine whether the process is abnormal and whether the lot can proceed, a total inspection of every single wafer is performed after the process. After total inspection, lot flow is established through manual verification by line operators against defective limit samples, shown in Fig. 2, which indicate which defect modes can be waived and which wafers should be scrapped. An investigation of the SiN/SiO$_{2}$ film process defect history showed that 69% of the cases were false negatives, resulting in unnecessary simple work loss such as re-verification of defects by engineers. Moreover, the turnaround time for manufacturing HBM devices has increased, and the risk of human error due to unexamined defects has arisen during the manual verification process.

Therefore, in this study, we introduce a technique for automatically detecting defects in inspection-step images using two deep-learning segmentation models and verify their performance through experiments. The goal is to detect defective areas more accurately and reduce the over-inspection rate, further promoting unmanned inspection verification to reduce overall turnaround time, human-error risk, and unnecessary simple work loss.

Fig. 1. TSV Process Flow & SiN/SiO$_{2}$ Film Dep.
Fig. 2. Defective Limit Samples.
Fig. 3. CAM Process [4].

II. RELATED RESEARCH: DEEP LEARNING SEGMENTATION MODELS

1. Class Activation Mapping (CAM)

In a Convolutional Neural Network (CNN) classification model, the moment the last convolutional layer's output is flattened and passed to the fully connected layer, the spatial information held by the filters disappears. However, the information in the last CNN filters can be preserved by using Global Average Pooling (GAP) instead of flattening. In the CAM model, as in a normal CNN architecture, class prediction is made from the softmax output through a fully connected layer, using the weights of the last convolutional and fully connected layers. The weights of the fully connected layer become the feature weights of each class, so each class has its own feature-weight vector. As a result, once the feature maps from the last convolutional layer are multiplied by the weights from the fully connected layer, the image produced by the Class Activation Mapping (CAM) model shows which area of the image contributes to classifying it into a given class. The area of the class object expressed through the preserved filters is regarded as a defect. CAM can extract defects through unsupervised learning if sufficient accuracy of the CNN classification model is secured.

Since the CNN is known as the most powerful tool for image classification, we consider the CAM model to be both an effective defect classifier and a segmentation extractor.
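For illustration, the following is a minimal sketch of how a class activation map could be extracted from a GAP-based tf.keras classifier; the helper function, the layer-name argument, and the model structure are our assumptions for this sketch, not the exact implementation used in this work.

    # CAM extraction sketch (assumed tf.keras model with a GAP + Dense head).
    import numpy as np
    import tensorflow as tf

    def compute_cam(model, image, last_conv_name, class_idx=0):
        # Sub-model exposing the last convolutional feature maps and the output.
        cam_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(last_conv_name).output, model.output])
        feature_maps, _ = cam_model(image[np.newaxis, ...])   # (1, h, w, c)
        # Weights connecting the GAP vector to the target class unit.
        class_weights = model.layers[-1].get_weights()[0][:, class_idx]  # (c,)
        # Weighted sum of feature maps = class activation map.
        cam = tf.reduce_sum(feature_maps[0] * class_weights, axis=-1).numpy()
        cam = np.maximum(cam, 0.0)        # keep positive class evidence only
        cam /= cam.max() + 1e-8           # normalize to [0, 1]
        return cam                        # upsample to input size for overlay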

2. U-Net

U-Net is an end-to-end, fully convolutional network-based model. As shown in Fig. 4, it was named U-Net because of the 'U' shape of the network. It has shown good performance in tumor detection from medical electron microscopy images with a small amount of training data. Moreover, the segmentation images made by this network have the advantage of being precise. To collect the overall context information of the image while achieving exact localization, the U-Net architecture is symmetrical. On the left side of the architecture shown in Fig. 4, the contracting path downsamples the given image using normal CNN layers; with the contracting path, the image context can be extracted. On the right side, the expanding path localizes the given context.

As the cell image in Fig. 4 shows, medical electron microscopy images look similar to the wafer images we are dealing with. Moreover, since the U-Net paper [1] showed that 30 images with extensive augmentation successfully trained the model, which resembles our circumstances, we consider U-Net suitable for the semiconductor field. However, training a U-Net model requires supervised learning with produced label masks.
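As a rough illustration of the architecture in Fig. 4, the Keras sketch below builds a shallow U-Net for 256x256x1 inputs; the depth is reduced for brevity and the filter counts are assumptions, not the exact configuration of [1].

    # Minimal U-Net sketch for 256x256x1 inputs (depth reduced for brevity).
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    def build_unet(input_shape=(256, 256, 1)):
        inputs = layers.Input(input_shape)
        # Contracting path: extract context while downsampling.
        c1 = conv_block(inputs, 64)
        c2 = conv_block(layers.MaxPooling2D()(c1), 128)
        c3 = conv_block(layers.MaxPooling2D()(c2), 256)   # bottleneck
        # Expanding path: upsample and concatenate skip connections to localize.
        u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c3)
        c4 = conv_block(layers.Concatenate()([u2, c2]), 128)
        u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c4)
        c5 = conv_block(layers.Concatenate()([u1, c1]), 64)
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(c5)  # per-pixel defect probability
        return Model(inputs, outputs)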

Fig. 4. U-Net Architect [1].

III. EXPERIMENT

1. Data Collection and Preprocessing

For data collection, all verified defect lot lists were collected at the SiN/SiO$_{2}$ film deposition inspection step performed on the SK hynix WLPKG TSV line, and from the defect list, true negative defect images were selected by the process engineer in charge. After the defect images were selected, we converted them to grayscale and resized them to 256x256x1. To train the CNN defect classification model for CAM, we augmented a total of 22 defect images to 472 images so that the CNN model could learn various types of defect modes and to solve the imbalance problem of the dataset. Since the augmented images were made by flipping and shifting, wafer-lookalike images were produced in which the region of the defect itself was changed. Based on the domain knowledge of the process engineer in charge and the defective limit samples, ground-truth segmentation was performed after defect labeling to create a segmentation mask for each defective image, as shown in Fig. 5.
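A minimal sketch of this preprocessing step, assuming Pillow and NumPy and a hypothetical file path, might look as follows.

    # Preprocessing sketch: grayscale + resize to 256x256x1 (paths hypothetical).
    import numpy as np
    from PIL import Image

    def preprocess(path, size=(256, 256)):
        img = Image.open(path).convert("L")       # convert to grayscale
        img = img.resize(size, Image.BILINEAR)    # resize to 256x256
        arr = np.asarray(img, dtype=np.float32) / 255.0
        return arr[..., np.newaxis]               # shape (256, 256, 1)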

Fig. 5. Defective Image Preprocessing for U-Net Example.

2. Model Fitting

Prior to performing CAM, it is necessary to secure a high-performance CNN model. CAM was implemented with the ResNet architecture, which holds the highest performance record among existing image classification models. Due to the nature of P&T, defect images are very rare. To solve the data imbalance between normal and defect images, as shown in Table 1, defect images were augmented from 4 to 328 images for training, from 8 to 64 images for validation, and from 10 to 80 images for testing the CNN model.

For the U-Net model, since we only had to train on segmentation masks of defect regions, we augmented 58 defect and segmentation-mask images to 6,000 images using Keras ImageDataGenerator with the following arguments: 0.2 rotation, 0.05 width & height shift, 0.05 shear, 0.05 zoom range, horizontal & vertical flip, and fill mode, so that the location of the defect-like pattern on the wafer varied, as shown in Fig. 6.
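A sketch of this augmentation, assuming in-memory arrays defect_images and defect_masks of shape (N, 256, 256, 1) and a fill-mode value of "nearest" (the paper does not state one), could look like this; the shared seed keeps each image and its mask spatially aligned.

    # Paired image/mask augmentation sketch with the arguments listed above.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    aug_args = dict(rotation_range=0.2, width_shift_range=0.05,
                    height_shift_range=0.05, shear_range=0.05, zoom_range=0.05,
                    horizontal_flip=True, vertical_flip=True,
                    fill_mode="nearest")          # fill-mode value is an assumption
    image_gen = ImageDataGenerator(**aug_args)
    mask_gen = ImageDataGenerator(**aug_args)

    seed = 42  # identical seed -> identical random transforms for both streams
    image_flow = image_gen.flow(defect_images, batch_size=32, seed=seed)
    mask_flow = mask_gen.flow(defect_masks, batch_size=32, seed=seed)
    train_flow = zip(image_flow, mask_flow)  # yields (image_batch, mask_batch)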

After augmenting the defect images, we trained the CNN model on the training and validation datasets for 1,000 epochs. Since we lacked defect images, we used a pretrained ResNet-50 with binary classification instead of multi-class classification for the CNN model and obtained a test accuracy of 96.8%. After training, we changed the architecture of the CNN model by replacing the flatten layer with a global average pooling layer, which let us obtain a segmentation image of the defect.
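The classifier could be assembled roughly as follows; this is a sketch under the assumption that the grayscale inputs are replicated to three channels for the ImageNet-pretrained weights, which the paper does not specify.

    # Transfer-learning sketch: pretrained ResNet-50 with a GAP head for
    # binary pass/fail classification.
    import tensorflow as tf

    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=(256, 256, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)  # GAP instead of flatten, enabling CAM
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)    # normal vs. defect
    cnn = tf.keras.Model(base.input, out)
    cnn.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy"])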

For U-Net training, we used the same network structure as in the U-Net paper, shown in Fig. 4, but changed the size of the first layer since we used 256x256x1 images. Also, we used only defect images with segmentation masks for training. We used binary cross-entropy as the loss function and set 10,000 epochs. While training, the binary cross-entropy loss fell as the Mean IoU value grew, but after a certain point the loss stopped falling and the Mean IoU stopped increasing, from which we found that our model was well saturated, as shown in Fig. 7.
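Reusing the hypothetical build_unet and train_flow from the sketches above, the training setup might be expressed as follows; BinaryIoU is one way to monitor IoU in tf.keras (TF >= 2.6) and is our substitution, not necessarily the metric implementation used here.

    # Training sketch: BCE loss with an IoU metric for monitoring saturation.
    import tensorflow as tf

    unet = build_unet()                   # hypothetical builder from the sketch above
    unet.compile(optimizer="adam",
                 loss="binary_crossentropy",
                 metrics=[tf.keras.metrics.BinaryIoU(target_class_ids=[1],
                                                     threshold=0.5)])
    # train_flow is the paired image/mask generator from the augmentation sketch;
    # 6000 // 32 steps covers the augmented set once per epoch.
    history = unet.fit(train_flow, steps_per_epoch=6000 // 32, epochs=10_000)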

While training the U-Net model, we checked whether it was training well by generating segmentation images from intermediate versions of the model. At certain points where the binary cross-entropy loss and Mean IoU had plateaued, early stopping paused training; however, we found that the model had to be trained further, because the segmentation images were not clear before saturation, as shown in Fig. 8.
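One way to combine early stopping with periodic visual checks is sketched below; the callback, snapshot interval, and patience value are hypothetical, not the settings used in this work.

    # Monitoring sketch: relax EarlyStopping patience and dump a predicted
    # mask periodically so visual quality can be checked alongside the loss.
    import numpy as np
    import tensorflow as tf

    class MaskSnapshot(tf.keras.callbacks.Callback):
        """Save a predicted mask every `every` epochs for visual inspection."""
        def __init__(self, sample, every=500):
            super().__init__()
            self.sample, self.every = sample, every
        def on_epoch_end(self, epoch, logs=None):
            if (epoch + 1) % self.every == 0:
                pred = self.model.predict(self.sample[np.newaxis, ...])[0]
                tf.keras.utils.save_img(f"mask_epoch_{epoch + 1}.png", pred)

    callbacks = [
        # Large patience so a temporary plateau does not stop training too early.
        tf.keras.callbacks.EarlyStopping(monitor="loss", patience=1000,
                                         restore_best_weights=True),
        MaskSnapshot(sample=defect_images[0]),  # hypothetical held-out defect image
    ]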

To compare the segmentation image made by the CAM model with that of the U-Net model properly, we converted the CAM image to grayscale.

After both the CAM and U-Net models were trained, we could obtain segmentation images of the defect areas.

Fig. 6. Augmented Image Example.
Fig. 7. Training U-Net Model.
Table 1. Collected Defective Image Amount

|      | CAM: Train | CAM: Valid | CAM: Test | U-Net: Ground & Mask |
| Fail | 4 → 328    | 8 → 64     | 10 → 80   | Each 58 → 6000       |
| Pass | 360        | 73         | 87        | N/A                  |

(→ : Augmentation. Total: 992)

Table 2. Learning Plan

|       | Epoch  | Best Eval        | Remark         |
| CAM   | 1,000  | Test Acc = 96.8% | ResNet         |
| U-Net | 10,000 | IoU = 75.3%      | Keras.ImageGen |

IV. EXPERIMENT RESULT

As shown in Fig. 9, we found that both the CAM and U-Net models can create a segmentation image of the defect area. In the segmentation images created from both models with the same defect image, the U-Net result fits the defective area into a narrow range close to the ground-truth mask. In the CAM segmentation image, however, a wider area, including non-defective regions, was determined to be defective. This showed that both models could be used for defect detection on our inspection images.

Although CAM could only show the approximate location of defects with limited accuracy, it had the distinct advantage of using unsupervised learning, which does not require creating ground-truth masks. Since creating ground truth takes a considerable amount of time, training with only normal/defect image labels saves time in building a defect image detector.

On the other hand, although making ground-truth masks takes a lot of time, the U-Net model, having been trained with ground-truth masks paired with defect images, produced better segmentation images of the defects. Moreover, as shown in Fig. 8, not only the delamination defect area of the film but also the void area (the last input image on the right side of Fig. 8), which was normally missed in manual inspection, was well detected and rendered as a segmentation image of the defect area.

To compare the performance of the two segmentation models, we used Jaccard Similarity, which measures the similarity of the entire image, and Mean IoU, which measures the similarity of the segmented region. As shown in Table 3, both the Jaccard Similarity and Mean IoU values of the U-Net segmentation were evaluated to be higher than those of the CAM segmentation.
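Our reading of these two metrics, with whole-image pixel agreement for Jaccard Similarity and foreground-only overlap for the IoU, can be sketched as follows; this interpretation is an assumption, since the paper does not give formulas, but it is consistent with the high Jaccard and much lower IoU values in Table 3 for small defect regions.

    # Metric sketch (our interpretation) over binarized masks.
    import numpy as np

    def jaccard_similarity(pred, truth):
        """Pixel-agreement ratio over the whole image (background included)."""
        return np.mean(pred == truth)

    def defect_iou(pred, truth):
        """Intersection over union of the defect (foreground) pixels only."""
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return inter / union if union else 1.0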

Fig. 8. U-Net Segmentation Image Result.
Fig. 9. CAM vs. U-Net Segmentation Image.
Table 3. Testing Result

|                    | CAM   | U-Net |
| Jaccard Similarity | 0.960 | 0.995 |
| Mean IoU           | 0.036 | 0.370 |

V. CONCLUSION

In this paper, segmentation models for the SK hynix WLPKG TSV line SiN/SiO$_{2}$ film deposition process were constructed to detect the defective area in inspected defect images. Through the ResNet-CAM model and the U-Net model, it was possible to obtain segmentation images of defective areas from both, but we confirmed that the segmentation images from U-Net were superior to those from ResNet-CAM. In addition, it is possible to detect void areas of the SiN/SiO$_{2}$ film that were difficult to verify in past manual verification, and we confirmed that small voids, which can be verified as non-defects, were ignored in the segmented image, which will help reduce the unexamined and over-examined rates.

Moreover, it will be possible to secure additional reliability for U-Net-based segmentation by further training on defect images collected in the future and by creating a model for each defect type, and also to detect defective areas at similar inspection steps, such as photoresist inspection in the photo process. Once the model's reliability is secured through the added images, it will be possible to automate the existing manual verification method by establishing and applying a real-time verifier system.


References

[1] Ronneberger, Olaf, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation," Lecture Notes in Computer Science, Nov. 2015, pp. 234-241. https://doi.org/10.1007/978-3-319-24574-4_28
[2] Selvaraju, Ramprasaath R., et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization," 2017 IEEE International Conference on Computer Vision (ICCV), 2017. https://doi.org/10.1109/iccv.2017.74
[3] Dong, Xinghui, et al., "Small Defect Detection Using Convolutional Neural Network Features and Random Forests," Lecture Notes in Computer Science, Sept. 2019, pp. 398-412. https://doi.org/10.1007/978-3-030-11018-5_35
[4] Zhou, Bolei, et al., "Learning Deep Features for Discriminative Localization," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. https://doi.org/10.1109/cvpr.2016.319
[5] Batool, Uzma, et al., "A Systematic Review of Deep Learning for Silicon Wafer Defect Recognition," IEEE Access, vol. 9, 2021, pp. 116572-116593. https://doi.org/10.1109/access.2021.3106171
[6] Kim, Dongil, et al., "Machine Learning-Based Novelty Detection for Faulty Wafer Detection in Semiconductor Manufacturing," Expert Systems with Applications, vol. 39, no. 4, 2012, pp. 4075-4083. https://doi.org/10.1016/j.eswa.2011.09.088
[7] Devika, B., and Neetha George, "Convolutional Neural Network for Semiconductor Wafer Defect Detection," 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 2019. https://doi.org/10.1109/icccnt45670.2019.8944584
[8] Chen, Xiaoyan, et al., "A Light-Weighted CNN Model for Wafer Structural Defect Detection," IEEE Access, vol. 8, 2020, pp. 24006-24018. https://doi.org/10.1109/access.2020.2970461
[9] Kim, Tongwha, and Kamran Behdinan, "Advances in Machine Learning and Deep Learning Applications towards Wafer Map Defect Recognition and Classification: A Review," Journal of Intelligent Manufacturing, 2022. https://doi.org/10.1007/s10845-022-01994-1
Intae Whoang

Intae Whoang received the B.S. degree in Industrial Engineering from Purdue University, Indiana, United States of America, in 2017. He joined SK hynix Inc., Icheon, Korea, in 2018, where he has been working in the wafer level packaging technology team. His research interest is process optimization and development for thinfilm and dry etch processes to enhance the productivity and reduce the cost of HBM products.

Chinkwan Cho

Chinkwan Cho received the B.S. in Computer Science and the MBA from Dongguk University and Yonsei University, respectively. He joined SK hynix Inc., Icheon, in 2007 and has since designed and developed many kinds of data analysis systems for memory semiconductor product manufacturing. He is still working on finding the best solution for sensing and amplifying the micro-variance of operation data to predict evaluation results.

Jin Hee Hong

Jin Hee Hong received the B.S. degree in Material Engineering from Sungkyunkwan University, Suwon, South Korea, in 2017. She joined SK hynix Inc., Icheon, Korea, in 2018, where she has been working in the wafer level packaging technology team. Her research interest is improving the efficiency of thinfilm equipment to enhance the productivity and reduce the cost of HBM products.

Dong Hee Son

Dong Hee Son received the B.S. degree in Aerospace Engineering from Inha University, Incheon, South Korea, in 2017. He joined SK hynix Inc., Icheon, Korea, in 2018, where he has been working in the wafer level packaging technology team. His research interest is improving the efficiency of dry etch equipment to enhance the productivity and reduce the cost of HBM products.

Byung Yoon Lim

Byung Yoon Lim received the B.S. degree in Mechanical and Material Engineering from The Australian National University, Canberra, Australia, in 2014. He joined SK hynix Inc., Icheon, Korea, in 2018, where he has been working in the wafer level packaging technology team. His research interest is improving the efficiency of dry etch equipment to enhance the productivity and reduce the cost of HBM products.

Jin Pyung Kim

Jin Pyung Kim received the B.S. degree in Electrical Engineering from Chung-Ang University, Seoul, South Korea, in 2013. He joined SK hynix Inc., Icheon, Korea, in 2013, where he has been working in the advanced package development team. His research interest is yield enhancement for HBM products.

Ki-jun Bang

Ki-jun Bang received the B.S. degree in Electronic Control Engineering from Kumoh National Institute of Technology, Gumi, South Korea. He joined SK hynix Inc., Icheon, Korea, in 2006 and has been working in the wafer level packaging technology team since 2009. He is currently the research manager of the photo, dry etch, and thinfilm processes.