J. Imaging, Vol. 9, Pages 3: Auguring Fake Face Images Using Dual Input Convolution Neural Network

Figure 1. Sample images extracted from the dataset. Note that (a,b) are fake images, whereas (c,d) are real images.

Figure 2. The proposed DICNN model for auguring images as fake or real.

Figure 3. Training and validation results of the 10th fold from the proposed DICNN. (a) Training accuracy of 99.36 ± 0.62% and validation accuracy of 99.30 ± 0.94%. (b) Training loss of 0.19 ± 0.31 and validation loss of 0.092 ± 0.13.

Figure 4. SHAP results for the fake and real categories: (a) SHAP explanation for a fake image; (b) SHAP explanation for a real image.
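
The per-pixel attributions in Figure 4 are SHAP explanations of the trained classifier. The paper does not publish its explainability code, so the following is only a minimal sketch of how such overlays can be generated; the names `model`, `x_background`, and `x_explain`, the choice of `GradientExplainer`, and the background size are all assumptions.

```python
# Hedged sketch of producing SHAP overlays like Figure 4. `model` is a trained
# dual-input DICNN; `x_background` and `x_explain` are hypothetical preprocessed
# image arrays of shape (N, 224, 224, 3). Both branches receive the same image.
import shap

# A small background sample estimates the expected model output.
background = [x_background[:50], x_background[:50]]
explainer = shap.GradientExplainer(model, background)

# Per-pixel attributions for a few test images.
shap_values = explainer.shap_values([x_explain[:4], x_explain[:4]])

# The nesting of the returned list (per model input and, in some shap versions,
# per output class) varies; here we plot the attributions on the first input.
shap.image_plot(shap_values[0], x_explain[:4])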

Figure 5. Confusion matrix for the test split of the 10th fold.

Table 1. Summary of the DICNN architecture.

| Layer Name | Shape of Output | Param # | Connected to |
|---|---|---|---|
| Input 1 | (None, 224, 224, 3) | 0 | - |
| Input 2 | (None, 224, 224, 3) | 0 | - |
| Conv2D | (None, 222, 222, 32) | 896 | Input 1 |
| Flatten 1 | (None, 150,528) | 0 | Input 2 |
| Flatten 2 | (None, 1,577,088) | 0 | Conv2D |
| Concatenate Layer | (None, 1,727,616) | 0 | [Flatten 1, Flatten 2] |
| Dense 1 | (None, 224) | 386,986,208 | Concatenate Layer |
| Dropout | (None, 224) | 0 | Dense 1 |
| Dense 2 | (None, 2) | 450 | Dropout |

Total params: 386,987,554. Trainable params: 386,987,554. Non-trainable params: 0.
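
As a cross-check of Table 1, the dual-input graph can be written down in a few lines of Keras. This is a sketch reconstructed from the reported shapes and parameter counts, not the authors' code: the 3 × 3 kernel follows from the 224 → 222 spatial reduction and the 896 convolution parameters, while the ReLU/softmax activations and the 0.5 dropout rate are assumptions the table does not specify.

```python
# Minimal Keras reconstruction of Table 1. Layer types, shapes, and parameter
# counts follow the table; activations and dropout rate are assumptions.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Concatenate, Dense, Dropout, Flatten

inp1 = Input(shape=(224, 224, 3), name="input_1")
inp2 = Input(shape=(224, 224, 3), name="input_2")

x = Conv2D(32, (3, 3), activation="relu")(inp1)  # -> (222, 222, 32), 896 params
flat2 = Flatten()(x)                             # -> 1,577,088 features
flat1 = Flatten()(inp2)                          # -> 150,528 features

merged = Concatenate()([flat1, flat2])           # -> 1,727,616 features
dense1 = Dense(224, activation="relu")(merged)   # 386,986,208 params
drop = Dropout(0.5)(dense1)                      # rate not reported; assumed 0.5
out = Dense(2, activation="softmax")(drop)       # fake vs. real, 450 params

model = Model(inputs=[inp1, inp2], outputs=out)
model.summary()  # should report 386,987,554 trainable params, matching Table 1
```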

Table 2. Per-fold results for the DICNN model with K = 10-fold cross-validation: training accuracy (TA), training loss (TL), validation accuracy (VA), validation loss (VL), test accuracy (TsA), test loss (TsL), and number of bad predictions (BP). Accuracies are given in %.

| Fold | TA | TL | VA | VL | TsA | TsL | BP |
|---|---|---|---|---|---|---|---|
| K1 | 99.90 | 0.0036 | 100.00 | 9.78 × 10⁻⁵ | 99.00 | 0.04 | 0 |
| K2 | 97.99 | 0.6236 | 98.45 | 0.2445 | 100.00 | 0.01 | 2 |
| K3 | 99.90 | 7.84 × 10⁻⁴ | 100.00 | 2.11 × 10⁻⁵ | 99.00 | 0.09 | 0 |
| K4 | 99.61 | 0.0082 | 100.00 | 0.0036 | 97.67 | 0.04 | 0 |
| K5 | 99.32 | 0.9420 | 100.00 | 1.07 × 10⁻⁵ | 99.22 | 0.03 | 0 |
| K6 | 98.84 | 0.1851 | 97.67 | 0.3579 | 99.11 | 0.62 | 3 |
| K7 | 98.74 | 0.1261 | 99.22 | 0.0632 | 99.22 | 0.07 | 1 |
| K8 | 99.61 | 0.0122 | 100.00 | 0.0014 | 99.22 | 0.01 | 0 |
| K9 | 99.71 | 0.0254 | 97.67 | 0.2454 | 98.45 | 0.30 | 3 |
| K10 | 100.00 | 0.0037 | 100.00 | 0.0039 | 100.00 | 0.01 | 0 |
| μ ± σ | 99.36 ± 0.62 | 0.19 ± 0.31 | 99.30 ± 0.94 | 0.092 ± 0.13 | 99.08 ± 0.64 | 0.122 ± 0.18 | 0.9 ± 1.22 |
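
The per-fold figures above come from 10-fold cross-validation. As a rough sketch of how such a table can be collected, the snippet below runs a stratified 10-fold loop with scikit-learn and Keras. The data arrays `X`/`y`, the `build_dicnn()` helper, the Adam optimiser, and the 10% validation split are illustrative assumptions; only the fold count (10) and the 20-epoch budget (per the Table 3 caption) are taken from the paper.

```python
# Hedged sketch of the K = 10-fold protocol behind Table 2. `X` holds images of
# shape (N, 224, 224, 3) and `y` integer labels; both are hypothetical names.
import numpy as np
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_stats = []
for train_idx, test_idx in skf.split(X, y):
    model = build_dicnn()  # fresh weights each fold (see the Table 1 sketch)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Both branches of the dual-input model receive the same image.
    hist = model.fit([X[train_idx], X[train_idx]], y[train_idx],
                     validation_split=0.1, epochs=20, verbose=0)
    ts_loss, ts_acc = model.evaluate([X[test_idx], X[test_idx]], y[test_idx],
                                     verbose=0)
    preds = np.argmax(model.predict([X[test_idx], X[test_idx]]), axis=1)
    bp = int((preds != y[test_idx]).sum())  # the BP column
    fold_stats.append([hist.history["accuracy"][-1], hist.history["loss"][-1],
                       hist.history["val_accuracy"][-1],
                       hist.history["val_loss"][-1], ts_acc, ts_loss, bp])

# The μ ± σ row: mean and standard deviation over the ten folds.
mu, sigma = np.mean(fold_stats, axis=0), np.std(fold_stats, axis=0)
```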

Table 3. K = 10-fold results (after 20 epochs, in %) for specificity (Spec), sensitivity (Sen), precision (Pre), F1 score (Fsc), and recall (Rec).

| Fold | Class | Spec | Sen | Pre | Fsc | Rec |
|---|---|---|---|---|---|---|
| K1 | Fake | 99.34 | 100.00 | 98.31 | 99.98 | 99.15 |
| K1 | Real | 100.00 | 99.34 | 98.56 | 98.78 | 99.56 |
| K2 | Fake | 97.26 | 100.00 | 96.55 | 98.25 | 100.00 |
| K2 | Real | 100.00 | 97.26 | 100.00 | 98.61 | 97.26 |
| K3 | Fake | 100.00 | 100.00 | 100.00 | 99.12 | 99.34 |
| K3 | Real | 100.00 | 100.00 | 99.54 | 98.67 | 99.76 |
| K4 | Fake | 99.50 | 100.00 | 98.12 | 99.34 | 99.89 |
| K4 | Real | 100.00 | 99.50 | 98.90 | 99.38 | 98.86 |
| K5 | Fake | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| K5 | Real | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| K6 | Fake | 96.25 | 100.00 | 100.00 | 98.09 | 96.25 |
| K6 | Real | 100.00 | 96.25 | 94.23 | 97.03 | 100.00 |
| K7 | Fake | 96.10 | 99.25 | 100.00 | 99.20 | 97.34 |
| K7 | Real | 99.25 | 96.10 | 95.32 | 98.30 | 99.89 |
| K8 | Fake | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| K8 | Real | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| K9 | Fake | 95.71 | 100.00 | 100.00 | 97.81 | 95.71 |
| K9 | Real | 100.00 | 95.71 | 95.16 | 97.52 | 100.00 |
| K10 | Fake | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| K10 | Real | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| μ ± σ | Fake | 98.41 ± 1.75 | 99.93 ± 0.23 | 99.23 ± 1.15 | 99.18 ± 0.81 | 98.77 ± 1.59 |
| μ ± σ | Real | 99.93 ± 0.23 | 98.41 ± 1.75 | 98.17 ± 2.20 | 98.83 ± 0.98 | 99.53 ± 0.83 |
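
These per-class scores can all be derived from a binary confusion matrix like the one in Figure 5. The helper below is a small illustrative sketch, not the authors' code; the `y_true`/`y_pred` arrays and the 0 = fake, 1 = real encoding are assumptions. Note that under the standard definitions, sensitivity and recall coincide for a given positive class.

```python
# Illustrative sketch: per-class Spec/Sen/Pre/Fsc/Rec from binary predictions.
# `y_true` and `y_pred` are hypothetical integer label arrays (0 = fake, 1 = real).
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, positive):
    """Compute the Table 3 metrics treating class `positive` as positive."""
    tn, fp, fn, tp = confusion_matrix(
        np.asarray(y_true) == positive, np.asarray(y_pred) == positive
    ).ravel()
    spec = tn / (tn + fp)               # specificity: true-negative rate
    sen = rec = tp / (tp + fn)          # sensitivity == recall by definition
    pre = tp / (tp + fp)                # precision
    fsc = 2 * pre * rec / (pre + rec)   # F1 score
    return spec, sen, pre, fsc, rec

# e.g., metrics for the fake class on one fold's test split:
# spec, sen, pre, fsc, rec = per_class_metrics(y_true, y_pred, positive=0)
```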

Table 4. Comparison of the proposed DICNN model with other state-of-the-art methods. 'DL' and 'Acc' stand for deep learning and accuracy, respectively.

| Ref | Category | Method | Dataset | Performance (%) | XAI |
|---|---|---|---|---|---|
| [20] | DL | Xception Network | 150,000 images | Acc: 83.99% | No |
| [21] | DL | CNN | 60,000 images | Acc: 97.97% | No |
| [22] | DL | Dual-channel CNN | 9000 images | Acc: 100% | No |
| [23] | DL | CNN | 321,378 face images | Acc: 92% | No |
| [27] | DL | Naive classifiers | Faces-HQ | Acc: 100% | No |
| [29] | DL | VGG | 10,000 real and fake images | Acc: 99.9% | No |
| [29] | DL | ResNet | 10,000 real and fake images | Acc: 94.75% | No |
| [30] | DL | Two Stream CNN | 30,000 images | Acc: 88.80% | No |
| [32] | Physical | Corneal specular highlight | 1000 images | Acc: 94% | No |
| [33] | Human | Visual | 400 images | Acc: 50-60% | No |
| Ours | DL | DICNN | 1289 images | Acc: 99.36 ± 0.62 | SHAP |
