Table 2 The performance of various models

From: Denoised recurrence label-based deep learning for prediction of postoperative recurrence risk and sorafenib response in HCC

| Model | 1/4 | 1/3 | 1/2 | 2/3 | 3/4 | 1/1 | Average | Parameters |
|---|---|---|---|---|---|---|---|---|
| ResNet18 | 0.999 | 0.946 | 0.926 | 0.885 | 0.873 | 0.724 | 0.892 | 44.60 M |
| DenseNet121 | 0.998 | 0.959 | 0.915 | 0.887 | 0.881 | 0.700 | 0.890 | 26.53 M |
| Swin-Transformer | 0.997 | 0.953 | 0.938 | 0.900 | 0.897 | 0.704 | 0.898 | 334.82 M |
| CNN-SASMᵃ | 0.997 | 0.975 | 0.951 | 0.916 | 0.901 | 0.737 | 0.913 | 46.94 M |
| CNN-SelfAttnᵃ | 0.998 | 0.974 | 0.938 | 0.922 | 0.897 | 0.733 | 0.909 | 46.94 M |
| CNN-SelfAttn-Attnᵃ | 0.997 | 0.970 | 0.941 | 0.913 | 0.902 | 0.743 | 0.911 | 47.06 M |
| CNN-CSAᵃ | 0.998 | 0.972 | 0.949 | 0.909 | 0.898 | 0.743 | 0.912 | 42.92 M |
| CNN-SSAᵃ | 0.997 | 0.965 | 0.932 | 0.887 | 0.888 | 0.718 | 0.898 | 46.90 M |
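The "Parameters" column reports trainable parameter counts in millions. As an illustration only (not the authors' code), the sketch below shows how such counts are typically obtained in PyTorch. Note that the stock torchvision backbones are smaller than the variants tabulated above (a plain torchvision ResNet18 has about 11.69 M parameters versus the 44.60 M listed), so the models in the table presumably include additional task-specific modules.

```python
# Illustrative only: how per-model trainable-parameter counts like those in
# the "Parameters" column are typically computed. The baselines below are the
# stock torchvision versions, not the (larger) variants used in the paper.
import torch
import torchvision.models as models

def count_params(model: torch.nn.Module) -> str:
    """Format the trainable parameter count in millions, e.g. '11.69 M'."""
    n = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return f"{n / 1e6:.2f} M"

print("ResNet18:   ", count_params(models.resnet18()))     # ~11.69 M
print("DenseNet121:", count_params(models.densenet121()))  # ~7.98 M
```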

ᵃ Denotes our proposed models in this study, all of which combine a CNN with self-attention mechanisms. CNN-SASM is the pathological feature extractor. CNN-SelfAttn was constructed by removing the softmax layer from CNN-SASM, and CNN-SelfAttn-Attn by replacing that softmax layer with a conventional attention layer; both variants were used to examine the role of the softmax layer. CNN-CSA is CNN-SASM without the SSA module, and CNN-SSA is CNN-SASM without the CSA module.
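To make the softmax ablation concrete, here is a minimal, hypothetical sketch (not the published CNN-SASM architecture; the layer sizes, pooling, and classification head are all assumptions) of a CNN feature extractor followed by a self-attention block whose softmax normalization can be toggled off, mirroring the CNN-SASM versus CNN-SelfAttn comparison:

```python
# Hypothetical sketch of the footnote's ablation: a toy CNN backbone feeding
# a self-attention block whose softmax can be disabled (CNN-SASM-style vs.
# CNN-SelfAttn-style). The CSA/SSA modules of the full model are not shown.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim: int, use_softmax: bool = True):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)  # joint query/key/value projection
        self.use_softmax = use_softmax      # False mimics the CNN-SelfAttn ablation
        self.scale = dim ** -0.5

    def forward(self, x):                   # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = (q @ k.transpose(-2, -1)) * self.scale
        if self.use_softmax:                # CNN-SASM-style keeps the softmax
            scores = scores.softmax(dim=-1)
        return scores @ v

class CNNSelfAttn(nn.Module):
    def __init__(self, dim: int = 64, use_softmax: bool = True):
        super().__init__()
        self.backbone = nn.Sequential(      # toy CNN feature extractor
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = SelfAttention(dim, use_softmax)
        self.head = nn.Linear(dim, 2)       # binary recurrence-risk output

    def forward(self, x):
        f = self.backbone(x)                  # (B, C, H, W) feature map
        tokens = f.flatten(2).transpose(1, 2) # (B, H*W, C) token sequence
        pooled = self.attn(tokens).mean(dim=1)
        return self.head(pooled)

model = CNNSelfAttn(use_softmax=True)       # softmax on: CNN-SASM-style
logits = model(torch.randn(1, 3, 64, 64))
```

Setting `use_softmax=False` removes the softmax normalization of the attention scores, which is the spirit of the CNN-SelfAttn variant; substituting a conventional attention layer in its place would correspond to CNN-SelfAttn-Attn.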