Table 2 Success rates of all twelve scoring functions on eight different datasets. The first four methods are DL-based, and the rest are classical methods (details in the Methods section)

From: A comprehensive survey of scoring functions for protein docking models

| Dataset | Method | Top1 | Top10 | Top25 | Top100 | Top200 |
| --- | --- | --- | --- | --- | --- | --- |
| CAPRI Score v2022 Difficult (39,130 models) | DeepRank-GNN | 14 | 35 | 33 | 65 | 78 |
| | GNN-DOVE | 0 | 7 | 28 | 42 | 57 |
| | dMaSIF | 14 | 32 | 46 | 78 | 85 |
| | PIsToN | 17 | 28 | 46 | 71 | 78 |
| | FireDock | 0 | 0 | 3 | 28 | 46 |
| | AP-PISA | 0 | 24 | 17 | 46 | 57 |
| | CP-PIE | 7 | 25 | 39 | 60 | 85 |
| | PyDock | 0 | 7 | 7 | 35 | 46 |
| | ZRANK2 | 0 | 14 | 17 | 46 | 53 |
| | RosettaDock | 3 | 10 | 25 | 46 | 60 |
| | SIPPER | 3 | 14 | 35 | 53 | 60 |
| | HADDOCK | 3 | 14 | 21 | 46 | 60 |
| CAPRI Score v2022 Easy (41,191 models) | DeepRank-GNN | 35 | 79 | 82 | 92 | 100 |
| | GNN-DOVE | 15 | 48 | 56 | 76 | 84 |
| | dMaSIF | 51 | 71 | 89 | 97 | 100 |
| | PIsToN | 41 | 82 | 87 | 94 | 97 |
| | FireDock | 5 | 23 | 35 | 53 | 69 |
| | AP-PISA | 15 | 33 | 51 | 79 | 87 |
| | CP-PIE | 35 | 64 | 74 | 97 | 97 |
| | PyDock | 5 | 17 | 25 | 58 | 76 |
| | ZRANK2 | 20 | 41 | 61 | 87 | 97 |
| | RosettaDock | 23 | 43 | 64 | 87 | 97 |
| | SIPPER | 25 | 58 | 74 | 92 | 100 |
| | HADDOCK | 15 | 69 | 71 | 84 | 97 |
| CAPRI Score Refined (16,666 models) | DeepRank-GNN | 23 | 46 | 53 | 69 | 84 |
| | GNN-DOVE | 15 | 61 | 69 | 76 | 100 |
| | dMaSIF | 15 | 38 | 53 | 76 | 92 |
| | PIsToN | 38 | 69 | 76 | 76 | 100 |
| | FireDock | 0 | 0 | 7 | 23 | 46 |
| | AP-PISA | 0 | 0 | 7 | 46 | 53 |
| | CP-PIE | 23 | 46 | 69 | 69 | 84 |
| | PyDock | 0 | 0 | 0 | 7 | 30 |
| | ZRANK2 | 7 | 38 | 53 | 61 | 61 |
| | RosettaDock | 7 | 38 | 53 | 69 | 76 |
| | SIPPER | 7 | 46 | 53 | 69 | 76 |
| | HADDOCK | 7 | 23 | 53 | 69 | 76 |
| BM4 (7,600 models) | DeepRank-GNN | 21 | 73 | 73 | 89 | 94 |
| | GNN-DOVE | 21 | 68 | 84 | 100 | 100 |
| | dMaSIF | 47 | 68 | 89 | 94 | 100 |
| | PIsToN | 54 | 89 | 89 | 94 | 100 |
| | FireDock | 26 | 84 | 89 | 89 | 94 |
| | AP-PISA | 26 | 78 | 89 | 94 | 94 |
| | CP-PIE | 52 | 78 | 84 | 100 | 100 |
| | PyDock | 15 | 63 | 84 | 94 | 94 |
| | ZRANK2 | 42 | 78 | 89 | 94 | 94 |
| | RosettaDock | 42 | 73 | 89 | 94 | 94 |
| | SIPPER | 47 | 63 | 68 | 78 | 94 |
| | HADDOCK | 26 | 89 | 94 | 94 | 100 |
| BM5 (7,500 models) | DeepRank-GNN | 93 | 100 | 100 | 100 | 100 |
| | GNN-DOVE | 6 | 26 | 33 | 53 | 73 |
| | dMaSIF | 40 | 80 | 100 | 100 | 100 |
| | PIsToN | 66 | 93 | 100 | 100 | 100 |
| | FireDock | 0 | 0 | 0 | 6 | 13 |
| | AP-PISA | 0 | 6 | 6 | 13 | 26 |
| | CP-PIE | 93 | 100 | 100 | 100 | 100 |
| | PyDock | 0 | 0 | 0 | 6 | 6 |
| | ZRANK2 | 0 | 0 | 6 | 6 | 20 |
| | RosettaDock | 13 | 46 | 53 | 66 | 80 |
| | SIPPER | 60 | 93 | 93 | 100 | 100 |
| | HADDOCK | 0 | 0 | 0 | 6 | 26 |
| Dockground (6,725 models) | DeepRank-GNN | 37 | 53 | 72 | 100 | 100 |
| | GNN-DOVE | 70 | 91 | 94 | 100 | 100 |
| | dMaSIF | 34 | 75 | 87 | 100 | 100 |
| | PIsToN | 15 | 55 | 81 | 100 | 100 |
| | FireDock | 1 | 18 | 48 | 98 | 100 |
| | AP-PISA | 1 | 6 | 17 | 98 | 100 |
| | CP-PIE | 22 | 56 | 75 | 100 | 100 |
| | PyDock | 0 | 8 | 20 | 93 | 100 |
| | ZRANK2 | 5 | 25 | 53 | 100 | 100 |
| | RosettaDock | 3 | 31 | 56 | 96 | 100 |
| | SIPPER | 43 | 72 | 89 | 100 | 100 |
| | HADDOCK | 10 | 44 | 65 | 100 | 100 |
| PDB 2023 (5,300 models) | DeepRank-GNN | 36 | 48 | 67 | 100 | 100 |
| | GNN-DOVE | 3 | 19 | 34 | 100 | 100 |
| | dMaSIF | 51 | 84 | 94 | 100 | 100 |
| | PIsToN | 88 | 96 | 98 | 100 | 100 |
| | FireDock | 0 | 3 | 11 | 100 | 100 |
| | AP-PISA | 0 | 1 | 5 | 100 | 100 |
| | CP-PIE | 76 | 92 | 96 | 100 | 100 |
| | PyDock | 0 | 0 | 3 | 100 | 100 |
| | ZRANK2 | 0 | 11 | 23 | 100 | 100 |
| | RosettaDock | 3 | 17 | 32 | 100 | 100 |
| | SIPPER | 50 | 75 | 84 | 100 | 100 |
| | HADDOCK | 0 | 1 | 1 | 100 | 100 |

  1. Bold values indicate the best value for that column
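For readers who want to reproduce numbers of this kind, the sketch below shows how a Top-N success rate is conventionally computed in docking scoring surveys: a target counts as a success at Top N if at least one acceptable-or-better model (by CAPRI quality criteria) appears among the N best-scored models for that target, and the rate is the percentage of targets for which this holds. This is a minimal illustration under that assumed convention; the field names, the `is_acceptable` flag, and the lower-is-better score ordering are illustrative choices, not definitions taken from this paper's Methods section.

```python
# Minimal sketch of a Top-N success-rate calculation (assumed convention, not
# the paper's exact implementation). Each model record carries a target ID, a
# score from the scoring function (lower = better here), and a boolean flag
# marking CAPRI acceptable/medium/high quality.
from collections import defaultdict


def top_n_success_rate(models, n):
    """Return the percentage of targets with at least one acceptable model
    among the n best-scored models.

    models: iterable of dicts with keys 'target', 'score', 'is_acceptable'.
    """
    by_target = defaultdict(list)
    for m in models:
        by_target[m["target"]].append(m)

    hits = 0
    for pool in by_target.values():
        ranked = sorted(pool, key=lambda m: m["score"])  # best score first
        if any(m["is_acceptable"] for m in ranked[:n]):
            hits += 1
    return 100.0 * hits / len(by_target)


# Example: the cutoffs reported in Table 2
# for n in (1, 10, 25, 100, 200):
#     print(n, top_n_success_rate(all_models, n))
```

If a scoring function assigns higher scores to better models, the sort order in `ranked` would simply be reversed; the success criterion itself is unchanged.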