Model | Input size (pixels) | Inference time (ms) | GPU memory (GB) | Batch size |
---|---|---|---|---|
Ours | 270\(\times\)270 | 45.6 | 8.4 | 16 |
HoVer-Net [53] | 270\(\times\)270 | 82.3 | 11.2 | 16 |
Mask2Former [54] | 270\(\times\)270 | 63.7 | 9.8 | 16 |
Triple U-Net [55] | 270\(\times\)270 | 58.9 | 10.5 | 16 |
- All measurements were conducted on a single NVIDIA RTX 4090 GPU. Inference time is averaged over 1000 runs. GPU memory consumption includes model parameters and intermediate features.
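The exact benchmarking script is not provided here; the following is a minimal sketch of how such measurements are typically taken in PyTorch, assuming a CUDA-capable model, a batch of 16 random 270\(\times\)270 inputs, and a warm-up phase before timing. The function name `benchmark` and all parameter values are illustrative, not the authors' code.

```python
import time
import torch

def benchmark(model, input_size=(16, 3, 270, 270), n_warmup=50, n_runs=1000):
    """Estimate average inference time (ms) and peak GPU memory (GB).

    Generic sketch: the warm-up count, input shape, and memory accounting
    are assumptions, not the paper's protocol.
    """
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)

    torch.cuda.reset_peak_memory_stats(device)

    with torch.no_grad():
        # Warm-up so kernel compilation and caching do not skew the timings.
        for _ in range(n_warmup):
            model(x)
        torch.cuda.synchronize(device)

        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        # Wait for all queued kernels to finish before stopping the clock.
        torch.cuda.synchronize(device)
        elapsed = time.perf_counter() - start

    avg_ms = elapsed / n_runs * 1000.0
    # Peak allocated memory covers parameters plus intermediate activations.
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    return avg_ms, peak_gb
```

Explicit `torch.cuda.synchronize` calls matter here because GPU kernels launch asynchronously; without them, wall-clock timing would only measure kernel launch overhead rather than actual inference time.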