
Deep vessel segmentation by learning graphical connectivity

Shin S.Y.     Lee S.     Yun I.D.     Lee K.M.

Background

Observation of blood vessels is crucial in the diagnosis and intervention of many diseases. Clinicians have mainly relied on manual inspection, which can be operator-dependent and time-consuming. Over the years, the demand for efficiency has led to the development of numerous methods for automatic vessel segmentation.

Our Contribution: VGN

We present a novel CNN architecture, the vessel graph network (VGN), that jointly exploits the global structure of vessel shape together with local appearances. The VGN comprises three components: i) a CNN module for generating pixelwise features and vessel probabilities, ii) a GNN module for extracting features that reflect the vascular connectivity, and iii) an inference module that produces the final segmentation. The input graph for the GNN is generated in an additional graph construction module.
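The PyTorch sketch below shows how these modules could fit together. It is a minimal, self-contained approximation, not the paper's implementation: the layer sizes, the grid-based vertex sampling with spacing delta, the 4-neighborhood edge rule, and the single mean-aggregation GNN step are all simplifying assumptions made for illustration.

```python
# Minimal sketch of the three VGN modules (CNN, GNN, inference) plus graph
# construction. Illustrative assumptions throughout, not the paper's exact
# configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNModule(nn.Module):
    """Generates pixelwise features and a vessel probability map."""

    def __init__(self, feat_ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.prob_head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, img):
        feat = self.features(img)                     # (B, C, H, W)
        prob = torch.sigmoid(self.prob_head(feat))    # (B, 1, H, W)
        return feat, prob


def build_graph(prob, delta=8):
    """Graph construction: one vertex per delta x delta cell, placed at the
    cell's highest-probability pixel; neighboring cells are connected.
    (The paper uses a more elaborate, geodesic-based edge rule.)"""
    B, _, H, W = prob.shape
    gh, gw = H // delta, W // delta
    cells = prob[:, 0, :gh * delta, :gw * delta]
    cells = cells.reshape(B, gh, delta, gw, delta).permute(0, 1, 3, 2, 4)
    idx = cells.reshape(B, gh, gw, delta * delta).argmax(dim=-1)
    ys = torch.arange(gh).view(1, gh, 1) * delta + idx // delta
    xs = torch.arange(gw).view(1, 1, gw) * delta + idx % delta
    coords = torch.stack([ys, xs], dim=-1).reshape(B, gh * gw, 2)
    adj = torch.zeros(gh * gw, gh * gw)               # shared across the batch
    for r in range(gh):
        for c in range(gw):
            v = r * gw + c
            if c + 1 < gw:
                adj[v, v + 1] = adj[v + 1, v] = 1.0
            if r + 1 < gh:
                adj[v, v + gw] = adj[v + gw, v] = 1.0
    return coords, adj


class GNNModule(nn.Module):
    """One step of mean-aggregation message passing over the vessel graph."""

    def __init__(self, feat_ch=32):
        super().__init__()
        self.update = nn.Linear(2 * feat_ch, feat_ch)

    def forward(self, vert_feat, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        agg = adj @ vert_feat / deg                   # mean over neighbors
        return F.relu(self.update(torch.cat([vert_feat, agg], dim=-1)))


class VGN(nn.Module):
    """CNN + GNN + inference: fuses local appearance with graph connectivity."""

    def __init__(self, feat_ch=32, delta=8):
        super().__init__()
        self.delta = delta
        self.cnn = CNNModule(feat_ch)
        self.gnn = GNNModule(feat_ch)
        self.infer = nn.Conv2d(2 * feat_ch + 1, 1, 1)

    def forward(self, img):
        feat, prob = self.cnn(img)
        coords, adj = build_graph(prob.detach(), self.delta)
        B, C, H, W = feat.shape
        gnn_maps = []
        for b in range(B):
            ys, xs = coords[b, :, 0], coords[b, :, 1]
            vfeat = self.gnn(feat[b, :, ys, xs].t(), adj)  # (N, C)
            gmap = torch.zeros(C, H, W, device=feat.device)
            gmap[:, ys, xs] = vfeat.t()               # scatter back to pixels
            gnn_maps.append(gmap)
        fused = torch.cat([feat, torch.stack(gnn_maps), prob], dim=1)
        return torch.sigmoid(self.infer(fused))       # final vessel map
```

As a smoke test, VGN()(torch.randn(1, 3, 64, 64)) returns a (1, 1, 64, 64) vessel probability map; with delta = 8 the graph has an 8 x 8 grid of 64 vertices.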

Figure 1. Motivation of the proposed method. In the presented example results, VGN clearly enhances the detection of vessels with weak contrast by considering the vessel graph structure, compared to a CNN-only method. The resulting vessel probability images are inverted for better visualization.

Figure 2. Overall network architecture of VGN comprising the CNN, GNN, and inference modules. The CNN module generates pixelwise features and corresponding vessel probabilities, whereas the GNN module generates features that reflect the vascular connectivity.

Figure 3. Detailed network architecture for the proposed VGN.

Results

We perform extensive comparative evaluations on four retinal image datasets and a coronary artery X-ray angiography dataset, showing that the proposed method outperforms or is on par with current state-of-the-art methods in terms of average precision and the area under the receiver operating characteristic curve.
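Both headline metrics are standard and can be computed, for instance, with scikit-learn. The arrays below are illustrative placeholders standing in for flattened per-pixel labels and predicted vessel probabilities:

```python
# Hypothetical example: computing AP and ROC-AUC for a vessel probability map.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])                # per-pixel vessel labels
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])   # predicted probabilities

ap = average_precision_score(y_true, y_prob)   # area under the PR curve
auc = roc_auc_score(y_true, y_prob)            # area under the ROC curve
print(f"AP = {ap:.3f}, AUC = {auc:.3f}")
```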

Figure 4. Average precision (AP) scored by the proposed VGN as a function of the vertex sampling sparsity δ for the DRIVE, STARE, CHASE_DB1, HRF, and CA-XRA datasets.

Figure 5. Precision-recall curves of the proposed VGN and comparable methods on the DRIVE, STARE, CHASE_DB1, HRF, and CA-XRA datasets. Average precision (AP) and maximum F1 scores, in percentages (%), are also given in the legends.

Table 1. Accuracy (Acc), specificity (Sp), sensitivity (Se), and the area under the receiver operating characteristic (ROC) curve (AUC) of the proposed VGN and comparable methods on the DRIVE, STARE, CHASE_DB1, HRF, and CA-XRA datasets. P-values obtained by conducting a paired t-test between the AUC values of the proposed VGN and each comparable method are also presented to indicate the statistical significance of improvements.
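Such p-values come from pairing the per-image AUC values of VGN with those of a baseline on the same test images. A minimal SciPy sketch, using made-up placeholder numbers rather than results from the paper:

```python
# Paired t-test between per-image AUCs of VGN and a baseline method.
# The numbers below are made-up placeholders, not results from the paper.
from scipy.stats import ttest_rel

auc_vgn      = [0.981, 0.975, 0.979, 0.983, 0.977]   # one AUC per test image
auc_baseline = [0.976, 0.971, 0.974, 0.980, 0.972]

t_stat, p_value = ttest_rel(auc_vgn, auc_baseline)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")        # small p => significant
```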

Figure 6. Qualitative results for two representative samples from the DRIVE dataset; each sample spans a pair of rows.

Table 2. APs (% points) scored by the proposed VGN for the three subsets of the HRF test set, each of which contains 10 images from patients who are healthy (H), have diabetic retinopathy (DR), or are glaucomatous (G).

Table 3. Possible types of differences for vessel segmentation mask comparison.

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) funded by the Korean Government (MSIT and MOE) under Grants NRF-2017R1A2B2011862, NRF-2019R1F1A1063656, and NRF-2019R1A2C1085113.

Paper

Deep vessel segmentation by learning graphical connectivity – Medical Image Analysis, Volume 58, December 2019, 101556.

Shin S.Y.; Lee S.; Yun I.D.; Lee K.M.

Elsevier

[code] [link] [pdf] [BibTeX]


ROOM 504-2, MIRAE HALL, KOOKMIN UNIVERSITY,

77 JEONGNEUNG-RO, SEONGBUK-GU, SEOUL, 02707, KOREA
