Rules

Participants

The challenge is open to participants from both industry and academia.

Competition

The goal of the competition is to provide a completely automated algorithm that detects each embolus in the test computed tomography pulmonary angiograms (CTPAs). After registering on this site, a team can download the set of CTPAs. The CTPAs are separated into two batches: 20 scans for training and another 20 scans for testing. A reference standard is provided for the training batch; the organizers keep a similar reference standard for the testing batch. Each team then submits its detections of pulmonary emboli (PEs) in the testing batch, together with a description of its algorithm.

Algorithms are expected to automatically mark the position of the emboli by placing markers within them. No user interaction is allowed. Each marker may be accompanied by a confidence score. If several markers fall within the same embolus, only the marker with the highest confidence score will be taken into account; the rest will be disregarded.

The markers should be given in a text file with one line per detected embolus. The format of each line should be:

Scan X Y Z Confidence

where X, Y, and Z are given in pixel coordinates. Participants are also required to provide a threshold value for the confidence score. This threshold will be used to compute the evaluation metrics.
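For illustration only, a few lines of such a file might look as follows (the scan identifiers, coordinates, and confidence values below are hypothetical):

TestScan01 312 245 87 0.92
TestScan01 158 301 42 0.47
TestScan05 201 276 63 0.88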

By the deadline, participants are requested to submit a file in the required format for evaluation and an abstract describing their algorithm. Participants are encouraged, but not required, to share their code as open source.

Evaluation Metric

A robust and meaningful evaluation metric is key to objectively comparing the results obtained with different algorithms. In this challenge we will evaluate the proposed algorithms by computing their sensitivity, average number of false positives per scan, and positive predictive value. The average and standard deviation of the distance from false detections to the closest embolus will also be reported. We will also plot the precision-recall curve associated with the findings of each algorithm.

Sensitivity is defined as the number of emboli detected divided by the total number of emboli in the test scans. The average number of false positives per scan is computed by adding all false positive detections and dividing by the number of scans in the testing set. Positive predictive value is measured as the ratio of true positive detections to all detections. Precision is another term for positive predictive value, and recall is equivalent to sensitivity.
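As a minimal sketch, assuming the true positive, false positive, and false negative counts have already been obtained, these metrics could be computed as follows (illustrative Python, not the official evaluation software):

def evaluation_metrics(true_positives, false_positives, false_negatives, num_scans):
    # Total number of reference emboli in the test set.
    total_emboli = true_positives + false_negatives
    # Sensitivity (recall): detected emboli over all reference emboli.
    sensitivity = true_positives / total_emboli
    # Positive predictive value (precision): true detections over all detections.
    ppv = true_positives / (true_positives + false_positives)
    # Average number of false positives per scan.
    fps_per_scan = false_positives / num_scans
    return sensitivity, ppv, fps_per_scan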

In order to produce such measurements, the following entities are defined:

  • True positive: Any location provided by the algorithm that is within an embolus. If several detections are assigned to the same embolus, only the detection with the highest confidence score will be taken into account.
  • False positive: Any location that is outside of every embolus.
  • False negative: Any embolus without any detection inside it.

Since the borders of the emboli are uncertain, the measurements will be repeated with tolerance margins of ϵ = 2 mm and ϵ = 5 mm around each embolus.
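As a rough sketch of how these definitions and the tolerance margin could be applied, assuming the reference emboli are given as binary masks and the detections as voxel coordinates with confidences (the data layout and function below are hypothetical and not the provided Matlab tool):

import numpy as np
from scipy.ndimage import distance_transform_edt

def match_detections(detections, emboli_masks, voxel_spacing_mm, tolerance_mm=0.0):
    # detections: list of (x, y, z, confidence), coordinates in array index order
    # emboli_masks: one boolean 3-D array per reference embolus
    # voxel_spacing_mm: physical voxel size, needed to express the tolerance in mm

    # Distance (in mm) from every voxel to the nearest voxel of each embolus;
    # voxels inside an embolus get distance 0.
    dist_maps = [distance_transform_edt(~mask, sampling=voxel_spacing_mm)
                 for mask in emboli_masks]

    hit_emboli = set()        # emboli with at least one detection within tolerance
    false_positives = 0

    for x, y, z, _conf in detections:
        distances = [d[x, y, z] for d in dist_maps]
        nearest = int(np.argmin(distances))
        if distances[nearest] <= tolerance_mm:
            # Extra detections on an already-hit embolus are simply disregarded.
            hit_emboli.add(nearest)
        else:
            false_positives += 1

    true_positives = len(hit_emboli)
    false_negatives = len(emboli_masks) - true_positives
    return true_positives, false_positives, false_negatives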

Software will be provided in Matlab to perform the evaluation on the training set. We recommend that participants use leave-one-out cross-validation for training.
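One possible way to set up the recommended leave-one-out cross-validation over the 20 training scans, sketched in Python with scikit-learn (the scan names and the training and evaluation helpers are placeholders for each team's own code):

from sklearn.model_selection import LeaveOneOut

scan_ids = [f"TrainScan{i:02d}" for i in range(1, 21)]  # hypothetical scan names

for train_idx, test_idx in LeaveOneOut().split(scan_ids):
    train_scans = [scan_ids[i] for i in train_idx]
    held_out_scan = scan_ids[test_idx[0]]
    # detector = train_detector(train_scans)        # team-specific training routine
    # results = evaluate(detector, held_out_scan)   # evaluate on the held-out scan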

Additional Evaluation

Additionally, and beyond the challenge, the performance of the algorithms will be evaluated with a curve of true positive rate as a function of false positives per scan. The confidence scores will be used to produce such curves. An example of such a curve can be found in [14]:

Evaluation Curve
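For intuition, such a curve could be traced by sweeping the confidence threshold over the submitted detections; a minimal Python sketch, assuming the detections have already been classified as true or false positives, is:

def tpr_vs_fps_curve(detections, num_emboli, num_scans):
    # detections: list of (confidence, is_true_positive) pairs, after keeping only
    # the highest-confidence detection per embolus
    points = []
    for threshold in sorted({conf for conf, _ in detections}, reverse=True):
        kept = [is_tp for conf, is_tp in detections if conf >= threshold]
        true_positive_rate = sum(kept) / num_emboli
        fps_per_scan = (len(kept) - sum(kept)) / num_scans
        points.append((fps_per_scan, true_positive_rate))
    return points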