A. Protocols

For each of the three tracks, participants submit the results of glyph spotting for all test queries in the testing subset as a single XML file following the ground-truth XML format. For Track Mixed, participants also submit a small, self-contained executable package of their method implementation (including any required libraries), together with a clear user manual explaining how to run the glyph spotting process on a given manuscript image and a given set of queries.
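As a rough illustration of how a result file might be produced, the Python sketch below writes spotting results for one track to a single XML file. The tag and attribute names are placeholders only; the actual schema must follow the ground-truth XML format distributed with the dataset.

```python
# Sketch of writing spotting results to one XML file per track.
# All tag and attribute names below are hypothetical placeholders;
# the real schema must match the organizers' ground-truth XML format.
import xml.etree.ElementTree as ET

def write_results(results, out_path):
    """results: list of dicts with placeholder keys
    'query_id', 'image_id', 'x', 'y', 'w', 'h'."""
    root = ET.Element("SpottingResults")  # placeholder root tag
    for r in results:
        ET.SubElement(root, "Spot", {
            "query": str(r["query_id"]),
            "image": str(r["image_id"]),
            "x": str(r["x"]), "y": str(r["y"]),
            "width": str(r["w"]), "height": str(r["h"]),
        })
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

write_results(
    [{"query_id": "q01", "image_id": "page_001", "x": 120, "y": 340, "w": 48, "h": 52}],
    "track1_results.xml",
)
```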

B. Evaluation

We compute the Mean Average Precision (mAP) of the spotting areas against the ground-truth, character-level glyph patch annotations of the testing subset. A spotting area is considered relevant if it overlaps more than 50% of a ground-truth glyph patch containing the same query glyph; that is, a detected bounding box counts as a correct match when its overlap with the reference bounding box exceeds the 0.5 threshold.
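The relevance test can be sketched as follows, assuming the overlap is measured as the intersection area divided by the ground-truth patch area and boxes are given as (x, y, width, height); the official evaluation tool remains authoritative.

```python
# Minimal sketch of the 50%-overlap relevance criterion described above.
def overlap_ratio(det, gt):
    dx, dy, dw, dh = det
    gx, gy, gw, gh = gt
    iw = max(0, min(dx + dw, gx + gw) - max(dx, gx))  # intersection width
    ih = max(0, min(dy + dh, gy + gh) - max(dy, gy))  # intersection height
    return (iw * ih) / float(gw * gh)                 # fraction of GT patch covered

def is_relevant(det, gt, threshold=0.5):
    return overlap_ratio(det, gt) > threshold

print(is_relevant((10, 10, 40, 40), (20, 20, 40, 40)))  # covers ~56% of GT -> True
```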
The organizers will use the evaluation tool from the ICFHR 2016 Handwritten Keyword Spotting Competition (H-KWS 2016) to compute the mAP of each team in each track; contestants may also use it to evaluate their systems on the validation data. Evaluation results are reported separately for each track, and a single winning team is chosen per track.
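For reference, the mAP computation can be sketched as below (non-interpolated average precision per query, averaged over all queries); this only illustrates the metric and is not a substitute for the official H-KWS 2016 tool.

```python
# Sketch of mAP: average precision per query, then the mean over queries.
def average_precision(ranked_relevance, num_relevant):
    """ranked_relevance: booleans for retrieved regions sorted by descending
    confidence; num_relevant: number of ground-truth positives for the query."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / num_relevant if num_relevant else 0.0

def mean_average_precision(per_query):
    """per_query: list of (ranked_relevance, num_relevant) tuples, one per query."""
    aps = [average_precision(r, n) for r, n in per_query]
    return sum(aps) / len(aps) if aps else 0.0

# Two toy queries: the first retrieves 2 of its 3 relevant patches, the second 1 of 1.
print(mean_average_precision([([True, False, True], 3), ([True], 1)]))
```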