We sampled photos from the Scott Coffee Race to test Google Vision's read rates (tagging completed within several minutes of uploading the photos). Check out a couple of the samples Google Vision got right in these screenshots – impressive.
- 100 Photos
- 119 Correct Tags
- 16 Incorrect Tags
- 32 Additional Tags that a human could read
- 5 Tagged with 2016 (the year was on the bib)
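The post doesn't include code, but as a rough illustration, here is a minimal sketch of how bib numbers can be pulled from a photo with the Google Cloud Vision text-detection API (using the google-cloud-vision Python client). The numeric filter and the 1-5 digit limit are our own assumptions for the example, not RunSignUp's actual logic:

```python
# Sketch: extract candidate bib numbers from a race photo using
# Google Cloud Vision text detection. The digits-only filter below
# is an illustrative assumption, not RunSignUp's actual pipeline.
import re
from google.cloud import vision

def detect_bib_numbers(photo_path):
    client = vision.ImageAnnotatorClient()

    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)

    # text_annotations[0] is the full text block; the rest are the
    # individual tokens detected in the image.
    tokens = [t.description for t in response.text_annotations[1:]]

    # Keep tokens that look like bib numbers: all digits, 1-5 chars.
    # (Assumed heuristic; real bib formats vary by race.)
    return sorted({t for t in tokens if re.fullmatch(r"\d{1,5}", t)})

print(detect_bib_numbers("finish_line_042.jpg"))  # hypothetical file
```

Note that this naive filter is exactly why "2016" shows up as a tag: the year printed on the bib is a perfectly valid digit string to the detector.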
Races using the platform will need to decide whether to hire humans (or use Mechanical Turk to provide them) to manually tag photos and correct errors, or whether to rely on crowdsourcing.
Tag reading is very dependent on the photos: the angle of the bibs, the color mix, and so on. For example, this race had far worse reads; its bibs were harder to read because of bib positioning, font choice, and reflections. The results were:
- 50 Correct Tags (examples: 235, 211)
- 15 Incorrect Tags (example: 42)
- 17 Missed Tags that a human would be able to see

Hints for best Read Rates:
- Large numbers with white space around the number. Do not crowd the number with sponsor logos.
- A white background with a dark font is best.
- Camera angle as directly in front of the runner as possible – sideways numbers do not read as well.
- High-quality images.
- The larger the number appears in the photo, the better. Zoom in with a good lens so runners fill the frame.
As time goes on, RunSignUp will be able to do some filtering (for example, noticing that 2016 repeats across many photos and eliminating it as a bib number), and Google Vision will obviously keep improving.
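As a sketch of what that filtering might look like (the threshold and the approach are our assumptions, not RunSignUp's announced design), a tag that appears in an unusually large share of photos is probably race-wide text like the year rather than a bib number:

```python
# Sketch of frequency-based tag filtering: a "bib number" that shows
# up in a large share of photos (like 2016 printed on every bib) is
# probably not a real bib. The 50% threshold is an assumed value.
from collections import Counter

def filter_common_tags(photo_tags, max_share=0.5):
    """photo_tags: dict mapping photo id -> set of detected tags."""
    counts = Counter(tag for tags in photo_tags.values() for tag in tags)
    limit = max_share * len(photo_tags)
    suspect = {tag for tag, n in counts.items() if n > limit}
    return {pid: tags - suspect for pid, tags in photo_tags.items()}

tags = {
    "img1": {"235", "2016"},
    "img2": {"211", "2016"},
    "img3": {"42", "2016"},
}
print(filter_common_tags(tags))  # "2016" is dropped from every photo
```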