CNN-based Classification of Illustrator Style in Graphic Novels: Which Features Contribute Most?

EasyChair Preprint 5571, 2 pages. Date: October 4, 2018

Abstract

Can classification of graphic novel illustrators be achieved with convolutional neural network (CNN) features evolved for classifying concepts in photographs? Assuming that basic features at lower network levels generically represent invariants of our environment, they should be reusable. However, features at what level of abstraction are characteristic of illustrator style? We tested transfer learning by classifying roughly 50,000 digitized pages from about 200 comic books of the Graphic Narrative Corpus (GNC) by illustrator. For comparison, we also classified Manga109 by book. We tested the predictability of visual features by experimentally varying which of the mixed layers of Inception V3 was used to train classifiers. Overall, the top-1 test-set classification accuracy in the artist attribution analysis increased from 92% for mixed-layer 0 to over 97% when adding mixed-layers higher in the hierarchy. Above mixed-layer 5, there were signs of overfitting, suggesting that texture-like mid-level vision features were sufficient. Experiments varying the input material show that page layout and coloring scheme are important contributors. Thus, stylistic classification of comics artists is possible by reusing pretrained CNN features, given only a limited amount of additional training material. We propose that CNN features are general enough to provide the foundation of a visual stylometry, potentially useful for comparative art history.

Keyphrases: Classification, Convolutional Neural Network, CNN-based classification, experimental study, graphic novels, stylometry
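The preprint itself does not include code. As a rough illustration of the layer-wise transfer-learning setup the abstract describes, a minimal Keras sketch might look like the following. It assumes the standard tf.keras.applications.InceptionV3 model, whose intermediate concatenation layers are named "mixed0" through "mixed10"; the number of classes, image size, pooling choice, and the decision to tap a single mixed layer (rather than combining several) are illustrative assumptions, not the authors' exact pipeline.

```python
import tensorflow as tf


def build_mixed_layer_classifier(layer_name="mixed5", num_classes=200, image_size=299):
    """Train a classifier on features tapped from one Inception V3 mixed layer."""
    # Inception V3 pretrained on ImageNet, without its original classification head.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet",
        include_top=False,
        input_shape=(image_size, image_size, 3),
    )
    base.trainable = False  # reuse pretrained features as-is (transfer learning)

    # Tap the chosen mixed layer and put a small classifier on top.
    features = base.get_layer(layer_name).output
    pooled = tf.keras.layers.GlobalAveragePooling2D()(features)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(pooled)

    model = tf.keras.Model(inputs=base.input, outputs=outputs)
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


# Hypothetical experiment: compare classifiers built on successively higher mixed layers.
for name in ["mixed0", "mixed3", "mixed5", "mixed8"]:
    model = build_mixed_layer_classifier(layer_name=name)
    # model.fit(train_pages, validation_data=val_pages, epochs=10)  # page-image dataset
```

In this sketch, comparing accuracy across the loop over layer names mirrors the paper's question of which level of abstraction carries illustrator style; freezing the base network keeps the additional training material small, as the abstract emphasizes.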