First release: 26 October 2017 www.sciencemag.org (Page numbers not final at time of first release)
The ability to learn and generalize from a few examples is a hallmark of human intelligence (1). CAPTCHAs, images used by websites to block automated interactions, are examples of problems that are easy for humans but difficult for computers. CAPTCHAs are hard for algorithms because they add clutter and crowd letters together to create a chicken-and-egg problem for character classifiers: the classifiers work well for characters that have been segmented out, but segmenting the individual characters requires an understanding of the characters, each of which might be rendered in a combinatorial number of ways (2–5). A recent deep-learning approach for parsing one specific CAPTCHA style required millions of labeled examples from it (6), and earlier approaches mostly relied on hand-crafted style-specific heuristics to segment out the character (3, 7); whereas humans can solve new styles without explicit training (Fig. 1A). The wide variety of ways in which letterforms can be rendered and still be understood by people is illustrated in Fig. 1.
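The chicken-and-egg problem above can be made concrete with a toy sketch: a template classifier that works on cleanly segmented windows breaks down when naive fixed-width slicing cuts through crowded glyphs, whereas a parse that uses the character templates themselves to propose segmentations recovers the string. The glyphs, the matching scheme, and all function names here are illustrative assumptions for exposition only, not the method described in the paper.

```python
import numpy as np

H = 5  # glyph height

# Toy binary glyph templates (illustrative assumptions, not real CAPTCHA fonts).
GLYPHS = {
    "I": np.ones((H, 1), dtype=int),                     # a 1-column bar
    "L": np.array([[1, 0, 0]] * (H - 1) + [[1, 1, 1]]),  # a 3-column "L"
}

def render(text, gap=1):
    """Render a string left to right with `gap` blank columns between glyphs."""
    cols = []
    for i, ch in enumerate(text):
        if i and gap:
            cols.append(np.zeros((H, gap), dtype=int))
        cols.append(GLYPHS[ch])
    return np.concatenate(cols, axis=1)

def classify(window):
    """Label an already-segmented window, or '?' if no template fits."""
    used = window[:, window.any(axis=0)]  # drop blank border columns
    for name, tpl in GLYPHS.items():
        if used.shape == tpl.shape and (used == tpl).all():
            return name
    return "?"

def fixed_segmentation(img, n):
    """Segment first, classify second: cut into n equal-width slices."""
    bounds = np.linspace(0, img.shape[1], n + 1).astype(int)
    return [classify(img[:, a:b]) for a, b in zip(bounds, bounds[1:])]

def parse(img, pos=0):
    """Let the templates drive segmentation: backtracking left-to-right parse."""
    if pos == img.shape[1]:
        return ""
    if not img[:, pos].any():          # skip blank gap columns
        return parse(img, pos + 1)
    for name, tpl in GLYPHS.items():
        w = tpl.shape[1]
        if pos + w <= img.shape[1] and (img[:, pos:pos + w] == tpl).all():
            rest = parse(img, pos + w)
            if rest is not None:
                return name + rest
    return None                        # dead end; caller backtracks

crowded = render("IL", gap=0)          # glyphs touch, as in a CAPTCHA
print(fixed_segmentation(crowded, 2))  # ['?', '?'] - slicing cuts the glyphs
print(parse(crowded))                  # 'IL' - recognition guides segmentation
```

With well-separated glyphs both strategies agree; crowding is precisely what breaks the segment-then-classify pipeline while leaving the joint parse intact, which is why CAPTCHA designers crowd the letters.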
Building models that generalize well beyond their training distribution is an important step toward the flexibility Douglas Hofstadter envisioned when he said that "for any program to handle letterforms with the flexibility that human beings do, it would have to possess full-scale artificial intelligence" (8). Many researchers have conjectured that this could be achieved by incorporating the inductive biases of the visual cortex (9–12), utilizing the wealth of data generated by neuroscience and cognitive science research. In the mammalian brain, feedback connections in the visual cortex play roles in figure-ground segmentation, and in object-based top-down attention that isolates the contours of an object even when partially transparent objects occupy the same spatial locations (13–16). Lateral connections in the visual cortex