Ting Chen | Pix2Seq: A New Language Interface for Object Detection and Beyond
Sponsored by Evolution AI: https://www.evolution.ai/
Papers: https://ai.googleblog.com/2022/04/pix2seq-new-language-interface-for.html
Abstract (from Ting Chen): We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we simply cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural net to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural net knows where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset compared to highly specialized and well-optimized detection algorithms. I’ll also cover the newest extension of the framework to tackle multiple vision tasks within a single model (no task-specific “heads” or “losses”), including object detection, instance segmentation, keypoint detection, and image captioning.
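The key mechanism the abstract describes is serializing each object into discrete tokens: coordinates are quantized into a fixed number of bins and concatenated with a class token, producing a sequence a language model can generate. Below is a minimal Python sketch of that idea; the bin count, token layout, and EOS token id are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of Pix2Seq-style tokenization: each object becomes the token
# group [ymin, xmin, ymax, xmax, class], followed by an end-of-sequence
# token. Values below (1000 bins, 80 classes) are assumed for illustration.

NUM_BINS = 1000                      # assumed quantization bins per coordinate
NUM_CLASSES = 80                     # e.g., COCO classes
EOS_TOKEN = NUM_BINS + NUM_CLASSES   # assumed end-of-sequence token id

def quantize(coord: float, image_size: float) -> int:
    """Map a continuous coordinate in [0, image_size] to a discrete bin."""
    return min(int(coord / image_size * NUM_BINS), NUM_BINS - 1)

def boxes_to_sequence(boxes, labels, height, width):
    """Serialize objects as [ymin, xmin, ymax, xmax, class] token groups."""
    tokens = []
    for (ymin, xmin, ymax, xmax), label in zip(boxes, labels):
        tokens += [
            quantize(ymin, height),
            quantize(xmin, width),
            quantize(ymax, height),
            quantize(xmax, width),
            NUM_BINS + label,  # class token ids follow the coordinate bins
        ]
    tokens.append(EOS_TOKEN)
    return tokens

# Example: one 'person' (label 0) box in a 480x640 image.
print(boxes_to_sequence([(120.0, 64.0, 360.0, 320.0)], [0], 480, 640))
# -> [250, 100, 750, 500, 1000, 1080]
```

Because the targets are just token sequences, the same encoder-decoder and cross-entropy loss can be reused across tasks, which is what makes the multi-task extension mentioned above possible without task-specific heads.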
Bio: Ting Chen is a research scientist on the Google Brain team. His current research interests include self-supervised representation learning, generative modeling, efficient architectures, and generalist learning principles. Before joining Google, he received his Ph.D. in Computer Science from UCLA.