Panoptic-DeepLab (CVPR 2020)

Panoptic-DeepLab is a state-of-the-art bottom-up method for panoptic segmentation, where the goal is to assign a semantic label (e.g., person, dog, cat) to every pixel in the input image, as well as an instance label (e.g., an id of 1, 2, 3, etc.) to pixels belonging to thing classes.

Illustration of Panoptic-DeepLab
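
For concreteness, the two outputs are commonly merged into a single panoptic label map. Below is a minimal sketch of that merge, assuming the widely used semantic_id * label_divisor + instance_id encoding (the divisor of 1000 and the function below are illustrative, not names taken from this repo):

import numpy as np

LABEL_DIVISOR = 1000  # assumed divisor, for illustration only

def to_panoptic(semantic, instance, thing_ids):
    # semantic: (H, W) array of class ids
    # instance: (H, W) array of instance ids (0 where no instance)
    # thing_ids: iterable of class ids that are "thing" classes
    panoptic = semantic.astype(np.int64) * LABEL_DIVISOR
    is_thing = np.isin(semantic, list(thing_ids))
    panoptic[is_thing] += instance[is_thing]  # e.g. class 11, instance 2 -> 11002
    return panoptic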

This is the PyTorch re-implementation of our CVPR 2020 paper based on Detectron2: Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation. Segmentation models with DeepLabV3 and DeepLabV3+ are also supported in this repo now!

News

  • [2021/01/25] Found a bug in old config files for COCO experiments (MAX_SIZE_TRAIN needs to change from 640 to 960 for COCO; see the one-line override after this list). Now we have also reproduced the COCO results (35.5 PQ)!
  • [2020/12/17] Support COCO dataset!
  • [2020/12/11] Support DepthwiseSeparableConv2d in the Detectron2 version of Panoptic-DeepLab. Now the Panoptic-DeepLab in Detectron2 is exactly the same as the implementation in our paper, except the post-processing has not been optimized.
  • [2020/09/24] I have implemented both DeepLab and Panoptic-DeepLab in the official Detectron2; the implementation in this repo will be deprecated, and I will mainly maintain the Detectron2 version. However, this repo still supports different backbones for the Detectron2 Panoptic-DeepLab.
  • [2020/07/21] Check this Google AI Blog for Panoptic-DeepLab.
  • [2020/07/01] More Cityscapes pre-trained backbones in model zoo (MobileNet and Xception are supported).
  • [2020/06/30] Panoptic-DeepLab now supports HRNet; using the HRNet-w48 backbone achieves 63.4% PQ on Cityscapes. Thanks to @PkuRainBow.
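
For reference, the COCO config fix mentioned in the 2021/01/25 news item amounts to overriding a single input key. A sketch using Detectron2's Python config API (editing the YAML config directly works equally well):

from detectron2.config import get_cfg

cfg = get_cfg()
cfg.INPUT.MAX_SIZE_TRAIN = 960  # old COCO configs incorrectly used 640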

Disclaimer

What's New

  • We release a detailed technical report with implementation details and supplementary analysis on Panoptic-DeepLab. In particular, we find that center prediction is almost perfect and that the bottleneck of bottom-up methods still lies in semantic segmentation.
  • It is powered by the PyTorch deep learning framework.
  • Can be trained on as few as four 1080 Ti GPUs (no need for 32 TPUs!).

How to use

We suggest using the Detectron2 implementation. You can either use it directly from the Detectron2 projects or use it from this repo by following tools_d2/README.md.

The difference is that the official Detectron2 implementation only supports ResNet and ResNeXt as backbones, while this repo gives you an example of how to use a custom backbone within Detectron2. A minimal inference sketch is shown below.
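
A minimal inference sketch, assuming a Detectron2 installation that exposes the Panoptic-DeepLab project; the config path, checkpoint file, and input image below are placeholders:

import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.projects.panoptic_deeplab import add_panoptic_deeplab_config

cfg = get_cfg()
add_panoptic_deeplab_config(cfg)  # register Panoptic-DeepLab config keys
cfg.merge_from_file("path/to/panoptic_deeplab_config.yaml")  # placeholder path
cfg.MODEL.WEIGHTS = "model_final.pkl"  # downloaded checkpoint (placeholder)

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))
panoptic_seg, segments_info = outputs["panoptic_seg"]
# panoptic_seg: (H, W) tensor of segment ids
# segments_info: list of dicts with per-segment metadata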


Model Zoo (Detectron2)

Cityscapes panoptic segmentation

Method                    | Backbone | Output resolution | PQ   | SQ   | RQ   | mIoU | AP   | download
Panoptic-DeepLab (DSConv) | R52-DC5  | 1024×2048         | 60.3 | 81.0 | 73.2 | 78.7 | 32.1 | model
Panoptic-DeepLab (DSConv) | X65-DC5  | 1024×2048         | 61.4 | 81.4 | 74.3 | 79.8 | 32.6 | model
Panoptic-DeepLab (DSConv) | HRNet-48 | 1024×2048         | 63.4 | 81.9 | 76.4 | 80.6 | 36.2 | model

Note:

  • This implementation uses DepthwiseSeparableConv2d (DSConv) in the ASPP module and decoder, the same as in the original paper (a minimal sketch of the DSConv idea follows after these notes).
  • This implementation does not include the optimized post-processing code needed for deployment; post-processing the network outputs currently takes more time than the network forward pass itself.
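
A minimal sketch of the depthwise-separable convolution idea behind DSConv (Detectron2's own DepthwiseSeparableConv2d additionally fuses normalization and activation; this standalone module is for illustration only):

import torch.nn as nn

class DSConv2d(nn.Module):
    # A depthwise conv (one filter per channel) followed by a 1x1
    # pointwise conv that mixes channels; far fewer parameters and
    # FLOPs than a dense convolution with the same receptive field.
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=padding,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))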

COCO panoptic segmentation

Method                    | Backbone | Output resolution | PQ   | SQ   | RQ   | Box AP | Mask AP | download
Panoptic-DeepLab (DSConv) | R52-DC5  | 640×640           | 35.5 | 77.3 | 44.7 | 18.6   | 19.7    | model
Panoptic-DeepLab (DSConv) | X65-DC5  | 640×640           | -    | -    | -    | -      | -       | model
Panoptic-DeepLab (DSConv) | HRNet-48 | 640×640           | -    | -    | -    | -      | -       | model

Note:

  • This implementation uses DepthwiseSeparableConv2d (DSConv) in the ASPP module and decoder, the same as in the original paper.
  • This implementation does not include the optimized post-processing code needed for deployment; post-processing the network outputs currently takes more time than the network forward pass itself (a simplified sketch of the instance grouping step follows below).
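
To illustrate where that post-processing time goes, here is a simplified sketch of the instance grouping step described in the paper: every pixel predicted as a thing class votes for an instance center through the offset map and is assigned to its nearest detected center. This is a bare-bones illustration under assumed tensor shapes, not the repo's optimized implementation:

import torch

def group_instances(thing_mask, centers, offsets):
    # thing_mask: (H, W) bool mask of "thing" pixels
    # centers: (K, 2) y/x coordinates of detected center peaks
    # offsets: (2, H, W) predicted offset from each pixel to its center
    h, w = thing_mask.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    voted = torch.stack([ys, xs]).float() + offsets    # per-pixel center vote
    votes = voted[:, thing_mask].T                     # (N, 2) votes from thing pixels
    dists = torch.cdist(votes, centers.float())        # (N, K) distance to each center
    instance_map = torch.zeros(h, w, dtype=torch.long)
    instance_map[thing_mask] = dists.argmin(dim=1) + 1  # id 0 is reserved for stuff
    return instance_map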

Citing Panoptic-DeepLab

If you find this code helpful in your research or wish to refer to the baseline results, please use the following BibTeX entry.

@inproceedings{cheng2020panoptic,
  title={Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation},
  author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
  booktitle={CVPR},
  year={2020}
}

@inproceedings{cheng2019panoptic,
  title={Panoptic-DeepLab},
  author={Cheng, Bowen and Collins, Maxwell D and Zhu, Yukun and Liu, Ting and Huang, Thomas S and Adam, Hartwig and Chen, Liang-Chieh},
  booktitle={ICCV COCO + Mapillary Joint Recognition Challenge Workshop},
  year={2019}
}

If you use the Xception backbone, please consider citing

@inproceedings{deeplabv3plus2018,
  title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
  author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
  booktitle={ECCV},
  year={2018}
}

@inproceedings{qi2017deformable,
  title={Deformable convolutional networks--coco detection and segmentation challenge 2017 entry},
  author={Qi, Haozhi and Zhang, Zheng and Xiao, Bin and Hu, Han and Cheng, Bowen and Wei, Yichen and Dai, Jifeng},
  booktitle={ICCV COCO Challenge Workshop},
  year={2017}
}

If you use the HRNet backbone, please consider citing

@article{WangSCJDZLMTWLX19,
  title={Deep High-Resolution Representation Learning for Visual Recognition},
  author={Jingdong Wang and Ke Sun and Tianheng Cheng and 
          Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and 
          Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao},
  journal={TPAMI},
  year={2019}
}

Acknowledgements

We have used utility functions from other wonderful open-source projects; we would especially like to thank the authors of:

Contact

Bowen Cheng (bcheng9 AT illinois DOT edu)