Semantic video segmentation: Exploring inference efficiency
Research output: Contribution to journal › Conference article › Research › peer-review
We explore the efficiency of CRF inference beyond image-level semantic segmentation and perform joint inference over batches of video frames. The key idea is to combine the best of both worlds: semantic co-labeling and more expressive models. Our formulation enables inference over ten thousand images within seconds and makes the system well suited to efficient video semantic segmentation. On the CamVid dataset, with TextonBoost unaries, the proposed method achieves up to 8% higher accuracy than per-frame semantic image segmentation with no additional time overhead. The source code is available at https://github.com/subtri/video_inference.
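The abstract refers to joint CRF inference with co-labeling across frames. As a rough illustration of the kind of computation involved (and not the paper's actual algorithm), the sketch below runs naive mean-field inference on a Potts-model CRF whose edges could include both spatial links within a frame and temporal links between co-labeled frames. All names and parameters here are assumptions for illustration only.

```python
import numpy as np

def mean_field_potts(unaries, edges, w=1.0, iters=10):
    """Naive mean-field inference for a Potts-model CRF (illustrative sketch).

    unaries : (N, L) array of negative log unary potentials per node/label
    edges   : list of undirected (i, j) pairs, e.g. spatial neighbors within
              a frame plus temporal links between co-labeled frames
    w       : Potts smoothness weight (cost when neighbors disagree)
    Returns the (N, L) approximate marginals Q.
    """
    N, L = unaries.shape
    # Initialize Q from the unaries alone (softmax of -unaries).
    Q = np.exp(-unaries)
    Q /= Q.sum(axis=1, keepdims=True)

    adj = [[] for _ in range(N)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    for _ in range(iters):
        for i in range(N):
            # Expected Potts penalty: neighbor j disagrees with label l
            # with probability (1 - Q[j, l]).
            msg = w * sum(1.0 - Q[j] for j in adj[i]) if adj[i] else 0.0
            logits = -unaries[i] - msg
            logits -= logits.max()          # numerical stability
            Q[i] = np.exp(logits)
            Q[i] /= Q[i].sum()
    return Q
```

With a confident unary on one node and an uninformative unary on its neighbor, the smoothness term pulls the neighbor toward the same label, which is the co-labeling intuition in miniature.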
Original language | English |
---|---|
Journal | ISOCC 2015 - International SoC Design Conference: SoC for Internet of Everything (IoE) |
Pages (from-to) | 157-158 |
Number of pages | 2 |
DOIs | |
Publication status | Published - 8 Feb 2016 |
Externally published | Yes |
Event | 12th International SoC Design Conference, ISOCC 2015 - Gyeongju, Korea, Republic of (2 Nov 2015 → 5 Nov 2015) |
Conference
Conference | 12th International SoC Design Conference, ISOCC 2015 |
---|---|
Country | Korea, Republic of |
City | Gyeongju |
Period | 02/11/2015 → 05/11/2015 |
Bibliographical note
Publisher Copyright:
© 2015 IEEE.
Research areas
- approximate inference, co-labelling, higher-order clique, semantic segmentation