
A Unified Transformer Framework for Group-Based Segmentation: Co-Segmentation, Co-Saliency Detection and Video Salient Object Detection

Publication type:
Journal article
Authors:
Su, Yukun;Deng, Jingliang;Sun, Ruizhou;Lin, Guosheng;Su, Hanjing;...
Corresponding authors:
Lin, GS;Wu, QY
Author affiliations:
[Su, Yukun; Deng, Jingliang; Sun, Ruizhou] South China Univ Technol, Sch Software Engn, Key Lab Big Data & Intelligent Robot, Minist Educ, Guangzhou 510006, Peoples R China.
[Lin, Guosheng] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore.
[Su, Hanjing] Tencent, Shenzhen 518000, Peoples R China.
[Wu, Qingyao] South China Univ, Sch Software Engn, Pazhou Lab, Guangzhou 510006, Peoples R China.
[Wu, Qingyao] Peng Cheng Lab, Shenzhen 518066, Peoples R China.
Corresponding author affiliations:
[Lin, GS] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore.
[Wu, QY] South China Univ, Sch Software Engn, Pazhou Lab, Guangzhou 510006, Peoples R China.
Language:
English
Keywords:
Co-object;long-range dependency;transformer;activation
Journal:
IEEE Transactions on Multimedia
ISSN:
1520-9210
Year:
2024
Volume:
26
Pages:
313-325
Funding:
- National Natural Science Foundation of China (Grant 62272172)
- Basic and Applied Basic Research Foundation of Guangdong Province (Grant 2023A1515012920)
- Tip-Top Scientific and Technical Innovative Youth Talents of Guangdong Special Support Program (Grant 2019TQ05X200)
- 2022 Tencent Wechat Rhino-Bird Focused Research Program (Grant RBFR2022008)
- Major Key Project of PCL (Grant PCL2021A09)
- National Research Foundation Singapore (Grant AISG-RP-2018-003)
- MOE AcRF Tier-1 Research (Grant RG95/20)
Institutional note:
This institution is the corresponding institution.
Abstract:
Humans tend to discover objects by learning from a group of images or several frames of video, since we live in a dynamic world. In computer vision, many researchers study co-segmentation (CoS), co-saliency detection (CoSD), and video salient object detection (VSOD) to discover co-occurring objects. However, previous approaches design separate networks for these similar tasks, which makes them difficult to transfer to one another. Moreover, they fail to take full advantage of inter- and intra-image feature cues within a group of images. In this paper, we introduce a unified framework...
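The abstract's core idea is to exploit both intra-image and inter-image feature cues via a transformer's long-range dependencies. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the underlying mechanism: flattening a group of per-image token features into one sequence so that scaled dot-product attention lets every token attend to tokens from all images in the group, capturing inter- and intra-image relations in a single operation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def group_self_attention(features):
    """Self-attention over a group of images' tokens.

    features: array of shape (N, T, D) -- N images, T spatial
    tokens per image, D channels. Concatenating all N*T tokens
    into one sequence means each token attends both to tokens of
    its own image (intra-image) and to tokens of the other images
    in the group (inter-image).
    """
    n, t, d = features.shape
    tokens = features.reshape(n * t, d)       # (N*T, D) group sequence
    scores = tokens @ tokens.T / np.sqrt(d)   # scaled dot-product scores
    attn = softmax(scores, axis=-1)           # rows sum to 1
    out = attn @ tokens                       # attended features
    return out.reshape(n, t, d)

# Toy example: a "group" of 3 images, 4 tokens each, 8 channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4, 8))
out = group_self_attention(feats)
print(out.shape)  # (3, 4, 8)
```

In a real transformer block the tokens would first be projected to separate query/key/value spaces and the attention would be multi-head, but the group-as-one-sequence trick shown here is the essential step that unifies the three tasks' co-object reasoning.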
