
Twin Non-local Attention Network with Frame-Similarity Loss for Video Instance Lane Detection

Publication type:
Conference paper
Authors:
Guo, Lei;Xie, Fuqiang;Yang, Bin
Corresponding author:
Xie, FQ
Author affiliations:
[Xie, Fuqiang; Guo, Lei; Yang, Bin] Univ South China, Sch Elect Engn, Hengyang, Peoples R China.
Corresponding institution:
[Xie, FQ] Univ South China, Sch Elect Engn, Hengyang, Peoples R China.
Language:
English
Keywords:
component;Deep learning;Lane detection;non-local attention;frame-similarity loss
Proceedings:
2022 8TH INTERNATIONAL CONFERENCE ON HYDRAULIC AND CIVIL ENGINEERING: DEEP SPACE INTELLIGENT DEVELOPMENT AND UTILIZATION FORUM, ICHCE
Year:
2022
Pages:
651-655
Conference:
8th International Conference on Hydraulic and Civil Engineering - Deep Space Intelligent Development and Utilization Forum (ICHCE)
Conference dates:
NOV 25-27, 2022
Conference location:
Xian, PEOPLES R CHINA
Conference organizer:
[Guo, Lei;Xie, Fuqiang;Yang, Bin] Univ South China, Sch Elect Engn, Hengyang, Peoples R China.
Place of publication:
345 E 47TH ST, NEW YORK, NY 10017 USA
Publisher:
IEEE
ISBN:
978-1-6654-6553-3
Funding:
National Natural Science Foundation of China [61871210]
Institutional attribution:
This university is the first and corresponding institution.
Department:
School of Electrical Engineering
Abstract:
Lane detection plays an important role in autonomous driving. For video instance lane detection, both global spatial information and temporal information are highly important. However, global spatial features and temporal features have not been well exploited in recent studies. In this work, we address the video instance lane detection task by capturing global context with a non-local attention network. Specifically, we design a twin non-local attention network to extract long-range dependencies along the spatial and temporal dimensions, respectively. Meanwhile, the global spatial and tem...
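The abstract's "twin" design (one attention pass over spatial positions, one over frames) can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes an embedded-Gaussian non-local formulation, and all shapes, module names, and the channel-reduction factor are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): a minimal non-local attention
# block in PyTorch, applied once over the spatial positions of each frame and
# once over the temporal axis of a clip, mirroring the "twin" idea in the
# abstract. Shapes and names are assumptions for illustration only.
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block over a generic token axis."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Linear(channels, inner)   # query projection
        self.phi = nn.Linear(channels, inner)     # key projection
        self.g = nn.Linear(channels, inner)       # value projection
        self.out = nn.Linear(inner, channels)     # restore channel dim

    def forward(self, x):
        # x: (batch, tokens, channels); tokens = H*W for the spatial twin
        # or T (number of frames) for the temporal twin.
        q, k, v = self.theta(x), self.phi(x), self.g(x)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return x + self.out(attn @ v)             # residual connection


class TwinNonLocalAttention(nn.Module):
    """Apply one non-local block along space and one along time."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = NonLocalBlock(channels)
        self.temporal = NonLocalBlock(channels)

    def forward(self, feats):
        # feats: (batch, frames, channels, height, width) feature maps.
        b, t, c, h, w = feats.shape
        # Spatial twin: attend across all H*W positions within each frame.
        x = feats.permute(0, 1, 3, 4, 2).reshape(b * t, h * w, c)
        x = self.spatial(x).reshape(b, t, h, w, c)
        # Temporal twin: attend across frames at each spatial position.
        x = x.permute(0, 2, 3, 1, 4).reshape(b * h * w, t, c)
        x = self.temporal(x).reshape(b, h, w, t, c)
        return x.permute(0, 3, 4, 1, 2)           # back to (b, t, c, h, w)
```

Running the two blocks in sequence keeps each attention map small ((H·W)² and T² rather than (T·H·W)²), which is the usual motivation for factorizing spatial and temporal attention; whether the paper chains or parallelizes the two branches is not stated in the truncated abstract.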
