Lane detection plays an important role in autonomous driving. For video instance lane detection, both global spatial and temporal information are critical. However, global spatial features and temporal features have not been well exploited in recent studies. In this work, we address the video instance lane detection task by capturing global context with a non-local attention network. Specifically, we design a twin non-local attention network that extracts long-range dependencies along the spatial and temporal dimensions, respectively. Meanwhile, the global spatial and tem...