Recent research has explored leveraging event cameras, known for their prowess in capturing scenes with nonuniform motion, for video deraining, leading to performance improvements. However, existing event-based methods still face the challenge that the complex spatiotemporal distribution of rain disrupts temporal information fusion and complicates the separation of rain and background features. This article proposes a novel end-to-end learning framework for video deraining that effectively exploits the rich dynamic information provided by the event stream. Our framework incorporates two key modules: an event-aware motion detection (EAMD) module that adaptively aggregates multiframe motion information using event-driven masks, and a pyramidal adaptive selection module that separates background and rain layers by leveraging contextual priors from both event and conventional camera data. To facilitate efficient training, we introduce a real-world dataset of synchronized rainy videos and event streams. Extensive evaluations on both synthetic and real-world datasets demonstrate the superiority of our proposed method compared to state-of-the-art approaches. The code is available at https://github.com/booker-max/EGVD.
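The two modules can be pictured with a minimal PyTorch sketch. This is an illustration only, not the authors' implementation (see the repository above for that): the module names follow the abstract, but the event voxel depth, channel widths, gating scheme, and pyramid scales below are all assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EventAwareMotionDetection(nn.Module):
    """Hypothetical EAMD-style module: an event voxel grid is turned into
    per-frame soft masks that gate multiframe feature aggregation."""
    def __init__(self, frame_ch=32, event_ch=5, n_frames=3):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(event_ch, frame_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(frame_ch, n_frames, 3, padding=1),
            nn.Sigmoid(),  # one motion mask in [0, 1] per input frame
        )
        self.fuse = nn.Conv2d(frame_ch * n_frames, frame_ch, 1)

    def forward(self, frame_feats, event_voxel):
        # frame_feats: (B, T, C, H, W); event_voxel: (B, E, H, W)
        b, t, c, h, w = frame_feats.shape
        masks = self.mask_net(event_voxel)               # (B, T, H, W)
        gated = frame_feats * masks.unsqueeze(2)         # event-driven gating
        return self.fuse(gated.reshape(b, t * c, h, w))  # aggregated features

class PyramidalAdaptiveSelection(nn.Module):
    """Hypothetical pyramidal selection module: multiscale context produces
    per-pixel weights that split features into background and rain layers."""
    def __init__(self, ch=32, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.ctx = nn.Conv2d(ch * len(scales), ch, 1)
        self.select = nn.Conv2d(ch, 2, 3, padding=1)  # background vs. rain

    def forward(self, x):
        h, w = x.shape[-2:]
        # Build a feature pyramid and upsample every level back to full size.
        pyr = [x if s == 1 else
               F.interpolate(F.avg_pool2d(x, s), size=(h, w),
                             mode="bilinear", align_corners=False)
               for s in self.scales]
        ctx = self.ctx(torch.cat(pyr, dim=1))
        weights = torch.softmax(self.select(ctx), dim=1)  # (B, 2, H, W)
        background = x * weights[:, :1]
        rain = x * weights[:, 1:]
        return background, rain

if __name__ == "__main__":
    # Toy forward pass: 3 frames of 32-channel features plus a 5-bin voxel grid.
    eamd = EventAwareMotionDetection()
    pas = PyramidalAdaptiveSelection()
    feats = torch.randn(1, 3, 32, 64, 64)
    events = torch.randn(1, 5, 64, 64)
    background, rain = pas(eamd(feats, events))
    print(background.shape, rain.shape)  # torch.Size([1, 32, 64, 64]) each
```

The design choice worth noting is the ordering: motion-aware fusion happens first so that the selection stage sees temporally aggregated features, and the event stream only ever enters through the masks, which is one plausible reading of "event-driven" aggregation in the abstract.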