Publications
Publications by category in reverse chronological order
2025
- arXiv Preprint
  StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling
  Meng Wei*, Chenyang Wan*, Xiqian Yu*, Tai Wang*, Yuqiang Yang, Xiaohan Mao, Chenming Zhu, Wenzhe Cai, Hanqing Wang, Yilun Chen, Xihui Liu†, and Jiangmiao Pang†
  arXiv preprint arXiv:2507.05240, 2025
Vision-and-Language Navigation (VLN) in real-world settings requires agents to process continuous visual streams and generate actions with low latency grounded in language instructions. While Video-based Large Language Models (Video-LLMs) have driven recent progress, current Video-LLM-based VLN methods often face trade-offs among fine-grained visual understanding, long-term context modeling, and computational efficiency. We introduce StreamVLN, a streaming VLN framework that employs a hybrid slow-fast context modeling strategy to support multi-modal reasoning over interleaved vision, language and action inputs. The fast-streaming dialogue context facilitates responsive action generation through a sliding window of active dialogues, while the slow-updating memory context compresses historical visual states using a 3D-aware token pruning strategy. With this slow-fast design, StreamVLN achieves coherent multi-turn dialogue through efficient KV cache reuse, supporting long video streams with bounded context size and inference cost. Experiments on VLN-CE benchmarks demonstrate state-of-the-art performance with stable low latency, ensuring robustness and efficiency in real-world deployment.
@article{wei2025streamvln,
  title      = {StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling},
  author     = {Wei, Meng and Wan, Chenyang and Yu, Xiqian and Wang, Tai and Yang, Yuqiang and Mao, Xiaohan and Zhu, Chenming and Cai, Wenzhe and Wang, Hanqing and Chen, Yilun and Liu, Xihui and Pang, Jiangmiao},
  journal    = {arXiv preprint arXiv:2507.05240},
  year       = {2025},
  dimensions = {true},
}
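As a purely illustrative aside on the slow-fast design described in the abstract above: the sketch below shows one way a bounded context could combine a sliding window of recent turns (fast context) with a compressed memory of older visual tokens (slow context). Everything here (the `SlowFastContext` class, the `window`, `keep_per_turn`, and `memory_size` parameters, and the uniform subsampling that stands in for 3D-aware token pruning and KV-cache reuse) is a hypothetical sketch under those assumptions, not code from the StreamVLN paper or repository.

```python
from collections import deque

class SlowFastContext:
    """Toy slow-fast context buffer (illustrative only, not StreamVLN code).

    The "fast" context keeps the most recent dialogue turns verbatim in a
    sliding window; the "slow" context stores visual tokens from evicted
    turns after pruning them, standing in for the paper's 3D-aware token
    pruning and compressed memory.
    """

    def __init__(self, window=8, keep_per_turn=16, memory_size=256):
        self.window = window                  # recent turns kept verbatim
        self.keep_per_turn = keep_per_turn    # pruned tokens retained per old turn
        self.memory_size = memory_size        # hard cap on the slow memory
        self.fast = deque()                   # recent turns: (visual_tokens, text, action)
        self.slow = []                        # pruned visual tokens from older turns

    def add_turn(self, visual_tokens, text, action):
        """Append a new observation/instruction/action turn."""
        self.fast.append((visual_tokens, text, action))
        if len(self.fast) > self.window:
            old_visual, _, _ = self.fast.popleft()
            # Compress the evicted turn into the bounded slow memory.
            self.slow = (self.slow + self._prune(old_visual))[-self.memory_size:]

    def _prune(self, tokens):
        # Uniform subsampling as a placeholder for 3D-aware token pruning.
        stride = max(1, len(tokens) // self.keep_per_turn)
        return tokens[::stride][: self.keep_per_turn]

    def build_prompt_tokens(self):
        """Slow memory first, then the full recent dialogue window."""
        recent = [tok for visual, _, _ in self.fast for tok in visual]
        return self.slow + recent


if __name__ == "__main__":
    ctx = SlowFastContext(window=4, keep_per_turn=8, memory_size=64)
    for step in range(20):
        frame_tokens = list(range(step * 100, step * 100 + 64))  # fake visual tokens
        ctx.add_turn(frame_tokens, text=f"instruction step {step}", action="FORWARD")
    # Context length stays bounded even though 20 turns of 64 tokens were seen.
    print(len(ctx.build_prompt_tokens()))
```

The point of the sketch is only the shape of the mechanism: recent turns stay uncompressed for responsive action generation, while older visual state is aggressively reduced, so the total prompt length no longer grows linearly with the length of the video stream.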