Vision-Based Lane Following and Traffic Sign Recognition for Resource-Constrained Autonomous Vehicles
arXiv cs.CV / 4/28/2026
Key Points
- The paper proposes a lightweight, vision-based perception framework for resource-constrained autonomous vehicles that combines lane detection, lane tracking, and traffic sign recognition.
- For lane tracking, it uses a computationally efficient threshold-based lane segmentation approach, combined with a perspective (bird's-eye) transformation and histogram-based curvature estimation, to stay robust under varying illumination (a minimal OpenCV sketch appears after this list).
- A rule-based steering controller translates the perceived lane information into steering commands to maintain stable navigation (see the controller sketch below).
- For sign recognition, the study evaluates two lightweight CNNs, EfficientNet-B0 and MobileNetV2, trained on a custom dataset captured from an onboard camera (a transfer-learning sketch follows the list).
- Experiments indicate real-time performance with accurate lane tracking (a maximum offset RMSE of 3.16%); EfficientNet-B0 achieves higher classification accuracy (98.77% offline, 90% on-device in real time), while MobileNetV2 is faster and computationally cheaper.
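The lane pipeline in the second bullet follows a standard structure: binarize the frame, warp it to a top-down view, then locate the lane lines from a column histogram. Below is a minimal OpenCV sketch of that structure; the Otsu thresholding, warp points, and offset normalization are illustrative assumptions, not the paper's exact parameters, and the full sliding-window curvature fit is omitted for brevity.

```python
# Minimal sketch of a threshold + perspective-warp + histogram lane pipeline.
# Threshold method, warp points, and normalization are assumptions.
import cv2
import numpy as np

def detect_lane(frame):
    """Threshold, warp to a bird's-eye view, and locate lane bases via histogram."""
    # 1. Illumination-robust binarization (Otsu picks the threshold per frame).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2. Perspective transform: map a road trapezoid to a rectangle (top-down view).
    h, w = binary.shape
    src = np.float32([[w * 0.4, h * 0.6], [w * 0.6, h * 0.6],
                      [w * 0.95, h], [w * 0.05, h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(binary, M, (w, h))

    # 3. Column histogram over the lower half: peaks mark the lane-line bases.
    histogram = np.sum(warped[h // 2:, :], axis=0)
    midpoint = w // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(np.argmax(histogram[midpoint:])) + midpoint

    # Normalized lateral offset of the lane center from the image center.
    lane_center = (left_base + right_base) / 2.0
    offset = (lane_center - midpoint) / midpoint
    return warped, left_base, right_base, offset
```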
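The summary does not spell out the paper's steering rules, so the controller below is a hypothetical example of the rule-based idea: a dead band for driving straight plus a clipped proportional rule mapping lateral offset to a steering angle. The dead_band, gain, and max_angle values are assumptions.

```python
# Hypothetical rule-based steering controller; all parameter values are assumptions.
def steering_command(offset, dead_band=0.05, gain=25.0, max_angle=30.0):
    """Map a normalized lateral offset in [-1, 1] to a steering angle in degrees.

    offset < 0 means the lane center is left of the camera center (steer left);
    offset > 0 means steer right; offsets inside the dead band mean go straight.
    """
    if abs(offset) < dead_band:
        return 0.0                       # rule 1: small error -> drive straight
    angle = gain * offset                # rule 2: proportional correction
    return max(-max_angle, min(max_angle, angle))  # rule 3: clamp to the limit

# Usage: feed the offset computed by detect_lane() on each frame.
# e.g. steering_command(0.2) -> 5.0 (steer 5 degrees right)
```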
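For the sign classifier, a common way to train lightweight CNNs such as EfficientNet-B0 and MobileNetV2 on a small custom dataset is transfer learning from ImageNet weights. The sketch below uses torchvision; NUM_CLASSES and the head-replacement choice are assumptions, since the paper's exact training recipe is not given in this summary.

```python
# Transfer-learning sketch for the traffic-sign classifier (torchvision).
# NUM_CLASSES is a hypothetical value; adjust it to the custom dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # assumption: number of traffic-sign classes in the dataset

def build_classifier(arch="mobilenet_v2"):
    """Load an ImageNet-pretrained backbone and replace its classification head."""
    if arch == "mobilenet_v2":
        model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)
    else:  # "efficientnet_b0"
        model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
        model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)
    return model

model = build_classifier("efficientnet_b0")
logits = model(torch.randn(1, 3, 224, 224))  # one dummy 224x224 RGB frame
print(logits.shape)  # torch.Size([1, NUM_CLASSES])
```

The reported trade-off in the last bullet matches what these backbones are known for: EfficientNet-B0 has more capacity and tends to classify better, while MobileNetV2's inverted-residual blocks make it cheaper to run on embedded hardware.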