End-to-end Behavioral Cloning using NVIDIA's CNN Architecture
This project implements an end-to-end deep learning approach for autonomous vehicle control using Behavioral Cloning in the Udacity Car Simulator. The convolutional neural network learns to predict steering angles by mimicking human driving behavior.
- Implements NVIDIA's proven end-to-end learning architecture for autonomous driving
- Multi-camera training with left, center, and right camera views
- Comprehensive data augmentation for generalization
- Successfully drives on Track 2 despite training only on Track 1
The model uses NVIDIA's proven architecture with 5 convolutional layers followed by 4 fully connected layers:
```
Input: 66×200×3 RGB Image
├── Normalization Layer (x/127.5 - 1.0)
├── Conv2D: 24 filters, 5×5, stride 2×2, ELU
├── Conv2D: 36 filters, 5×5, stride 2×2, ELU
├── Conv2D: 48 filters, 5×5, stride 2×2, ELU
├── Conv2D: 64 filters, 3×3, ELU
├── Conv2D: 64 filters, 3×3, ELU
├── Dropout (0.5)
├── Flatten (1164 neurons)
├── Dense: 100 neurons, ELU
├── Dense: 50 neurons, ELU
├── Dense: 10 neurons, ELU
└── Output: 1 neuron (Steering Angle)
```

Total Parameters: 348,219
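For reference, a minimal Keras sketch of this layer stack; the `build_model` name is illustrative, not necessarily the repository's actual code:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_model():
    """NVIDIA-style steering model, matching the diagram above."""
    return Sequential([
        # Normalize pixels from [0, 255] to [-1, 1]
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),
        Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Dropout(0.5),
        Flatten(),
        Dense(100, activation='elu'),
        Dense(50, activation='elu'),
        Dense(10, activation='elu'),
        Dense(1),  # steering angle
    ])
```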
Python dependencies (requirements.txt):

```
tensorflow>=2.8.0
keras>=2.8.0
opencv-python>=4.5.0
python-socketio>=5.5.0
eventlet>=0.33.0
pillow>=9.0.0
```

- Clone the repository:

```bash
git clone https://github.com/yourusername/AutoPilot.git
cd AutoPilot
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Download the Udacity Car Simulator:
  - Track 1 - Simple track used for training
  - Track 2 - Complex track for testing generalization
- Launch the simulator and select TRAINING MODE
- Select Track 1 and click RECORD
- Drive smoothly for 2-3 laps (center lane + recovery maneuvers)
- Include both clockwise and counter-clockwise laps
Data Structure:
- `driving_log.csv`: Contains image paths and steering angles
- `IMG/`: Three camera perspectives (left, center, right)
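A sketch of how such a log can be loaded for multi-camera training. The header-less column order below is the simulator's usual format (an assumption if your recording differs), and the ±0.25 side-camera correction matches the training configuration further down:

```python
import pandas as pd

# The simulator's driving_log.csv has no header row; the usual column
# order is: center, left, right, steering, throttle, brake, speed.
columns = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']
log = pd.read_csv('driving_log.csv', names=columns)

CORRECTION = 0.25  # steering correction for the side cameras

# Build (image_path, steering_angle) pairs from all three cameras: the left
# camera sees the car as if it drifted left, so steer right (+CORRECTION),
# and vice versa for the right camera.
samples = []
for _, row in log.iterrows():
    samples.append((row['center'], row['steering']))
    samples.append((row['left'],   row['steering'] + CORRECTION))
    samples.append((row['right'],  row['steering'] - CORRECTION))
```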
Critical augmentation techniques for generalization (a short code sketch follows the list):
- Crop & Resize: Remove sky and hood to focus on road
- Horizontal Flip: Eliminate directional bias
- Random Shift: Simulate off-center driving
- Brightness Adjustment: Handle different weather conditions
- Random Shadows: Adapt to varying lighting
- Random Blur: Simulate camera limitations
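As a concrete illustration, here is a sketch of the crop/resize, flip, and brightness steps using OpenCV and NumPy. The function names, crop rows, and brightness range are assumptions for illustration, not the project's exact values:

```python
import cv2
import numpy as np

def preprocess(image):
    """Crop sky and hood, then resize to the network's 66x200 input."""
    cropped = image[60:-25, :, :]          # crop rows: assumed example values
    return cv2.resize(cropped, (200, 66))  # cv2.resize takes (width, height)

def random_flip(image, angle):
    """Mirror the image and negate the angle to remove directional bias."""
    if np.random.rand() < 0.5:
        return cv2.flip(image, 1), -angle
    return image, angle

def random_brightness(image):
    """Scale the V channel in HSV space to simulate lighting changes."""
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] *= np.random.uniform(0.4, 1.2)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2], 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```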
| Parameter | Value |
|---|---|
| Input Shape | 66×200×3 |
| Learning Rate | 0.0001 |
| Epochs | 50 |
| Batch Size | 32 |
| Train/Val Split | 80/20 |
| Steering Correction | ±0.25 |
| Dropout | 0.5 |
Train the model:

```bash
python behavioral_cloning.py
```

The best model is automatically saved as `model_best.h5` based on validation loss.
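Under the configuration above, the training setup plausibly looks like the following sketch. The MSE loss and the `X`/`y` arrays are assumptions; the learning rate, split, batch size, epoch count, and checkpointing come from the table and the note above:

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint

model = build_model()  # the NVIDIA-style model sketched earlier
model.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')

# Keep only the weights with the lowest validation loss.
checkpoint = ModelCheckpoint('model_best.h5', monitor='val_loss',
                             save_best_only=True)

# X: preprocessed 66x200x3 images, y: steering angles (hypothetical arrays).
model.fit(X, y,
          validation_split=0.2,   # 80/20 train/val split
          batch_size=32,
          epochs=50,
          callbacks=[checkpoint])
```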
- Launch the simulator and select AUTONOMOUS MODE
- Choose a track (Track 1 or Track 2)
- Run the drive script:
```bash
python drive.py model_best.h5
```

Optional: set a target speed:

```bash
python drive.py model_best.h5 --speed 15
```
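For context, drive scripts for this simulator typically run a small socketio/eventlet server that receives telemetry frames and replies with steering commands. The sketch below assumes the simulator's usual event names (`telemetry`, `steer`) and port 4567, and reuses the hypothetical `preprocess` helper from the augmentation sketch; treat the details as illustrative rather than the repository's exact code:

```python
import base64
from io import BytesIO

import eventlet
import numpy as np
import socketio
from PIL import Image
from tensorflow.keras.models import load_model

sio = socketio.Server()
model = load_model('model_best.h5')

@sio.on('telemetry')
def telemetry(sid, data):
    # Decode the base64-encoded center-camera frame sent by the simulator.
    image = np.asarray(Image.open(BytesIO(base64.b64decode(data['image']))))
    image = preprocess(image)  # same crop/resize as in training (see above)
    steering = float(model.predict(image[None], verbose=0)[0][0])
    # Reply with the predicted steering angle and a fixed throttle.
    sio.emit('steer', data={'steering_angle': str(steering), 'throttle': '0.2'})

# Wrap the socketio server in a WSGI app and serve it with eventlet.
app = socketio.WSGIApp(sio)
eventlet.wsgi.server(eventlet.listen(('', 4567)), app)
```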
Results:

| Metric | Value |
|---|---|
| Final Training Loss | 0.0089 |
| Final Validation Loss | 0.0092 |
| Track 1 Performance | ✅ Complete lap, smooth driving |
| Track 2 Performance | ✅ Successful generalization |
The model successfully generalizes to Track 2 (unseen during training) thanks to comprehensive data augmentation and the robust NVIDIA architecture.
```
AutoPilot/
├── behavioral_cloning.py   # Training script
├── drive.py                # Autonomous driving script
├── requirements.txt        # Dependencies
├── driving_log.csv         # Training data log
├── IMG/                    # Training images
├── model_best.h5           # Trained model
└── docs/                   # Documentation images
```


