🌐🚘 End-to-end autonomous driving via Behavioral Cloning (BC) in the Udacity self-driving car simulator. An NVIDIA CNN maps raw camera pixels directly to steering commands, and image augmentation lets the model generalize to a complex, unseen track.


Deep Learning Model for Simulating a Self-Driving Car

End-to-end Behavioral Cloning using NVIDIA's CNN Architecture

System Architecture


Overview

This project implements an end-to-end deep learning approach for autonomous vehicle control using Behavioral Cloning in the Udacity Car Simulator. The convolutional neural network learns to predict steering angles by mimicking human driving behavior.

Key Highlights

  • Implements NVIDIA's proven end-to-end learning architecture for autonomous driving
  • Multi-camera training with left, center, and right camera views
  • Comprehensive data augmentation for generalization
  • Successfully drives on Track 2 despite training only on Track 1

Network Architecture

NVIDIA CNN Architecture


The model uses NVIDIA's proven architecture with 5 convolutional layers followed by 4 fully connected layers:

Input: 66×200×3 RGB Image
├─ Normalization Layer (x/127.5 - 1.0)
├─ Conv2D: 24 filters, 5×5, stride 2×2, ELU
├─ Conv2D: 36 filters, 5×5, stride 2×2, ELU
├─ Conv2D: 48 filters, 5×5, stride 2×2, ELU
├─ Conv2D: 64 filters, 3×3, ELU
├─ Conv2D: 64 filters, 3×3, ELU
├─ Dropout (0.5)
├─ Flatten (1164 neurons)
├─ Dense: 100 neurons, ELU
├─ Dense: 50 neurons, ELU
├─ Dense: 10 neurons, ELU
└─ Output: 1 neuron (Steering Angle)

Total Parameters: 348,219
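
For reference, a minimal Keras sketch of this layer stack (an illustration, not the project's exact training script; the learning rate is taken from the training configuration below, and the MSE loss is assumed since the output is a single continuous steering angle):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam

def build_model():
    # Mirrors the diagram above: in-model normalization, five conv
    # layers, dropout, then four fully connected layers.
    model = Sequential([
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),
        Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Conv2D(64, (3, 3), activation='elu'),
        Dropout(0.5),
        Flatten(),
        Dense(100, activation='elu'),
        Dense(50, activation='elu'),
        Dense(10, activation='elu'),
        Dense(1),  # steering angle
    ])
    model.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')
    return model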


Installation

Requirements

tensorflow>=2.8.0
keras>=2.8.0
opencv-python>=4.5.0
python-socketio>=5.5.0
eventlet>=0.33.0
pillow>=9.0.0

Setup

  1. Clone the repository:
git clone https://github.com/yourusername/AutoPilot.git
cd AutoPilot
  2. Install dependencies:
pip install -r requirements.txt
  3. Download the Udacity Car Simulator.

Dataset Collection

Available Tracks

Track 1 - Simple track used for training

Track 2 - Complex track for testing generalization

Collecting Data

  1. Launch the simulator and select TRAINING MODE
  2. Select Track 1 and click RECORD
  3. Drive smoothly for 2-3 laps (center lane + recovery maneuvers)
  4. Include both clockwise and counter-clockwise laps

Data Structure:

  • driving_log.csv: Contains image paths and steering angles
  • IMG/: Three camera perspectives (left, center, right)

Data Augmentation

Critical augmentation techniques for generalization:

  1. Crop & Resize: Remove sky and hood to focus on road
  2. Horizontal Flip: Eliminate directional bias
  3. Random Shift: Simulate off-center driving
  4. Brightness Adjustment: Handle different weather conditions
  5. Random Shadows: Adapt to varying lighting
  6. Random Blur: Simulate camera limitations
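
A sketch of three of these transforms, assuming RGB numpy images (the crop bounds and brightness range here are illustrative assumptions, not the project's exact values):

import cv2
import numpy as np

def crop_and_resize(image):
    # Drop the sky (top) and hood (bottom), then match the network input.
    # The 60/25-pixel bounds are assumptions for illustration.
    return cv2.resize(image[60:-25, :, :], (200, 66))

def random_flip(image, angle):
    # Mirror the frame half the time and negate the steering angle,
    # removing the directional bias of a loop track.
    if np.random.rand() < 0.5:
        return cv2.flip(image, 1), -angle
    return image, angle

def random_brightness(image):
    # Scale the V channel in HSV space to mimic lighting changes.
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * np.random.uniform(0.4, 1.3), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)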

Training

Configuration

Parameter            Value
Input Shape          66×200×3
Learning Rate        0.0001
Epochs               50
Batch Size           32
Train/Val Split      80/20
Steering Correction  ±0.25
Dropout              0.5
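
The ±0.25 steering correction compensates for the side cameras' offset: the left camera sees the scene as if the car were too far left, so its frame is paired with a slightly larger (rightward) angle, and vice versa. A loading sketch, assuming the standard Udacity driving_log.csv column order (center, left, right, steering, throttle, brake, speed):

import csv

CORRECTION = 0.25

def load_samples(log_path='driving_log.csv'):
    # Each log row yields three (image_path, angle) training samples.
    samples = []
    with open(log_path) as f:
        for center, left, right, steering, *_ in csv.reader(f):
            angle = float(steering)
            samples.append((center.strip(), angle))
            samples.append((left.strip(), angle + CORRECTION))   # steer back toward center
            samples.append((right.strip(), angle - CORRECTION))
    return samples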

Run Training

python behavioral_cloning.py

The best model is automatically saved as model_best.h5 based on validation loss.
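
In Keras terms, that checkpointing amounts to something like the following sketch (build_model is from the architecture section above; images and angles stand for the preprocessed training arrays produced by the loading and augmentation steps):

from tensorflow.keras.callbacks import ModelCheckpoint

model = build_model()
checkpoint = ModelCheckpoint('model_best.h5', monitor='val_loss',
                             save_best_only=True, verbose=1)
model.fit(images, angles,          # preprocessed frames and steering angles
          validation_split=0.2,    # 80/20 train/val split
          epochs=50, batch_size=32,
          callbacks=[checkpoint])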


Testing in Autonomous Mode

  1. Launch the simulator and select AUTONOMOUS MODE
  2. Choose a track (Track 1 or Track 2)
  3. Run the drive script:
python drive.py model_best.h5

Optional: Set target speed

python drive.py model_best.h5 --speed 15
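
Under the hood, a drive script like this speaks the simulator's Socket.IO telemetry protocol; a condensed sketch under assumptions (port 4567 is the simulator's default, the fixed throttle and crop bounds are illustrative):

import base64
from io import BytesIO

import cv2
import eventlet
import eventlet.wsgi
import numpy as np
import socketio
from PIL import Image
from tensorflow.keras.models import load_model

sio = socketio.Server()
model = load_model('model_best.h5')

@sio.on('telemetry')
def telemetry(sid, data):
    # Decode the center-camera frame, apply the same crop/resize as in
    # training (normalization lives inside the model), predict a steering
    # angle, and send it back with a throttle value.
    image = np.asarray(Image.open(BytesIO(base64.b64decode(data['image']))))
    image = cv2.resize(image[60:-25, :, :], (200, 66))
    steering = float(model.predict(image[None], verbose=0)[0, 0])
    sio.emit('steer', data={'steering_angle': str(steering), 'throttle': '0.2'})

if __name__ == '__main__':
    app = socketio.WSGIApp(sio)
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)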

Results

Loss Over Epochs

Performance Metrics

Metric                 Value
Final Training Loss    0.0089
Final Validation Loss  0.0092
Track 1 Performance    ✅ Complete lap, smooth driving
Track 2 Performance    ✅ Successful generalization

The model successfully generalizes to Track 2 (unseen during training) thanks to comprehensive data augmentation and the robust NVIDIA architecture.


Project Structure

AutoPilot/
├── behavioral_cloning.py      # Training script
├── drive.py                   # Autonomous driving script
├── requirements.txt           # Dependencies
├── driving_log.csv            # Training data log
├── IMG/                       # Training images
├── model_best.h5              # Trained model
└── docs/                      # Documentation images
