Synthetic-video-generation-for-Autonomous-cars

A huge challenge for autonomous vehicles (AVs) is to have a dataset that captures the multitudinous real-world driving conditions. The currently available video datasets are not annotated, and most of them are not high-resolution, which is again an impediment to object detection. I am excited to solve this problem by annotating and generating a photo-realistic synthetic video dataset for AVs using DeepLab and conditional GANs.


Synthetic video generation for autonomous vehicles

:sassy_man: Don’t want to read? Watch the 6-min video :smiley: !

Why have I selected this problem?

Statistically, up to 1.2 million deaths occur each year from car accidents across the globe, most of them caused by human error. Autonomous vehicle technology could drastically reduce these accidents. Self-driving car companies are constantly trying to make their autonomous vehicles more robust by capturing a wide distribution of possible driving scenarios, but recurring crashes show they have not yet succeeded. These autonomous systems learn from driving videos, and the problem with currently available video datasets is that they are:

  1. Not annotated.
  2. Mostly not high-resolution, which is again an impediment to object detection.

What am I offering?

An AI software solution with help of which you can:

  1. Generate semantic segmentation masks for existing videos.

  2. Generate photo-realistic, high-resolution new driving videos.

What does my full architecture look like?

It mainly consists of three components:

  1. A video-to-frame-sequence generator using OpenCV.
  2. Generation of a semantic-segmentation mask for each frame in the sequence using the DeepLab model.
  3. Generation of new photo-realistic, high-resolution videos from the sequence of segmentation-masked frames using a conditional Generative Adversarial Network (cGAN) framework.

The full architecture (starting from LEFT to RIGHT):
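Component 2 runs a DeepLab model over each extracted frame to produce a per-pixel class map. The sketch below uses torchvision's DeepLabv3 as a stand-in; the project's exact DeepLab variant, checkpoint, and class set are assumptions, and `weights=None` keeps the example offline (a real run would load pretrained weights).

```python
# Sketch of component 2: per-frame semantic segmentation with a DeepLab model.
# torchvision's DeepLabv3-ResNet50 stands in for the project's DeepLab setup.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=21)  # 21 = Pascal VOC classes
model.eval()

frame_batch = torch.rand(1, 3, 240, 320)   # one normalized RGB frame, NCHW
with torch.no_grad():
    logits = model(frame_batch)["out"]     # (1, 21, 240, 320) class scores
mask = logits.argmax(dim=1)                # (1, 240, 320) per-pixel class ids
```

The `mask` tensor is what gets rendered as the color-coded segmentation frame and fed to the next stage.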
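Component 3 conditions a generator on the segmentation masks to synthesize RGB frames, in the spirit of pix2pixHD/vid2vid. The tiny encoder-decoder below is an illustrative stand-in for the project's actual network; the class name, layer sizes, and label count are assumptions.

```python
# Sketch of component 3: a conditional generator that maps a one-hot semantic
# segmentation map to an RGB frame. Deliberately tiny; real cGAN generators
# (pix2pixHD, vid2vid) are far deeper and are trained against a discriminator.
import torch
import torch.nn as nn

NUM_CLASSES = 21  # assumed label count; one input channel per semantic class

class MaskToFrameGenerator(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, stride=2, padding=1),  # downsample
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # upsample back
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Tanh(),  # RGB in [-1, 1], the usual GAN output range
        )

    def forward(self, one_hot_mask):
        return self.net(one_hot_mask)

# Turn per-pixel class ids into a one-hot tensor and synthesize a frame.
mask = torch.randint(0, NUM_CLASSES, (1, 64, 64))       # (N, H, W) class ids
one_hot = nn.functional.one_hot(mask, NUM_CLASSES)      # (N, H, W, C)
one_hot = one_hot.permute(0, 3, 1, 2).float()           # (N, C, H, W)
fake_frame = MaskToFrameGenerator()(one_hot)            # (1, 3, 64, 64)
```

Running this per mask in the sequence and re-encoding the outputs with OpenCV closes the loop back to a video.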