Civic Crowdsourcing

In December 2020, the Hon'ble High Court directed the Karnataka State Legal Services Authority to inspect the roads and record their condition. This was in response to a writ petition filed by citizens against the BBMP over poor maintenance of roads and footpaths.
The Reap Benefit chatbot gave citizens a way to report these problems instantly, and the data collected through it was used as evidence in the report submitted to the High Court.

Decisions made for the public good are often taken by representatives, governments, scholars, or policymakers. Involving the actual stakeholders, the public, leads to more substantial outcomes. A typical complaint portal first requires you to log in and fill in your details; only then can you start reporting: send the location, the image, the landmark, the description, the duration. That is at least six steps. This makes it laborious for the public to register multiple complaints, and little is done to ensure resolution. This was the motivation behind our WhatsApp chatbot: almost everyone uses WhatsApp, and sending an image to a contact takes seconds.

We aim to establish a connection between the public and the authorities through an easy-to-use portal: a single platform for multiple civic issues. Updates and progress will be visible on a dashboard, making the process transparent and keeping both parties informed and accountable. While we will rely in part on the authorities, we also want to take matters into our own hands and involve other organisations and communities, e.g. by organising clean-up drives or spot-fixing garbage spots.

Watch this space for further updates on our journey!


One learning from our pilot was that responders sometimes did not follow the
instructions when reporting issues. This disrupted the pipeline: those data
points were not registered automatically in our database and had to be collected by
manually scouring the chats. The flow must be improved to engage the
audience efficiently and allow easy data collection. In parallel, we have begun to classify different issues automatically using image processing. In this case, the
user simply sends a landmark-annotated photo, and we identify and classify the
issue being reported.


We started out by detecting potholes in a given image. For this task we used YOLOv5. YOLO stands for "You Only Look Once" and is one of the most versatile and well-known families of object detection models; for real-time object detection it is often the first choice of data scientists and machine learning engineers.

YOLOv5, released on 27 May 2020 by Glenn Jocher, is the newest version in the YOLO object detection series and was implemented in PyTorch.

YOLO algorithms divide the input image into an SxS grid. Each grid cell is responsible for detecting objects whose centres fall inside it, and predicts the bounding boxes for those objects.
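As an illustration of the grid idea (this helper is our own sketch, not part of the YOLOv5 codebase), the cell that "owns" an object is simply the one containing its centre:

```python
def owning_cell(cx, cy, img_w, img_h, s):
    """Return the (col, row) of the SxS grid cell containing centre (cx, cy)."""
    # Scale the centre into grid coordinates and truncate to a cell index;
    # clamp so a centre exactly on the right/bottom edge stays in the grid.
    col = min(int(cx / img_w * s), s - 1)
    row = min(int(cy / img_h * s), s - 1)
    return col, row

# A pothole centred at (300, 150) in a 416x416 image, with a 13x13 grid:
print(owning_cell(300, 150, 416, 416, 13))  # -> (9, 4)
```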

We first created a dataset of pothole images using Roboflow.
Roboflow helps us structure our dataset and convert it to the required format.

The following code cells install the dependencies required by the YOLOv5 model.
The first cell clones the model's GitHub repository, pins it to a specific commit, and installs the requirements.

!git clone  

%cd yolov5

!git reset --hard 886f1c03d839575afecb059accf74296fad395b6

!pip install -qr requirements.txt  

The second code cell imports the dependencies required by the model (PyTorch, plus display and download utilities) and confirms the setup.

import torch

from IPython.display import Image, clear_output  # to display images

from utils.google_utils import gdrive_download  # to download models/datasets

print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))

We then import the dataset created with Roboflow and open the data.yaml file inside it.
This file defines where our directories are located and summarises the classes in our dataset.

!pip install gdown

%cd /content

!gdown ""



%cat data.yaml
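For a single-class pothole dataset, data.yaml looks roughly like this (the paths and class name below are illustrative and depend on how your Roboflow export is structured):

```yaml
train: ../train/images
val: ../valid/images

nc: 1
names: ['pothole']
```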

The model configuration and architecture are then defined.
There is an option to modify the configuration, but we stick to the defaults.

import yaml

with open("data.yaml", 'r') as stream:

    num_classes = str(yaml.safe_load(stream)['nc'])

%cat /content/yolov5/models/yolov5s.yaml

from IPython.core.magic import register_line_cell_magic


@register_line_cell_magic
def writetemplate(line, cell):

    # Write the cell contents to the file named by `line`,
    # substituting {...} placeholders from the global namespace
    with open(line, 'w') as f:

        f.write(cell.format(**globals()))


%%writetemplate /content/yolov5/models/custom_yolov5s.yaml

# parameters

nc: {num_classes}  # number of classes

depth_multiple: 0.33  # model depth multiple

width_multiple: 0.50  # layer channel multiple

# anchors

anchors:
  - [10,13, 16,30, 33,23]  # P3/8

  - [30,61, 62,45, 59,119]  # P4/16

  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone

backbone:
  # [from, number, module, args]

  [[-1, 1, Focus, [64, 3]],  # 0-P1/2

   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4

   [-1, 3, BottleneckCSP, [128]],

   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8

   [-1, 9, BottleneckCSP, [256]],

   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16

   [-1, 9, BottleneckCSP, [512]],

   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32

   [-1, 1, SPP, [1024, [5, 9, 13]]],

   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head

head:
  [[-1, 1, Conv, [512, 1, 1]],

   [-1, 1, nn.Upsample, [None, 2, 'nearest']],

   [[-1, 6], 1, Concat, [1]],  # cat backbone P4

   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],

   [-1, 1, nn.Upsample, [None, 2, 'nearest']],

   [[-1, 4], 1, Concat, [1]],  # cat backbone P3

   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],

   [[-1, 14], 1, Concat, [1]],  # cat head P4

   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],

   [[-1, 10], 1, Concat, [1]],  # cat head P5

   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

We can now proceed to training our model!
At this stage we can specify a number of arguments, including:

img: input image size
batch: batch size
epochs: number of training epochs
data: path to our data.yaml file
cfg: model configuration file
weights: path to custom weights
name: name for the results folder
nosave: save only the final checkpoint
cache: cache images for faster training


%cd /content/yolov5/

!python train.py --img 416 --batch 5 --epochs 200 --data '../data.yaml' --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache

We can visualize the performance of our model using TensorBoard.
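In Colab, TensorBoard can be launched inline with the notebook magics below (YOLOv5 writes its training logs under runs/ by default):

```
%load_ext tensorboard

%tensorboard --logdir runs
```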

Now that our model has been trained, we save the model weights for future use.

from google.colab import drive

drive.mount('/content/gdrive')

%cp -r /content/yolov5/runs/train/yolov5s_results/weights/ /content/gdrive/My\ Drive

We have thus created a custom YOLOv5 model for detecting potholes!
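To try the trained detector on new images, the saved weights can be passed to YOLOv5's detect.py script. A notebook cell along these lines would work (the --source path and --conf threshold here are placeholders; point them at your own test images and preferred confidence):

```
!python detect.py --weights runs/train/yolov5s_results/weights/best.pt --img 416 --conf 0.4 --source ../test/images
```

The annotated output images are written to a runs/detect/ subfolder by default.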