Tomasz Ł

In some services and industries, security and quality control go hand in hand. In businesses such as banking, where nearly every step of the customer service process gets recorded (most of the service takes place near cash desks), it is hard to overlook CCTV systems as a source of data for quality control. It turned out that despite considerable demand for a decent CCTV-based quality control service, there was a sizeable gap in the market (at least there was when I started the project). Little had been done in this niche to use automation to deliver more accurate results to the service recipient while reducing operators' workload by automating tasks that do not require human judgment. The platform (you might notice this is not the actual name of the service, but an alias used for this presentation) was designed to fill this gap, and the concept went from idea to reality in about a month.

How it all started

In 2016, I got a request from one of the international banks about optimizing a quality control service the bank had already been using for quite some time. The results had not been satisfactory to the client: the way the provider performed the service left huge room for error (the human element), producing data that was neither accurate nor useful from the perspective of the report's recipient. I took a weekend to come up with a possible solution, and a week later I presented the first working (not yet fully featured) version of the platform. After another three weeks, I was ready to deploy.

The problem

It turned out that the initial service consisted of an operator manually logging into every DVR on the service recipient's network (all login credentials were stored in an Excel sheet…), one by one, twice a day, and watching five minutes of live footage. The operator was then supposed to answer a set of questions the client wanted addressed. Everything was done manually, and there was no way to confront a generated report with reality: the recipient's questions were answered based on live footage at random moments during the day, and none of that footage was persisted on the service provider's side. The bank had no central access to the recordings on the DVRs across its branch network. Yet companies are still willing to pay for such a service…

The solution

The solution was rather obvious: automate every task that could be performed after a simple configuration without engaging an operator. I also saw room for improvement in how operators work. By implementing a simple interface that presents only the minimal information needed to validate footage against the recipient's set of questions, I was able to significantly boost operator utilization and performance, eliminating the manual overhead of logging into devices and similar chores. A scheduler captures footage from each DVR automatically over HTTP and stores the captured media in the solution's storage (an S3 bucket in this case). Since the initial service used five-minute videos, I made one more enhancement: I dropped video entirely. The recipient's typical question set did not justify watching such long (and not very dynamic) footage; still images suffice and actually produce more reliable results, because the scheduler can capture images far more frequently than just twice a day. This raises the chance of catching an incident where the service recipient's quality protocol is violated. The approach also eliminated the availability problem: since all automated captures are persisted, delayed frames can be validated whenever an operator becomes available (in practice it gets a bit more complex, as regular operation depends on multiple operators and involves task queueing to distribute the work accordingly).
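The scheduled-capture idea can be sketched roughly as follows. This is a minimal illustration in Python, not the actual Laravel implementation; the function names, the key layout, and the injected `fetch`/`store` callbacks (standing in for the DVR's HTTP snapshot endpoint and the S3 upload) are all assumptions made for the example:

```python
from datetime import datetime, time, timedelta
from typing import Callable, List

def capture_times(interval_minutes: int, day: datetime) -> List[datetime]:
    """All scheduled capture moments for one day at a fixed interval."""
    start = datetime.combine(day.date(), time.min)
    slots = (24 * 60) // interval_minutes
    return [start + timedelta(minutes=interval_minutes * i) for i in range(slots)]

def capture_frame(dvr_id: str, moment: datetime,
                  fetch: Callable[[str], bytes],
                  store: Callable[[str, bytes], None]) -> str:
    """Fetch one still image from a DVR and persist it under a stable key.

    `fetch` would wrap an HTTP snapshot request to the DVR, and `store`
    would upload the bytes to the S3 bucket; both are injected here so
    the scheduling logic stays independent of transport details."""
    key = f"captures/{dvr_id}/{moment:%Y-%m-%d/%H%M}.jpg"
    store(key, fetch(dvr_id))
    return key
```

The point of the frequency argument is visible in the numbers: compared with two five-minute live sessions per day, even a modest 30-minute interval yields 48 persisted samples per DVR per day, each of which can be validated later.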
Another advantage is that historical data in the system can be revalidated at any time (within two years of the capture date) against a different set of questions, should new ones appear. The operator's work also becomes much simpler and less stressful: reports are generated and sent automatically upon completion, and it is the system's responsibility to track each report's progress (every validated "chunk" of a report is checked against all data captured in a given period, set individually per client as the reporting interval).
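The progress-tracking responsibility described above can be illustrated with a small sketch. The chunk identifiers and the shape of the `expected` map are invented for this example; the actual system's data model is not disclosed here:

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class ReportProgress:
    """Tracks which capture 'chunks' of a reporting interval are validated.

    `expected` maps a chunk id (e.g. a DVR id plus a time slot) to the
    number of captured frames that must be validated for that chunk."""
    expected: Dict[str, int]
    validated: Dict[str, int] = field(default_factory=dict)

    def mark_validated(self, chunk_id: str) -> None:
        """Record that an operator validated one frame of this chunk."""
        self.validated[chunk_id] = self.validated.get(chunk_id, 0) + 1

    def pending(self) -> Set[str]:
        """Chunks still short of their expected validation count."""
        return {c for c, n in self.expected.items()
                if self.validated.get(c, 0) < n}

    def complete(self) -> bool:
        """The report can be generated and sent once nothing is pending."""
        return not self.pending()
```

In this sketch, report generation would simply be triggered the first time `complete()` returns true for a client's reporting interval.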
Moving on to the technologies: I implemented the back end in Laravel as a modular monolith. Laravel has some convenient scheduling facilities (driven by cron) and a handy wrapper around FFmpeg, which I used for capturing media over HTTP and RTMP; it fit my needs well on this one. On the other hand, having chosen VueJS + Semantic UI (jQuery, yay!) for the front end, getting SSR to work with PHP turned out to be quite painful. The monolithic architecture made sense because the solution was initially designed for a single (however large) client with a couple of hundred DVRs on its network. Time was crucial, so I took the project from concept to deployment in a bit over one and a half months, choosing the technologies I was most confident with based on my previous experience.
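For reference, capturing a single still frame from an HTTP or RTMP stream with FFmpeg boils down to one short invocation. The actual implementation used Laravel's FFmpeg wrapper; the sketch below just assembles the equivalent command line (the stream URL and output path are placeholders), using standard ffmpeg flags:

```python
from typing import List

def snapshot_command(source_url: str, out_path: str) -> List[str]:
    """Build an ffmpeg invocation that grabs one still frame from a stream.

    Standard ffmpeg options: -y overwrites the output file, -frames:v 1
    stops after a single video frame, -q:v 2 requests high JPEG quality."""
    return ["ffmpeg", "-y", "-i", source_url,
            "-frames:v", "1", "-q:v", "2", out_path]
```

In a scheduler job this would be executed with something like `subprocess.run(snapshot_command(url, path), check=True)`, with a timeout so an unreachable DVR does not stall the queue.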

Where I go next

I retain full rights to my technology, so the initial concept has evolved a bit since I first came up with it. I have been working on converting the solution to a SaaS model, in which different service providers could use my platform to serve their own clients. I see no point in recruiting and managing operators myself; I would rather focus on the technology and let others run their business with my tools at their service. As the SaaS project is still alive but not yet released, I decided not to disclose the actual name and visual identity.