Tomasz Ł
ENGINEER, MUSIC PRODUCER
INDIGO MIND

TomaszLos.io

I have to admit that this project must have been the most demanding in my entire career as a software engineer. I have done a similar scope of work before, but never in such a short timeframe. And I have never been my own client.
The whole thing came to life in about five weeks of development (and I mean development, not integration work), spread over four months. I consider it quite a challenge, as I had to think through a concept I had had in mind for a couple of years (there were always more important things to do than this). I had to design the architecture, develop the business logic, pick a proper technology stack to serve my needs, design the interface, implement the solution, test it (as well as possible), and finally prepare the infrastructure for deployment. Additionally, the technology itself would be useless without proper content to fill it with, so I had to write that content, which was extremely challenging – the website's character is rather personal (despite its obvious technological and business functions). Content is not only text, so side tasks came up along the way, such as preparing statics – meaning a photo shoot I had to do in my living room. There is also the marketing part, because somehow you managed to reach this page, and it is quite unlikely you got the URL from me (this does not apply to certain people). I will get to that later. All you need to know, for now, is that this turned out to be an exciting experiment. Everyone is invited.

How did we get here

At a certain point (three, maybe four years ago), I realized that I already had some achievements (meaning the products of my work) in different fields, such as technology and music, and I could only see more coming in the future. This led me to the idea that I should somehow catalog it all and keep the catalog growing, as it might come in useful when approaching new challenges – and who knows where it could lead. I wanted a kind of database for keeping track of my work and progress. I did not seriously consider making it available to the public. Only when I started spending more time working with corporations and their teams did I begin to take that under consideration. When the time came, I was required to share my knowledge beyond my usual technical work. I figured – why would I repeat myself? Then my idea for the knowledge base evolved. I thought such functionality could easily be integrated with my personal website (which I had intended to build for some time as well), where people I am yet to work with could get not only technical knowledge but also a broader idea about who they work with, possibly leading to better understanding in our professional relations. Plus, if they enjoy some of my non-work-related content, I am happy to bring them some joy or inspiration. I have worked with multiple partners since this idea came to my mind, but I never had time to finally prove the concept. Until now…

What I finally decided to go with

The time came, and one of my partners – the one I had been spending most of my business time on (plus a massive amount of overtime) for quite some time – decided to end our partnership. I understood: this was the moment. I decided to disappear from the market for a relatively short period, providing my services to a minimal number of partners in the meantime. As I already had the concept, there were only a few final things to consider before starting the project for good. I knew that I wanted a personal page / electronic business card with a universal knowledge base, but how could it work? I figured that conventional forms of knowledge bases (such as wikis) would not do much good for my purpose and would not fit the convention of a personal website, so I knew I needed to focus more on multimedia and on capturing the input and output of my everyday operation in the most captivating way possible. Eventually, the knowledge base part would be "switched on" in the form of a blog, once the demand for knowledge sharing arose again, or whenever I had important thoughts I would like to persist in the form of text or multimedia. However, I also realized one more thing – at this point, I did not have a strong need to host a public knowledge base. I could have gone with just a personal page, but this and previous experiences taught me that if I did not approach the project at its full complexity now, I would probably never get to it in the future. Therefore, I was looking for a robust solution to fit my highly dynamic needs, easy to maintain and expand. I knew that this solution should be entirely mine, so I was looking at building it from scratch. Without knowing the final target, I started to see this project as a big unknown, without assumptions about strong goals – but, more importantly, as an experiment that cannot disappoint me.

Architecture

As the solution should be robust and scalable, I was looking at a microservice architecture. On the other hand, I wanted to keep a common, more monolithic code base for most of the functionality, as it seemed easier to maintain a relatively small application with a powerful back-end. With the use of containers and a proper NGINX setup, such a monolithic implementation can still be split into microservices very well. I like to use docker-compose in local development, which also saves me some resources by running one container locally for most of the back-end instead of multiple (in production, I decided to go with separate containers for serving images, music, videos, data, etc.). This configuration looks good "on paper" and proves itself in practice. At least, I am happy with it, considering the stack I chose to implement the solution.
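To illustrate the idea, here is a minimal sketch – the module names and the environment switch are made up for this example, not copied from my code base – of how one set of service modules can run as a single local monolith or one-per-container in production behind NGINX:

```javascript
// Hypothetical sketch: each service is an Express router, so the same code
// can run as one local app or as separate containers routed by NGINX.
const express = require('express');
const imagesService = require('./services/images'); // hypothetical module
const musicService = require('./services/music');   // hypothetical module
const dataService = require('./services/data');     // hypothetical module

const app = express();
const services = { images: imagesService, music: musicService, data: dataService };

if (process.env.SERVICE) {
  // Production: one container runs exactly one service;
  // NGINX routes /images, /music, /data to the right container.
  app.use(`/${process.env.SERVICE}`, services[process.env.SERVICE]);
} else {
  // Local development: a single container serves everything.
  for (const [name, service] of Object.entries(services)) {
    app.use(`/${name}`, service);
  }
}

app.listen(process.env.PORT || 3000);
```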
As for the front-end, the choice was pretty simple – JavaScript. I enjoy coding in VueJS, and as I had full freedom of choice, I decided to go with the Quasar Framework. What a fantastic job the team has done here! Its convenience, extensive selection of UI components, out-of-the-box SSR support, and reliable documentation make it a sure bet for such purposes. Not to mention Cordova and Capacitor support for easy mobile app building, with almost no additional configuration.
As for the back-end, the choice was not that hard, either. I have done similar projects in Laravel, for instance, but I wanted something much lighter and faster. I have recently become a fan of the idea that keeping a consistent technology across both the front-end and the back-end is the future of development (I should perhaps elaborate on that in a blog post sometime soon). Therefore, I went with JavaScript again – more precisely, NodeJS. When REST APIs are involved, I use ExpressJS. However, some services do not require it, as they communicate over a queue or pub/sub.
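For flavor, here is a minimal ExpressJS endpoint of the kind I mean – the model and route are purely illustrative:

```javascript
const express = require('express');
const Post = require('./models/post'); // hypothetical Mongoose model
const router = express.Router();

// GET /posts/:id – fetch a single document and return it as JSON.
router.get('/posts/:id', async (req, res) => {
  try {
    const post = await Post.findById(req.params.id);
    if (!post) return res.status(404).json({ error: 'Not found' });
    res.json(post);
  } catch (err) {
    res.status(400).json({ error: 'Invalid id' }); // e.g. malformed ObjectId
  }
});

module.exports = router;
```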

Database and data dumping

I decided to go with the document database MongoDB, considering the experimental character of the project and assuming that the models would grow in number and change shape around already existing data. Document databases give more flexibility around model changes than relational databases, while ODMs such as Mongoose for NodeJS offer the well-known convenience of ORMs for relational databases. At this point, I use 27 models, and at least a few more will be added soon for the new functionality I am yet to implement.
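A minimal sketch of what one of those models could look like – the field names here are my illustration, not the actual schemas:

```javascript
const mongoose = require('mongoose');

// Hypothetical content model; the real 27 schemas differ.
const postSchema = new mongoose.Schema({
  title: { type: String, required: true },
  body: String,
  media: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Media' }],
  publishedAt: Date,
}, { timestamps: true });

// Adding or removing fields needs no migration: existing documents
// simply lack the new fields until they are updated – the flexibility
// that made a document database attractive here.
module.exports = mongoose.model('Post', postSchema);
```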
I also implemented a mechanism for performing and restoring data dumps in sync with the multimedia content. The unzipped dump is about a third of the size of the data prepared to be served to users, as it contains only the files originally uploaded to the system. When a dump is restored (it can be uploaded as an archive to the management panel or restored from a local snapshot), the media processor service recreates the user-servable data by transcoding the original files from the dump package. At the launch of the site, the whole content served to users weighed around 35.8 GB.
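A rough sketch of how a restore could fan out the work – the directory layout and the Task model are assumptions made for this example:

```javascript
const fs = require('fs');
const path = require('path');
const Task = require('./models/task'); // hypothetical queue-backing model

// On restore, every original file in the dump becomes a queued task;
// the media processor then recreates the user-servable renditions.
async function restoreDump(dumpDir) {
  const originals = fs.readdirSync(path.join(dumpDir, 'originals')); // assumed layout
  for (const file of originals) {
    await Task.create({ type: 'transcode', source: file, status: 'pending' });
  }
}
```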

Multimedia and statics serving

This was the fun part, as I wanted to process and store multimedia entirely independently of outside services, and I wanted to do it as effectively and efficiently as possible. I needed a service for handling uploads, another one for processing uploaded files, and (at least) another one for serving processed files. The first and the last can simply be implemented as REST endpoints, but processing takes much more time and resources, so tasks should be queued rather than hot-executed. Such a service should communicate over a queue, so I implemented one (a very simple one, on top of the database) and backed it with Redis Pub/Sub to control the media processor's operation and monitor task progress.
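A simplified sketch of such a worker, under assumed names (the Task model, channel name, and processMedia routine are placeholders):

```javascript
const { createClient } = require('redis');
const Task = require('./models/task'); // hypothetical queue-backing model

async function runWorker() {
  const redis = createClient();
  await redis.connect();

  while (true) {
    // Atomically claim the oldest pending task, so concurrent workers
    // can share the same collection-backed queue without collisions.
    const task = await Task.findOneAndUpdate(
      { status: 'pending' },
      { status: 'processing', startedAt: new Date() },
      { sort: { createdAt: 1 }, new: true }
    );
    if (!task) {
      await new Promise((r) => setTimeout(r, 1000)); // idle, poll again
      continue;
    }

    // processMedia is a stand-in for the actual transcoding routine.
    await processMedia(task, async (percent) => {
      // Publish progress over Redis Pub/Sub so it can be monitored live.
      await redis.publish('media:progress', JSON.stringify({ id: task.id, percent }));
    });

    await Task.updateOne({ _id: task._id }, { status: 'done' });
  }
}
```

The atomic findOneAndUpdate is what makes a plain database collection safe to use as a queue here.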
For image processing, I use the JIMP package (optimization, resizing, watermarking), while for music, I currently do not process a thing – MP3 is cool. Videos, however, I transcode to HLS with FFmpeg, creating multi-bitrate streams with a separate track for audio. I can quickly reprocess any multimedia in the system from the originally uploaded file if, at any point, I decide to change something in my processing routines. By manipulating playlists, I can add captions to my videos, provide additional streaming formats (if the current ones turn out to be insufficient), or replace a soundtrack if I get a copyright complaint. Yeah, sometimes in my vids there is some background music I was listening to while working (but I try to tag the original songs and authors).
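The image side of the pipeline could look roughly like this with JIMP – the concrete sizes, quality, and watermark placement are assumptions, not my exact settings:

```javascript
const Jimp = require('jimp');

async function processImage(srcPath, destPath, watermarkPath) {
  const [image, watermark] = await Promise.all([
    Jimp.read(srcPath),
    Jimp.read(watermarkPath),
  ]);

  image
    .resize(1920, Jimp.AUTO) // scale to a max width, keep aspect ratio
    .quality(80)             // JPEG compression for lighter transfers
    .composite(watermark, 20, 20, {
      mode: Jimp.BLEND_SOURCE_OVER,
      opacitySource: 0.5,    // semi-transparent watermark
    });

  await image.writeAsync(destPath);
}
```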
As for serving statics, I have ExpressJS endpoints piping the data directly from the local filesystem to the response (though I also have S3 support implemented). Using streams makes this solution memory-efficient.
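The core of such an endpoint is only a few lines – paths and the route shape below are illustrative:

```javascript
const express = require('express');
const fs = require('fs');
const path = require('path');
const router = express.Router();

router.get('/images/:name', (req, res) => {
  // basename() strips any path components, guarding against traversal.
  const filePath = path.join('/data/images', path.basename(req.params.name));
  if (!fs.existsSync(filePath)) return res.sendStatus(404);

  res.type(path.extname(filePath));
  // Piping a stream keeps memory usage flat regardless of file size.
  fs.createReadStream(filePath).pipe(res);
});

module.exports = router;
```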

Statistics

Due to the experimental character of the project, some statistical system was a must-have. Usually, the obvious choice would be Google Analytics, but I also aimed at an entirely custom-made solution (because I can). I have GA integrated, but it is there more for validation (and possibly fail-over) purposes than for the actual statistics. I try to compare the results from my implementation with those from GA. When designing the solution, I made a few assumptions that may cause certain differences, but that is material for a separate post. My system (like most such systems) uses a simple cookie, does not profile the user (I do not care about my users' profiles or identities – I care about their privacy), and monitors different kinds of events for quite accurate results. I want to observe behaviors around my content, not the users themselves. Please keep in mind that I do not share any data I analyze (I do not monetize your data, nor my website at all).
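As a sketch of the general idea (not my exact implementation – the Event model and route are hypothetical, and cookie-parser plus JSON body parsing are assumed to be set up):

```javascript
const express = require('express');
const crypto = require('crypto');
const Event = require('./models/event'); // hypothetical model
const router = express.Router();

router.post('/events', async (req, res) => {
  // The cookie only distinguishes visits; it carries no identity.
  let visitor = req.cookies.visitor;
  if (!visitor) {
    visitor = crypto.randomUUID();
    res.cookie('visitor', visitor, { httpOnly: true, sameSite: 'lax' });
  }

  await Event.create({
    visitor,
    type: req.body.type,     // e.g. 'view', 'play', 'like'
    target: req.body.target, // which piece of content
    at: new Date(),
  });

  res.sendStatus(204);
});

module.exports = router;
```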

Administration

The administration panel is a separate application. I used the Quasar Framework here as well, but in SPA and Capacitor modes instead of SSR, unlike the main front-end application. As the panel requires authorization, there are some more differences – an authentication system based on JWT had to be implemented. Nothing fancy, but it works just fine. The panel's main functionality is obviously content management, but there is also output from the statistics, a messaging system, etc., all in real time. ViewModel updates are triggered by WebSocket communication.
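The JWT part really is nothing fancy – a minimal sketch with the jsonwebtoken package, using the usual Bearer-header convention (not necessarily my exact code):

```javascript
const jwt = require('jsonwebtoken');

// Express middleware: reject requests without a valid token.
function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) return res.sendStatus(401);

  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.sendStatus(401); // expired or tampered token
  }
}

// Usage: app.use('/admin', requireAuth, adminRouter);
module.exports = requireAuth;
```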

Content

Apart from all the technical aspects of my solution, another essential thing was content. I took a few days off before getting to writing the texts for the page, because I wanted my message to be as clear as it gets, and I needed to approach this challenge with a clear mind. Converting a piece of code to a human-readable message would probably result in an inconveniently long text; here, I faced quite the opposite problem – capturing complex matters in a relatively simple form.
Additionally, the preparation of media content (which at launch was about 12 GB unprocessed) took a few days (and I still have a lot of things unpublished), as most of my content is spread across different drives (including HDDs), and it takes a lot of time to go through it all. In one night, I reviewed over 18,000 pictures to pick the 75 candidates for the page.

Social functionalities

There are not many of them for now, as I launched only the likes functionality, but more is yet to come. I did not suddenly get lazy; rather, I wanted to reduce the "social feeling" of the page, making it more private and personal. You may have noticed that instead of a comment box, there is a message box. As my experiment moves on, I will collect more feedback and get a view of the general reception. From there, I can make further decisions regarding development directions and priorities.
Likes, however, are quite crucial in terms of feedback. I designed the functionality to give me all the valuable information I can get, so I count withdrawn likes as well. If a user likes some content and then decides to withdraw that like, that is information I would like to receive. Withdrawal is feedback.
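The idea in a few lines, under assumed names (the Content model and its counters are illustrative): withdrawing a like decrements the visible counter but is recorded separately, because withdrawal is a signal of its own.

```javascript
const Content = require('./models/content'); // hypothetical model

async function like(contentId) {
  await Content.updateOne({ _id: contentId }, { $inc: { likes: 1 } });
}

async function withdrawLike(contentId) {
  // The visible count goes down, but the withdrawal itself is kept.
  await Content.updateOne(
    { _id: contentId },
    { $inc: { likes: -1, withdrawnLikes: 1 } }
  );
}
```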

Marketing and monetization

Some visitors might get here through various advertising across the net and social media. Like every website, this project requires some marketing to get spinning before SEO starts to take effect and the page gets properly indexed. Apart from that, I occasionally test out some advertising of my profiles on specific social media. Do not take it as me seeking recognition. Sometimes I work on marketing strategies for my partners, and such experience gives me the chance to be more effective when needed. As I pay for this experience out of my own pocket, I feel no remorse promoting my very own content.
As for monetization, I do not want to monetize this project; I treat it more as an investment in myself and my partners. There are no ads, and all the links across the page are references meant to give the original authors credit. However, I did create a versatile piece of technology that I could use in many more applications, which is always a huge benefit.

What happens next

I will see where this experiment goes. In the near future, I will focus on enhancing current content and preparing new content (including more technical material, featuring code, etc.). Depending on feedback, user reports, and statistical analysis, I will decide on further development steps.