Timelapse system: a start

Everyone who’s interested in technology will at some point face the same struggle: how do I apply all these interesting technologies in my own projects?

This is really the start of a system that (at the moment of writing) stores over 750 thousand photos occupying 6.2 terabytes of storage, and it grows every single minute.

In the following comparison you can see a picture taken just after I started this project and one taken on the day this piece was published.

Times are changing

I’m living in an area that’s actively being developed, and for some reason I can’t let go of the past. I want to keep track of how the area is changing and feel nostalgic about “how things once were”. This is why I decided to put a camera on my balcony, with a Raspberry Pi and a cron job taking a picture every single minute using gPhoto2. The pictures would be saved on a simple NFS share backed by two 1 TB hard disks in RAID 1. That was it. Anyone working in IT will immediately see a couple of problems with this approach, to name a few:

  • Reliability
  • Monitoring
  • Observability


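The setup described above amounts to little more than a single crontab entry. A minimal sketch of what that looks like (the mount point and filename pattern are illustrative, not taken from my actual configuration):

```shell
# crontab on the Raspberry Pi: capture one frame every minute and write it
# straight to the NFS-backed photo share (mounted at /mnt/photos here).
# gPhoto2 expands strftime-style placeholders in --filename.
* * * * * gphoto2 --capture-image-and-download --filename "/mnt/photos/%Y%m%d-%H%M.jpg"
```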
As I was not actively looking at the pictures, it happened multiple times that the Raspberry Pi was hanging, or that for some reason the camera stopped responding, leaving thousands of hanging cron jobs behind. The unreliable outdoor Wi-Fi also made NFS very sad. That had to change: it should just work, even if one of the components is not working.
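The pile-up of hanging cron jobs is a classic failure mode, and the standard remedy is to run the capture under a lock and a timeout. A sketch of such a wrapper (script name, lock path, and timeout value are all hypothetical):

```shell
#!/bin/sh
# Hypothetical capture wrapper, invoked by cron instead of gphoto2 directly.
# flock -n: if the previous minute's capture still holds the lock, skip this
#           run instead of stacking another hung job on top of it.
# timeout 55: kill a capture that hangs, so the next minute starts clean.
LOCK=/tmp/timelapse.lock
flock -n "$LOCK" timeout 55 gphoto2 --capture-image-and-download \
  --filename "/mnt/photos/%Y%m%d-%H%M.jpg"
```

This doesn’t make the camera or the Wi-Fi reliable, but it bounds the damage: at worst one frame is lost, rather than the whole Pi drowning in stuck processes.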


As mentioned in the previous paragraphs, I was not aware of these problems. It was just (not) running, and that is not great for a project that tries to capture a changing landscape one minute at a time. The moment there’s a problem with the setup I want to know about it, and preferably it should recover automatically (for example, by power-cycling the camera).


Taking a photo is one thing, but there’s so much extra information that I was not aware of. Some of it is obvious, for example the ISO and exposure, which are stored in the Exif portion of the file. Other information I wanted to store: the outside temperature and relative humidity, how much time is spent on the camera, how reliable this process is, how long it takes to write the file to storage, and how much energy all of this consumes!
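To give an idea of the kind of measurement involved: timing the capture step only takes a couple of lines around the gPhoto2 call. A sketch, with the command and log path purely illustrative:

```shell
#!/bin/sh
# Sketch: record how long one capture takes and whether it succeeded,
# appending a line that a metrics pipeline could later pick up.
start=$(date +%s)
gphoto2 --capture-image-and-download --filename /mnt/photos/latest.jpg
status=$?
end=$(date +%s)
echo "capture_duration_seconds $((end - start)) status=$status" \
  >> /var/log/timelapse-metrics.log
```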

How to get there

In the blog posts that follow I will write down exactly how I got to a system that I almost never have to touch: how videos are automatically created daily, how the camera and controller are able to recover from almost any failure, and how data is stored and replicated to the other side of the world.

So if you’re interested in Kubernetes, Ceph (and its S3-compatible API, radosgw), Django, VictoriaMetrics (and metrics in general), Home Assistant, and much more (don’t forget the hardware), come back soon to see how all of these things are combined into a system that allows me to take reliable pictures that I plan to store forever!