I got my first developer job almost by accident. I wasn't looking for a dev job at the time, but some tools I had built as a side project at work got some traction, and the same company offered me a position to develop full time.
That was pretty cool! But looking back, I did things that would send shivers down the spines of security professionals, devops engineers and other reasonable people. Luckily it taught me some valuable lessons, so let me share some of what I did, in case you find it useful.
FIRST PART
- Testing (or lack thereof)
- Deploy like it's 1997
- Monitoring what matters
Testing (or lack thereof)
When I started programming I was into Chrome Extensions. I loved the simplicity of sticking bits of DOM manipulation into a package and seeing them come to life on page load. The first extension I built professionally interacted with an internal company tool, and it got pretty popular, so the team using it started requesting a lot of very diverse functionality.
For me, every new feature was as simple as building the right JavaScript in the browser's developer console, sticking the finished bit somewhere inside the extension script and calling it a day. You can imagine that after many feature requests the extension grew into a messy monster. Calling it spaghetti code doesn't do it justice; Copy-Paste-Oriented Development would be a better name.
After I left the team, the new maintainers struggled with this project. What had previously been managed entirely inside my brain (not the best scenario) was now a meaningless pile of files full of undocumented functions whose names kinda made sense only to some people. Because of that, new feature requests on that extension are now on hold.
Knowing with 100% certainty that your code works is not enough, because even on the most solitary project you can do, some day you will leave the company and the new people won't have a clue why it should or shouldn't work. Besides, can you guarantee that your brain will walk through every possible scenario the feature has to handle?
A test won't slow you down. What will slow you down is wasting a day of work figuring out why your new feature is breaking the previous one. Testing is not some super special technique you must learn separately from coding. Testing IS part of coding.
Helpful links:
- For JavaScript: I don't do a lot of JS these days, so my recommendation might be outdated, but last time I tried I had an easy time with Mocha.
- For Python: even though Python has unittest bundled in, I like to use Pytest.
- For Golang: for unit testing, you are more than covered with the standard Go testing library. For cases where you want BDD testing, you can look into Ginkgo.
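To show how little ceremony this takes, here is a minimal Pytest sketch. The module and function are made up for illustration; only the pytest bits are real:

```python
# discounts.py -- a hypothetical module, just for illustration
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
```

```python
# test_discounts.py -- run with `pytest` from the project root
import pytest
from discounts import apply_discount

def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    # the next feature that breaks this contract will fail loudly here
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

Ten lines of test, and the next maintainer no longer needs the contents of your brain to know what the function promises.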
(If I haven't convinced you and you are still not testing your code, at the very least write detailed documentation!)
Deploy like it's 1997
Apparently nowadays it's very cool to hit a button and engage a complex pipeline that will test your code, package it and deploy it. Even your infrastructure lives in code so it can be replicated.
Back when I started working with this it wasn't 1997, but in my mind it was, because my deployments consisted of connecting to a remote server and copy-pasting some files from my local computer. I know that FTP'ing stuff was the norm a while ago, but on top of that I used to edit the remote copy to add bits of code (which I had not saved locally...), had credential files lying around everywhere, and for a long time (luckily fixed eventually) the whole Python ecosystem on that machine ran off a single global Python environment without virtual envs, so services needing different dependency versions were out of luck. There was a bit of everything: a couple of web apps, some services, etc.
If your code is good enough to be shipped, ship it and don't touch it. Docker images are ideal for preserving an untouched environment. If you want to tinker with something because you forgot to fix a typo, release it again. Ideally your release should be automated so it's repeatable, but at the very least, release from your local machine; don't do remote edits. Editing your production copy of something is a really bad idea, not because the typo fix is dangerous in itself, but because it sets a precedent that can escalate to worse cases.
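As a sketch of what "ship it and don't touch it" looks like, here is a minimal Dockerfile for a Python service. The file names, base image version and everything else here are assumptions for illustration:

```dockerfile
# Minimal sketch: freeze a Python service into an immutable image
# (app.py and requirements.txt are hypothetical)
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code; from here on, the artifact is frozen
COPY . .

CMD ["python", "app.py"]
```

Build and tag it with something like `docker build -t myservice:1.0.1 .`; when you spot that typo later, bump the tag and release a new image instead of poking at the running copy.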
Monitoring what matters
So when you start having lots of processes up and running, some are bound to fail. What will you do when something goes wrong? Ideally you would have a nice setup to monitor what matters, like the one I describe in this article. Depending on where you are deploying your tools, you could pair it with a nice logging tool. I like Google Operations (formerly known as Stackdriver). Every cloud has its own, and there are some cloud-agnostic tools out there too.
So what did I do when my processes started failing for the first time? Well, I had a genius technique that consisted of re-running the failed step and staring at the console to observe the crash. Naturally nothing came up, because I was not logging anything. I couldn't help but start with print(suspiciousVariable). It took me a while to upgrade to a logging library.
You need to do that. With a logging library you control a more flexible log format and can pick multiple outputs if needed. Please, do not print stuff; log it instead. And most importantly, log the relevant tracebacks and the items that will help you diagnose a problem. Don't be like me that time when, after a failure, all I logged was the text "it failed". Very helpful. We could also talk about using a debugger, but that's for a future article.
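Here is a minimal sketch with Python's standard logging module. The service name and the failing function are made up for illustration:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("order_sync")  # hypothetical service name

def charge_customer(order_id: str) -> None:
    # stand-in for a real call that can fail
    raise ConnectionError("payment gateway timed out")

def process_order(order_id: str) -> None:
    logger.info("processing order %s", order_id)
    try:
        charge_customer(order_id)
    except Exception:
        # logger.exception() logs at ERROR level and appends the full
        # traceback, so the log says what failed, for which order, and where
        logger.exception("failed to process order %s", order_id)
        raise
```

That logger.exception call is the difference between a log that says "it failed" and a log that tells you where to start digging.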
If you found this interesting, come back for the second part, where I will talk about:
- Bad dependencies
- GitHub is not a storage room
- Project structure
I'll also share some advice on something you definitely don't have to learn.