Telecommuting Tips – Part 1

As many of you know, I have recently begun living the dream, or what is otherwise called “telecommuting”.  I have worked from home a day or two a week for previous employers, but never full time.  I have noticed a bit of a difference between periodically working from home and full-time telecommuting.  These articles will contain a collection of pitfalls, productivity tips, and general observations that I wish someone had shared with me (or that someone did share with me).

Have a clean/tidy workspace

One of the most important things for me has been having a clean workspace.  This was something I took for granted while working in an office that was kept clean by a cleaning staff.  When my desk gets messy, I find it hard to concentrate on the task at hand.  Here is a snapshot of my desk at what I would consider about 75% clean:

75% clean workspace
Getting over a cold/allergies, hence the cough drops.

To keep things clean, I try to spend 5 minutes of each work day tidying things up.  Also, as part of the weekly garbage pickup, I spend a few extra minutes collecting any garbage that may be in my office.

Work-life balance

Work-life balance is the idea that you spend appropriate amounts of time working and not working.  This one is particularly difficult for me, as my “play” time dovetails into what I do for work.  I enjoy programming in python.  I enjoy writing toy apps that consume APIs just to get exposure to new technologies.  I also enjoy gaming, which is less directly related to what my employer pays me for.  However, I like to set up servers/services for the games I play, which ends up flirting with what I get paid to do.  The point is, it’s hard for me to really draw a line between when I’m doing something for my employer and when I’m doing it for myself (they say if you get a job doing something you enjoy, you’ll never work a day in your life, right?).

One of the main reasons I chose to work from home was to get to spend more time with my family.  This throws a bit of a wrench into things as well.  On one hand, I enjoy the small interactions the kids provide throughout the day.  Boden will run into the room and show me something he is proud of (usually something like throwing a ball using a new technique).  At the same time, it can be hard doing so much context switching.  I suspect this is something I will always deal with since, as mentioned before, it’s part of the reason I chose to work from home.  Kids don’t always understand this boundary, though, and need some sort of reminder.  This is why I hung doors on my office, and during my work hours I hang up a sign the kids made me to give them a visual reminder that I’m working.  I can also tell them “daddy’s sign is up, you need to let me work,” and that is easier for them to understand than just “you need to leave daddy alone.”

Daddy’s not slacking, his code’s compiling

Reliable internet access

This one is difficult, as most cable companies have monopolies in their market areas and, honestly, DSL speeds really can’t compete with cable (cue the tears from the DSL fanboys).  For me, the options were Comcast residential service or Comcast business class.  Since my employer is willing to cover some of my internet costs, I chose the more expensive business class so that I have an assured 4-hour estimated time to resolution.  While a 4-hour outage would suck, it doesn’t completely negate all productivity for the day.

Along with the quicker response time, business class internet removes the 250GB monthly usage cap.  Since I spend a fair bit of my work time on things like VM images, it’s not difficult by any means for me to blow past the 250GB cap.  In July, I used about 375GB, and in August, about 450GB.  While Comcast didn’t cut off my access, they reserve the right to do so, which would probably not have made my employer happy.


The above tips are the first few steps I took towards ensuring I am set up to be productive, but they are by no means the only things I’ve done, and they may not even be important to others considering a telecommute position.  I’ll work on documenting other telecommuting topics in the near future, but if you have any suggestions/tips/tricks, please share them in the comment section below.




HDHomeRun Prime Deal

Newegg has a good deal on an HDHomeRun Prime (HDHR) at the moment, and it comes with a free wireless router as well.  I’ve been pretty happy with my HDHR and its assistance in cutting some cords.  I do still pay for limited basic cable, which is something like $20/mo.  Since I have the HDHR, however, I am not renting other set top boxes from my cable provider and I am able to create my own DVR using the HDHR and something like TVHeadend or MythTV-Backend.

I’ll do a more thorough post on my TV/entertainment setup in the future, but if you are looking to possibly save some money, the deal Newegg has on the HDHR Prime right now is going to be hard to beat.

Project Update: Pimometer


Found some time to work on pimometer this weekend and made some decent progress.  Our original goal included leveraging a custom-built django API, but upon further examination of our needs, we decided the MongoDB API was really all we needed.
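Since we landed on talking to MongoDB directly, here is a minimal sketch of what that can look like with pymongo.  The “pimometer” database, “readings” collection, and document fields are illustrative assumptions on my part, not the project’s actual schema:

```python
# Minimal sketch of writing a probe reading straight to MongoDB
# with pymongo.  The "pimometer" database, "readings" collection,
# and the document fields are hypothetical, not the real schema.
from datetime import datetime, timezone


def make_reading(sensor_id, temp_f):
    """Build the document stored for one probe sample."""
    return {
        "sensor": sensor_id,
        "temp_f": temp_f,
        "recorded_at": datetime.now(timezone.utc),
    }


def store_reading(reading, mongo_url="mongodb://localhost:27017"):
    """Insert one reading; needs pymongo and a running mongod."""
    from pymongo import MongoClient  # deferred so the sketch imports cleanly

    MongoClient(mongo_url).pimometer.readings.insert_one(reading)
```

With a database running, `store_reading(make_reading("pit-probe", 225.0))` is essentially the entire write path, which is why a custom django API felt like overkill.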




We now have a working WebUI (though it is a bit minimal at the moment), a simple daemon to run server side, and a couple of scripts to populate the database with data.  We still need to finish writing the probe driver (we’ve gone with a Yocto-Temperature board for the Raspberry Pi) so our code can actually pull data from the hardware probes, but that will be a very small effort (since the existing libraries are quite sufficient).


I’m pretty happy with our current development workflow (even though it isn’t working for Michael Beck at the moment).  If you have ever used Vagrant before, the workflow should be pretty familiar to you.  Essentially, we hack on code in the git clone and then do vagrant up/vagrant rsync.  After about 90 seconds for vagrant up (vagrant rsync is almost immediate), our code is running live in a VM managed by Vagrant.



Next on the hit list (in a very rough ordering):

What to do with a broken SGS3

My wife and I both have Google Nexus 5 phones (maybe I’ll write more about that decision later) and couldn’t be happier.  However, prior to our current setup, we had a Samsung Galaxy S3 (SGS3) we passed around for a while until the screen broke:

Kids are tough on phones…

Beyond the screen having cracks in it, everything works perfectly fine (even the touch sensing bits).  So, the question now is, what do I do with this hardware?  The current ideas floating around my head are:

  1. Set up a Kodi (formerly XBMC) server, replacing the full sized tower chassis currently in my living room
  2. Make it a little safer, lock it down, and give it to my daughter, who is going into kindergarten, so I can keep tabs on her
  3. Set it up as a workstation for my children to play with
  4. Somehow tie it into my pimometer project

Anyone else have any great ideas?

Python vs Ruby

TL;DR: I like python (mostly because I’m fairly proficient with it) but have a lot of respect for ruby.  However, the two languages are actually really similar at lower levels.

This might be a less typical “vs” article.  Really, I don’t enjoy language wars.  I believe each language is a tool that solves particular problems well.  There are also higher-level things to consider, like the value of consistency within a project: if a project has 95% of its code in <insert horribly offensive programming language>, then writing the remaining 5% in <insert programming language of choice> clashes with the rest of the project, making the code difficult to support.  So, here is my pros/cons list for ruby and python as of this date (technology changes fast, so these could be wrong by the time I’m done posting).


Python

Pros:

  • Style guide compliance (PEP 8) ensures similar coding styles, which makes code easier to read
  • Using white space for code blocks is pretty natural and easy for editors to assist with
  • Easy method for obtaining code shared with the community (pip)

Cons:

  • Massive/fundamental change is hard.  Python 3 is having a hard time getting a foothold.
  • Using white space for code blocks can make spotting the start/end of code blocks less obvious to the human eye
  • Having a Sys Admin background, I despise multiple package managers.

Ruby

Pros:

  • Python has unittest, but I find rspec to be more user friendly (possibly due to lack of experience).
  • Easy method for obtaining code shared with the community (gems)

Cons:

  • There really are no tools for programmatically checking style guide compliance.
  • Having a Sys Admin background, I despise multiple package managers.
  • Ruby on Rails: having supported Ruby-on-Rails, I know how horrible it is (was?).


As is pretty apparent, both languages really are pretty similar.  So similar, in fact, that you could almost take the bytecode from either interpreter and reconstruct the code in the opposite language.  Given the animosity I’ve seen in chat rooms, hallway conversations, and various comments on webpages, it is fitting to call this act of crossover unholy.
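To make the similarity a little more concrete, here is the same toy function written in Python, with a line-by-line Ruby equivalent in the comments (my own example, not from either language’s docs):

```python
# The same toy "word count" helper in Python, with the Ruby
# equivalent shown in the comments -- the structure maps almost 1:1.

def word_count(text):                           # def word_count(text)
    counts = {}                                 #   counts = Hash.new(0)
    for word in text.split():                   #   text.split.each do |word|
        counts[word] = counts.get(word, 0) + 1  #     counts[word] += 1
    return counts                               #   end
                                                #   counts
                                                # end

print(word_count("to be or not to be"))
```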

Obviously, there are a lot of things I did not cover (quantity of libraries available, quality of libraries, overall adoption of each language, career opportunities for each language, etc.).  The items above are the key things affecting my decision today to lean towards python instead of ruby, all else being equal.  That being said, I find myself thrown pretty deep into ruby, with projects like puppet and vagrant being at the center of my world these days.

Closing Thought

When all you have is a hammer, every problem looks like a nail.  But when you have a full arsenal of tools, you can get stuck in analysis paralysis deciding which tool to use to solve the problem.  I’d much rather have more tools in my toolbox than I know what to do with than lack the required tool when I need it.  Probably time to learn some more languages (like Go or C++).

Puppet git hooks

Since I have noticed an uptick in interest in my puppet-git-hooks, I thought I should dedicate some time to explaining myself (also, this is the first time anyone has ever written a book about anything that I’ve done).

The Goal

The highest level goal for these git hooks is to provide a programmatic mechanism for validating puppet code upon git commit/push.  A slightly lower level goal is to provide feedback to contributors to puppet projects and ensure that the code is in a good state and that style guides are being followed.  An even lower level goal is to use the same hooks for both client and server-side checks and wrap the logic around how git commit works vs how git receive works.  At the time of this writing, these hooks will test the following:

  1. Puppet manifest syntax
  2. Puppet template (erb) syntax
  3. Puppet manifest style guide compliance
  4. YAML (yaml, yml, eyaml, eyml) syntax
  5. Rspec-puppet tests

The Layout

If you don’t care about how the sausage is made, you can skip this section.  I will be detailing the directory and code layout and important workflow/codeflow points pertaining to these git hooks.


The pre-commit hook does what it sounds like: it’s the hook that runs on the client side during the “git commit” process, before your commit is actually recorded.  This allows the local commit to be denied before it is in history, where it can be a little cumbersome to modify (at best).  Currently, the only asymmetric test between the server-side and client-side hooks is the rspec tests (see the rspec-puppet documentation for more).

The git client looks for an executable file called “pre-commit” located in <git dir>/.git/hooks/.  If found, the git client will call this file before creating the commit (see the githooks documentation for details).  Once the details are sorted out, this file iterates over the changed files and executes the shell scripts located in the commit-hooks directory.  All the tools required for the scripts in commit-hooks must be present on the client in order for these hooks to be successful, which means you might need to apt-get install a few packages.
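For illustration only, here is a toy Python rendition of that client-side flow.  The real puppet-git-hooks are shell scripts; the git and puppet commands shown are standard, but everything else here is a sketch:

```python
# Toy Python rendition of the client-side pre-commit flow (the real
# puppet-git-hooks are shell scripts; this is just illustrative).
import subprocess

# Extensions the puppet checks care about (per the list above)
CHECKED_EXTENSIONS = (".pp", ".erb", ".yaml", ".yml", ".eyaml", ".eyml")


def files_to_check(paths):
    """Keep only the changed files worth validating."""
    return [p for p in paths if p.endswith(CHECKED_EXTENSIONS)]


def staged_files():
    """List the files staged for this commit (added/copied/modified)."""
    out = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"]
    )
    return out.decode().splitlines()


def run_checks():
    """Return a count of files that failed validation."""
    failures = 0
    for path in files_to_check(staged_files()):
        if path.endswith(".pp"):
            # `puppet parser validate` exits non-zero on a syntax error
            failures += subprocess.call(["puppet", "parser", "validate", path]) != 0
    return failures
```

Saved as <git dir>/.git/hooks/pre-commit with an entry point like `sys.exit(1 if run_checks() else 0)`, the non-zero exit status is what makes git abort the commit.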


This hook is run on the server side.  There is a fair bit of communication between the client and the server: in one of the steps, the client hands the changeset to the server, and the server must then verify acceptance of that changeset.  The pre-receive hook runs after the client hands the server its changeset and before the server tells the client that it did indeed accept the changeset.  This is useful for centrally enforcing standards (it also assumes you are using a central/canonical source for your git workflow).

Since these are all run server-side, all the tools required for the scripts in commit-hooks must be present on the server.


This is the meat and potatoes of the actual puppet-specific stuff.  There is logic in the pre-receive and pre-commit hooks to determine whether a hook should be called (mostly by checking the file extension) and to prepare/format the data in the manner these scripts expect.  Once everything is ready, pre-receive or pre-commit will call the scripts located in this directory, usually with a single parameter: the filesystem location of the file the script is testing (for example: /tmp/foo.pp).
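A sketch of that extension-based dispatch might look like the following; note the check-script names here are made up for illustration and are not the actual filenames in the commit-hooks directory:

```python
# Sketch of the extension-based dispatch described above.  The
# check-script names are hypothetical; the real scripts live in
# the commit-hooks directory under their own names.
import os

CHECKS_BY_EXTENSION = {
    ".pp": ["puppet_manifest_syntax_check", "puppet_lint_check"],
    ".erb": ["erb_template_syntax_check"],
    ".yaml": ["yaml_syntax_check"],
    ".yml": ["yaml_syntax_check"],
}


def checks_for(path):
    """Return the check scripts that apply to one changed file."""
    _, ext = os.path.splitext(path)
    return CHECKS_BY_EXTENSION.get(ext, [])
```

pre-commit or pre-receive would then invoke each returned script with the file’s location (for example: /tmp/foo.pp) as its single argument.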


Git hooks are hard.  Which is why you should reuse work that someone else has already done.  And if these hooks don’t work for you, feel free to provide a patch (if you are comfortable doing so) or even just open an issue with a bug or feature request.

Latest Project: pimometer

The Birth of pimometer

I don’t have too much to report on this yet, but I’ve been working on a Raspberry Pi project to monitor the status of a long BBQ or smoke out.  It’s under the current working title of pimometer.  Various design points are still being talked through, but I think we have the basics mostly understood or actually coded.

One night after a Dota 2 gaming session, DryGravyTrain (aka Gravy) and I started up a Scoot & Doodle session in our Google Hangout to talk through some of the design points.  It kinda went off into the weeds after about 5 minutes.  Here was the product of our discussion:


You needn’t worry about all the details or subtleties.  It was late.  We were tired.  It just barely makes sense to even me.

A few more details

Now that we have the silliness out of the way, here are some of the current features we’d like to implement (or design goals we intend to hit):

  • Work entirely in an offline mode
    • If an internet connection is unavailable, pimometer should still be entirely functional
    • If features do require an internet connection, they should recover gracefully after an outage
    • Self serving web interface
  • Mobile and desktop support
    • We’re currently working on a web interface that works on both mobile and desktop
    • Would like to create a native Android app
  • Cloud enabled
    • Currently working on a cloud based service for data storage/analysis
    • Considering leveraging other cloud services (Weather Underground, AllRecipes, etc.)
  • Flexible data
    • Allow for as much or as little data as users may want to include (spices, altitude, additional sensors, etc.)
  • Alert when the temperature is too high/low
    • Allow the user to set a high/low temperature to be alerted at (via a push service, hopefully)
    • Support changing the temperature during the smoking/bbq session
  • Data analysis
    • Comparison of previous cooking sessions
    • Comparison to other users’ cooking sessions
    • Time/date/weather/proteins/etc.
  • DIY or assembled kit
    • Offer an assembled kit for a good price (sub $100, hopefully)
    • Provide instructions for DIYers
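As a rough sketch of the high/low alert goal (including changing the range mid-session), something like this minimal class could sit behind the push service.  The class name, the Fahrenheit units, and the return values are all assumptions; the real service would push a notification rather than return a string:

```python
# Minimal sketch of the high/low temperature alerting goal.  The
# class name, Fahrenheit units, and return values are assumptions;
# a real service would push a notification instead of returning
# a string.
class TempAlert:
    def __init__(self, low_f, high_f):
        self.low_f = low_f
        self.high_f = high_f

    def set_range(self, low_f, high_f):
        """Support changing the target range mid-session."""
        self.low_f = low_f
        self.high_f = high_f

    def check(self, temp_f):
        """Classify a single probe reading against the range."""
        if temp_f < self.low_f:
            return "too-low"
        if temp_f > self.high_f:
            return "too-high"
        return "ok"
```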

I’m sure we’ll whittle this list down for a version 1.0 and make milestones for additional versions.


Gravy and I plan to have a mini hackathon soon.  We expect to have a working product in a couple of weeks.  The most up-to-date information will be on the GitHub page.

Why I Hate: Java – Library Management

There are many things I hate (or dislike, or currently have a bone to pick, etc.).  I thought that Java would be a good first target for a post.

Maybe my lack of any formal training is hindering me here, but I find trying to leverage public libraries (take google-guava, for example) in Java to be the worst.  From my experience, you have to actually pull in the code for the library and ship it with your code.  I suspect this is an artifact of the whole JVM concept, but that isn’t a valid excuse, in my opinion.

There are a few issues that arise from this.  First and foremost, your application bloats like crazy when you only want to make use of a single call in a library.  You are also now responsible for patching your software if a dependent library has a security hole.  And when developing, you need to actually copy down all dependent libraries in order to have a working copy of the code (or use something like Maven or Ant).

Either there is something I’m missing, or this system is horribly broken (think dynamic vs. statically compiled binaries).  I thought we were mostly done with statically linked libraries except for extreme exceptions (like embedded systems).