(Re)Learning Elixir

I’m writing a small app in Elixir and Phoenix to better organize data from the Texas Parks and Wildlife Draw Hunt system. In theory, this will let me build a calendar view to see when hunts happen in a more systematic manner, as well as track the hunts I’ve applied for along with the results.

Some day, I’m sure I’ll work on a personal app that doesn’t involve HTML scraping, but today isn’t that day. I probably could have built a small UI to input the data manually and been done in about the same amount of time, but I’m learning a ton about Elixir’s Enum module along with Floki’s API. So it’s mostly a win-win, though slow going.

Today’s lessons revolved around two main topics: how to pass a module’s function to the Enum functions that require one, and how Mix tasks behave relative to the rest of the application. I’m writing these down here in hopes of remembering them better in the future.

For the Enum and module functions, I’d seen this in a video course from Pragmatic Programmers. You have to prefix your module function with the capture operator (&) and include the arity of the function you are calling, as seen in the 3rd line below. If you try to pass just the function name, the compiler will complain that the function doesn’t exist.

  def scrape() do
    hunts = get_hunts("ADE")
    Enum.each(hunts, &parse_baglimit/1)
  end

The second issue was the behavior of Mix as it relates to the entire application. I’m writing a custom Mix task to scrape the hunt data, and what was working perfectly well in my iex console failed miserably in Mix. That’s because a Mix task does not start the application the way iex -S mix does, so dependencies (in this case hackney, the HTTP client HTTPoison depends on) will not be started. Adding a line to ensure the app is started makes this work, as in line 2 below.

  def run(_) do
    Mix.Task.run("app.start")
    scrape()
  end


Stackoverflow and mix

Stackoverflow and function passing

Software Gambles

Bear with me, this isn’t going to sound like a software essay for a little bit. But trust me, I’ll get there.

Ask a random sampling of people who know me at a level somewhere north of “mere acquaintance” what one word they would use to describe me, and I’d bet at least 30% of them would say “gambler”. I’m not going to get into the details of why that might be the case here on the public internet, but the moniker might be warranted based on certain extracurricular activities. On the surface this seems weird, because I’m not much of a chance-seeking, thrill-riding enthusiast in regular life. But I do love action when it comes to football games, casinos and golf matches. Early in my gambling career there were lots of losses and not many wins. Gambling, like any other skill, involves some experiential instruction that can’t be readily gained from reading about it on the internet. But there’s a dirty little secret to gambling that most people on the outside looking in don’t understand: good gamblers rarely take risks on the unknown, or on combinations of bets, because they know that as a general rule you have a much higher chance of going broke when you do. The fat tail of gambling failure is similar to the old adage about the stock market: it can stay irrational a lot longer than you can stay solvent. Betting 10 games on Sunday is a fast way to go broke unless you have very real, very hard empirical data that says you can win 56% of the time. (That’s all it takes to be a very successful sports bettor, which is a shocking fact to many people, and one reason why you should never, ever trust someone selling picks who claims greater than 57% winners. If they could really pick ’em at that clip, they wouldn’t be selling picks.)

Let’s say you really can pick football (or basketball or whatever) winners at 55%. Let’s say you have a $2,000 bankroll and you bet the recommended 5% of your bankroll on any given bet. If you bet one game on Sunday, you have a 55% chance of winning $100 and a 45% chance of losing $110 (the extra $10 is the service charge the book extracts; that’s another post all to itself), all else being equal, for an expected profit of $5.50. Sweet, we’re going to be rich! Seems like we should bet as many games as we can then, right? Well, no. For one thing, chances are you don’t actually pick at 55%. You pick 55% on games you fully understand and have studied; others, you might not have a clue about. Also, even if the distribution is well behaved AND you really do pick every game at 55%, there is a chance you will lose every single game over the course of 2 weeks and go broke. That chance is astronomically low, but it exists. And that’s why most professional gamblers don’t bet lots and lots of games every weekend. Limit your risk by taking singular, calculated gambles that you control for.
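Since this is nominally a software blog, the arithmetic above sanity-checks in a few lines of Ruby. The numbers are the ones from this post; the straight-losses figure assumes you keep betting a flat $110 until the $2,000 is gone, which is a simplification.

```ruby
win_prob = 0.55   # how often you actually pick winners
win_amt  = 100.0  # what a win pays
loss_amt = 110.0  # what a loss costs (the extra $10 is the book's cut)

# Expected profit of a single bet
expected_profit = win_prob * win_amt - (1 - win_prob) * loss_amt
puts format("expected profit per bet: $%.2f", expected_profit)  # $5.50

# Risk of ruin: chance of losing enough straight bets to bust $2,000
losses_to_bust = (2000 / loss_amt).floor
ruin_prob = (1 - win_prob)**losses_to_bust
puts format("chance of %d straight losses: %.7f", losses_to_bust, ruin_prob)
```

Astronomically low, like I said, but not zero.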

What does this have to do with software? This essay on building stable systems contains a treasure trove of important ideas for developing good software, but the one that stood out to me was this paragraph:

A project usually have a single gamble only. Doing something you’ve never done before or has high risk/reward is a gamble. Picking a new programming language is a gamble. Using a new framework is a gamble. Using some new way to deploy the application is a gamble. Control for risk by knowing where you have gambled and what is the stable part of the software. Be prepared to re-roll (mulligan) should the gamble come out unfavorably.

Note the intersection of ideas between actual gambling and gambling on your software projects: limit your risk by limiting your gambles. At work, I’m currently involved in a high-priority project that has the potential to substantially shift the types of products we can offer our customers. It’s actually been on the books for over two years with fits and starts, but it finally has the political backing to get done. To me, a high-priority, high-visibility project like this is in and of itself a gamble. On top of that, this particular project differs from our current setup in a few important ways, which increases the risk. That alone should be enough to say: “let’s not introduce any more risk into the project.” Instead, for a variety of reasons both political and technical in nature, we are attempting to deliver this project using a new communication framework (RabbitMQ), integrating a new database (Couchbase), monitoring it with a new stack (ELK), deploying it with a new tool (Octopus Deploy) and possibly utilizing an offshore team in Russia. As exciting as all that sounds technically (except for that last part, which gives me nightmares), it seems to me a project fraught with risk. If our chance of success doing just one of those things is somewhere in the realm of 80%, the chance of getting all five right is around 0.8^5, or 33%. And that’s our best-case scenario, where each event is independent (the events could in theory be related and work in each other’s favor, see Bayes’ Theorem, but I seriously doubt implementing RabbitMQ is going to drastically increase the success rate of a Couchbase implementation). Instead of limiting our risk, this project is taking on scope like the Lusitania took on water.

None of this means the project will be a failure. But it likely means that many of the gambles added to the project will result in poor implementations that hurt our chances of success in the medium to long term. This is not the way to build a stable system. So how do we manage the risk? One way is to push back on all the technological scope. This is possible but difficult in an environment where there are competing interests above and beyond the success of the project. Delivering X is great for the company, but delivering X with Y new technologies is better for N number of teams. Saying no means some teams have their darlings pushed off into the future, if not killed outright, and my team doesn’t control all of those decisions. Another way might be to utilize one technology to ease the risk of another (for instance, by doing database writes via RabbitMQ, we could write to both the new Couchbase database and the old one to ensure success). This is something the team does control and that we will probably implement. Yet another way is to leverage the expertise of other teams and people for particular pieces (DevOps controls Octopus Deploy). But each of these is just a Band-Aid on the larger wound of too much risk in a single project.

The right way to have a successful project and move towards a stable system is to bite off only as much risk as you can hedge. Each of the tenets in that essay can be used to build a stable system but it involves engineering discipline and political understanding to get there. If you watched the Republican debate tonight, you know political understanding is a dying characteristic in our society. In the interim, the best I can do is protect the team from the risks to the best of my ability and let strong engineering rise to the top. And hope that the next big bet I make only includes a single gamble. I may or may not like the Steelers at 10 to 1 to win the Super Bowl next year. 🙂

Ruby Arrays of Objects and Unions

I’m working through the Advent of Code and needed to union two Ruby arrays of objects together based on some properties on said objects. I wasn’t having much luck getting it to work and my Google fu was failing but I finally figured out the issue and want to post it here in case someone else ever manages to search using the right terms.

The key here is that you can’t just override eql?; you also have to override the hash method. So for a Position class, it might look like this:

class Position
  attr_reader :x, :y

  def hash
    [@x, @y].hash
  end

  def initialize(coord = [])
    if coord == []
      @x = 0
      @y = 0
    else
      @x = coord[0]
      @y = coord[1]
    end
  end

  def eql?(other_object)
    @x == other_object.x && @y == other_object.y
  end
end
This would allow the union of two arrays of Position objects to only include Positions that are unique by X and Y coordinates.
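To see it work end to end, here’s the class condensed with a quick union. This was the piece that had me stumped: Array#| consults hash and eql? to decide which elements are duplicates.

```ruby
class Position
  attr_reader :x, :y

  def initialize(coord = [])
    @x, @y = coord == [] ? [0, 0] : coord
  end

  def hash
    [@x, @y].hash
  end

  def eql?(other)
    @x == other.x && @y == other.y
  end
end

a = [Position.new([0, 0]), Position.new([1, 1])]
b = [Position.new([1, 1]), Position.new([2, 2])]

union = a | b
puts union.size  # 3, because the two (1, 1) positions count as one
```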

Setting Up My Development Machine

Two weeks ago, my laptop started crashing randomly. Without going into all the gory details, I took it to the Apple store and they wiped the drive trying to update the OS. It turned out to be a bad stick of RAM, so now I’m in the position of rebuilding my dev machine. This is probably a good thing, because this is the same machine from 2010 that I started my Ruby-Javascript-Clojure journey on, and that was an evolutionary process with lots of genetic dead ends. This post is just a frame of reference for the future, when I need to do this again on a brand new machine. Almost every one of my regular readers can go right back to whatever they were doing before they landed here.

1. Install the latest OS, currently El Capitan
2. Install Xcode
3. Install Homebrew
4. Install rbenv: brew install rbenv
5. Install MacVim: brew install macvim
6. Install Rails: gem install rails
7. Install PostgreSQL: brew install postgresql
8. Install the Heroku Toolbelt and the Xcode command line tools
9. Install The Ultimate Vim Distribution
10. Drink more coffee

At this point, I’m able to work on my main project in my main toolset, which is Rails. I can deploy to Heroku, run rake tasks and not dread all the manual work I had to do while my laptop was dead. There are still a bunch of things to get back up and running full time (install Clojure, bring all my code back from backup, install Scrivener, write a long blog post about checking the easy things like RAM first when your computer starts to wig out, etc, etc). But thanks to Homebrew, setting up a dev machine in 2015 is infinitely easier than it was in 2010. Not to mention, I halfway know what I’m doing now.


On Fences

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

G.K. Chesterton

There is a tendency in all of us to tear down fences built by others for reasons we do not immediately understand. It takes real effort to put ourselves in others’ shoes and divine their intentions. David Foster Wallace called it our default setting, a natural instinct toward self-centeredness. Instead of understanding someone else’s reasons, we prefer to assume the worst, or the uselessness, of people, the things they have built, the situations they are in. When someone cuts us off on the road, we fume and yell and assume he is an asshole. Perhaps he is. But perhaps he is late for a job interview because his kid had the flu and he has spent all morning at the urgent care office. Perhaps his sister is at the hospital having her first child. Perhaps a million other things. It is so much easier to say “you asshole” because we operate in a default setting of self-centeredness. Understanding others’ reasons and situations requires effort and empathy, an emotion on a long, slow, steady decline in our world of connected disconnectedness.

This happens in my line of work where someone will approach a piece of software and think “This is stupid and makes no sense. I will clear it away and build something else.” This is an expression of the same self-centeredness. It is easier to assume our omniscience than it is to see both (or ten!) possible sides of a problem. Instead of asking “Why is this fence here?”, we refuse to put in the work required to achieve understanding. And yet, there are always reasons why something was done the way it was. True, they may be as simple as “we didn’t have enough time to do it right.” But there were reasons. We ignore them at our peril because if those reasons still exist, we will tear the fence down and rebuild it exactly as it was, stone upon mistaken stone, until we have recreated the problem different in author only.

In software, this results in a constant reinventing of the wheel and an inability to maintain that which was created. A car has thousands of moving parts and requires regular maintenance to continue operating in decent shape. This is a fact no one will dispute. Few people in the software industry are so enlightened, often believing that maintenance is lost dollars thrown away at the expense of creating something new. But just like with cars, maintenance dollars spent now prevent massive breakdowns or entire replacements in the future.

Part of the problem is the infancy of our industry: there are no clear guidelines on when we should change the oil and rotate the tires in our applications. Part of it is that unlike a car, where the tires are in the same place and changing the oil is a straightforward operation, there is no scheduled maintenance manual for our software. Maintenance is fraught with the danger of breaking some unknown piece buried off in a rat’s warren of complexity. Imagine if one out of five times you changed the oil in your car, the brakes stopped working. Or maybe it caused the heater to always be on. You would acquire a certain hesitancy towards regular car maintenance after the time it caused you to crash into a tree. Yet this is exactly the situation most software is in today. It is a live, moving, functioning system that needs maintenance just as badly as your car does, and yet changing pieces of most software systems may cause them to crash into a tree immediately upon leaving the garage.

Hearing this, you might wonder why it’s called software engineering. You wouldn’t call someone a civil engineer if they built a bridge that had to be replaced in five years and couldn’t hold trucks over 2 tons after two. There are places truly doing software engineering: Google, Facebook, Amazon. But a huge chunk of the software out in the wild today is about as close to engineering as the Pinewood Derby car you built in Boy Scouts. Because software is rarely engineered, it is rarely maintained. A bridge gets built and then over the years is examined by independent authorities, is resurfaced, is kept up, but rarely added onto. Few bridges ever get a second deck or a helicopter pad or a brewery. A piece of software may have any of these metaphorical things and more bolted onto it over the course of its lifetime. What we do is much closer to art, and the maintenance of our systems is much closer to restoration than engineering. Restoring a painting is painstaking, delicate work, completely unlike changing the oil in a car. You need a restorer’s touch to maintain most software systems, along with a strong fear of failure.

The reasons for this are legion. Your average everyday software developer is qualitatively different from your average everyday Google software engineer. This isn’t a slight, just a fact of the bell curve. Also, most business software is a living, breathing, evolving thing, the opposite of what a bridge or a dam or an electrical circuit is. There are only about 3 ways to interact with a bridge and one of them involves removing yourself from the gene pool. When we engineer a bridge, the inputs into the system are known and finite. With software, the inputs are typically unknown and approaching infinity. This isn’t just at the user level but also at the design and requirements level. Things change all the time during development and just when you think you have a firm grip on what this particular piece of software is supposed to do, you’re asked to make it peel a banana or change a diaper. And suddenly you have a mess on your hands. When it gets pushed out to users, it only gets worse. A piece of software is constantly evolving and like our DNA, that evolution results in duplication, left over junk and vestigial appendices. It’s also why that piece of code you don’t think actually does anything never gets cut off. We don’t delete possibly unused code any more than we do elective appendix surgeries.

Which brings us back to that fence. The next time you run into a piece of code that you don’t understand or doesn’t make sense or you want to call stupid, remember that people with above average intelligence wrote it. They probably didn’t want it to be a pile of crap. They probably genuinely thought it was going to be fantastic because if they are nothing else, developers are optimists. But through forces of nature and evolution and shitty requirements, the perfection they were hoping for turned into something else. Perhaps it was intentional and you don’t understand the intention yet. Perhaps it was a Friday evening and they were fixing a production bug. Or perhaps they just weren’t very prepared. But give them the benefit of the doubt before you take a sledgehammer to their fence. Chances are that at some point someone will look at a fence you built and wonder how you could have been so stupid. Instead go away and think for a bit. Only when you can come back and understand the fence may you possibly tear it down.

Upgrading A Reasonably Big Ruby Site Part 1 (of at least 1)

I’m in the process of upgrading The Sports Pool Hub to Rails 4.2, loosely following this advice. As part of this, I’m upgrading all the gems in my bundle file. RSpec has taken the most time, as I apparently haven’t done much with it in quite a while and large breaking changes in syntax have occurred, specifically around matchers like have_selector. My controller tests have a lot of view testing in them, which is probably a big smell. When I built the site, I followed Michael Hartl’s tutorial, and he was testing markup in controller tests at the time, so I fell into that habit.

When I first tried the upgrade last year, Guard told me I had 327 broken tests out of 541. This caused me to check everything into a branch with a commit message of “this is never going to happen”. Six months went by and I quite happily didn’t worry too much about it. However, somewhere along the way I remembered that this site actually has potential, if for no other reason than to increase my ability to create something cool. Also, I built the Cry Havoc Theater site in the latest Rails and realized that a lot of the Rails world had passed me by on this one. So with only baseball to amuse me, I decided to try once again to upgrade.

As it turns out, those tests are largely easy to fix and revolve around two main types. The first is that some syntax had changed in the matchers as mentioned earlier. Changing a test like this:

[ruby]response.should have_selector("td", :content => "Survivor")[/ruby]

to code like this:

[ruby]expect(response.body).to have_content("Survivor")[/ruby]

is straightforward if tedious. Note that because of my occasional OCD, I also migrated from the old “should” syntax to the newer and apparently more hip “expect” syntax throughout the test suite. This caused my bourbon intake to increase slightly but not noticeably. I also upgraded all places that were looking for generic inputs like links and fields using “have_selector” to the more specific matcher “have_link” or “have_field”. This cleaned up the code considerably.

The other major type of test that changed was the kind that verified records were being saved to the database using the old lambda-do-end syntax. They looked like this:

[ruby]lambda do
  post :add_pool, :id => @site.id, :include_weekly_jackpot => "1",
       :current_week => 1, :current_season => '2011-2012', :weekly_jackpot_amount => "1",
       :pool => {:type => 'PickemPool'}
end.should change(Jackpot, :count).by(1)[/ruby]

These tests weren’t broken exactly, but if you’ve been reading along carefully, you know I have OCD issues when it comes to things like the deprecation warnings this test throws. So I wanted to get everything nice and clean. I had a little trouble tracking down what to do with these tests, short of deleting them out of desperation. Somewhere, though, I stumbled onto the matching expect syntax:

[ruby]it "adds weekly jackpot to the table" do
  expect {
    post :add_pool, :id => @site.id, :include_weekly_jackpot => "1",
         :current_week => 1, :current_season => '2011-2012', :weekly_jackpot_amount => "1",
         :pool => {:type => 'PickemPool'}
  }.to change(Jackpot, :count).by(1)
end[/ruby]

Ah, much nicer.

This is where things currently stand. My site is woefully short on functional tests so I may write a few of those before I do the final code commit but so far, things haven’t been too bad.

Google Apps Email Aliases

This is mostly a reminder for myself in six months when I need to do this again. Mara has an email on the cryhavoctheater.org domain that is managed in Google Apps. I wrote about setting it up with Amazon Route 53 previously. Today, she wanted to also have info@cryhavoctheater.org set up. It turns out this is as easy as creating an alias under her user account in Google Apps. I assume she can manage as many of those as necessary, and it’s way cheaper than creating a new account every time.

In other news, I’ve added a subscribe functionality to this site so that if you don’t want to check in every six months to see what I’ve written, you can sign up to get email notifications. I promise to never do anything with your email other than send you those notifications. I have a post in mind comparing Uber to Walmart so you should sign up now so you don’t miss it.

Hosting A Site At Heroku With Email at Site5 (or any other mail provider probably)

My wife’s non-profit theater’s website, Cry Havoc Theater, was created by moi after we had a temporary dalliance with WordPress. We bought a really nice template but couldn’t figure out how to make it work well with her logos. So I built a site one Sunday using Rails and Bootstrap. I hosted it on Heroku using Amazon Route 53 for DNS, which is my standard now that Zerigo has started charging a lot more money. This costs me about $1.11 a month per site.

The kicker to this story is that I had never bothered to set up email for The Sports Pool Hub because I just use a non-branded Gmail account (and no one has ever signed up, so it’s a moot point anyway). Because we’d originally created and hosted the site at Site5 (which I wholeheartedly recommend, by the way, for any WordPress hosting you might want to do), I had originally set up her webmail there. When she tried to log in the other day, after I’d moved the site to Heroku, that clearly didn’t work anymore.

After spending a little time googling and thinking about the solution, I learned about MX records and figured I could still host the mail at Site5 while the site was hosted at Heroku. Unfortunately, there was precious little documentation on how to do that. The fantastic support team at Site5 was quick to respond to my request, and after 10 minutes of configuration I had mail back up and running at Site5. This is a reminder/tutorial for anyone else out there wanting to do the same thing. You will need the mail subdomain from your host (ours was mail.cryhavoctheater.org but yours may be different). You will also need the IP address for the domain.

In your AWS Dashboard, go to Route 53. Click on Hosted Zones and then go to the record set for the zone/domain that you want to route email for. Create an A record for the mail subdomain with a value of the IP address you received from your host. Site5 told me to set the TTL to 1 hour, so feel free to choose that. Save the record.

Then create an MX record with the same Name as the A record you just created. I made the TTL the same as the A record’s and set the Value to “10 mail.cryhavoctheater.org”. The number sets the priority the routing goes through. With only 1 option, 1 would have been fine, but most of the examples I’d seen chose 10 for later flexibility. Save the record set.
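Spelled out in zone-file notation, the two records above amount to something like this (the A-record IP is a placeholder; use the address your mail host gave you):

```
mail.cryhavoctheater.org.  3600  IN  A   203.0.113.10
mail.cryhavoctheater.org.  3600  IN  MX  10 mail.cryhavoctheater.org.
```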

Then at Site5 (or wherever the email hosting is happening), in SiteAdmin, you need to edit the MX record for the domain in question. It was originally set up as the root domain, cryhavoctheater.org. It needs to be whatever the mail subdomain was from steps 1 and 2 above, in this case mail.cryhavoctheater.org. After propagation, which in our case was immediate, pinging your mail subdomain should return the IP address provided by your mail provider. You can then go to /webmail and log in.

The hardest part as with anything new is just figuring out the details. The implementation was pretty straightforward.

Update Notes from RVM

I figure these might come in handy some day, and without saving them here I’d have no idea how to find them.

In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

Upgrade Notes:

* WARNING: You have '~/.profile' file, you might want to load it,
to do that add the following line to '/Users/osiris43/.bash_profile':

source ~/.profile

* Zsh 4.3.15 is buggy, be careful with it, it can break RVM, especially multiuser installations,
You should consider downgrading Zsh to 4.3.12 which has proven to work more reliable with RVM.
* RVM comes with a set of default gems including 'bundler', 'rake', 'rubygems-bundler' and 'rvm' gems;
if you do not wish to get these gems, install RVM with this flag: --without-gems="rvm rubygems-bundler"
this option is remembered, it's enough to use it once.

* RVM will try to automatically use available package manager, might require `sudo`,
read more about it in `rvm help autolibs`

* If you encounter any issues with a ruby 'X' your best bet is to:
rvm get head && rvm reinstall X --debug

* RVM will run ‘rvm requirements’ by default, to disable run:
echo rvm_autolibs_flag=0 >> ~/.rvmrc

* RVM 1.20.12 removes the automated --progress-bar from curl options,
if you liked this then you can restore this behavior with:

echo progress-bar >> ~/.curlrc

* RVM will set first installed ruby as default and use it if run as function.
To avoid this behavior either use full path to rvm binary or prefix it with `command `.

* To update RVM loading code run 'rvm get … --auto-dotfiles'

* RVM 1.20 changes default behavior of Autolibs to Enabled - if you prefer the 1.19 behavior
then run "rvm autolibs read-fail", read more details: rvm help autolibs

* RVM 1.24 changes default package manager on OSX to Homebrew,
use `rvm autolibs macports` if you prefer Macports.

* RVM 1.24 changes default `--verify-downloads` flag to `1` you can get the paranoid mode again with:

echo rvm_verify_downloads_flag=0 >> ~/.rvmrc

* RVM 1.25 disables default pollution of rvm_path/bin, you still can generate the links using:

rvm wrapper ruby-name # or for default:
rvm wrapper default --no-prefix

* RVM 1.25.11 'rvm remove' will by default remove gems, to remove only ruby use 'rvm uninstall'

Mavericks Upgrade and Postgres Pain

Just like the last time I upgraded my OS a mere 3 months ago (though after this week, it feels like a lifetime), when I upgraded to Mavericks last weekend so that the App Store would shut up about it, my PostgreSQL installation got hosed. It took until today, off and on, to figure it out. It was similar to last time, but this time I knew my Homebrew installation of PG was correct. I finally found the answer here in the comments to a previous post about a similar problem: I had to symlink an existing socket file into the location the Homebrew installation was expecting it. If that sounds like I know what I’m talking about, forget it, because I have no idea why the socket file was in the wrong place. Still, it fixed the problem immediately.
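For future me: the fix was just a symlink. Here’s the shape of it sketched in Ruby against scratch paths, since the real paths (mine came from that comment thread) vary by machine and the hard part is figuring out where the server actually put the socket versus where the client is looking.

```ruby
require "tmpdir"
require "fileutils"

# Stand-ins for the real directories. On my machine, PostgreSQL created
# its socket in one place while Homebrew's psql looked in another.
actual_dir   = Dir.mktmpdir # where the server actually put the socket
expected_dir = Dir.mktmpdir # where the client was looking for it

socket = File.join(actual_dir, ".s.PGSQL.5432")
FileUtils.touch(socket) # stand-in for the real unix socket file

# The fix itself: link the existing socket into the expected location.
File.symlink(socket, File.join(expected_dir, ".s.PGSQL.5432"))
puts File.symlink?(File.join(expected_dir, ".s.PGSQL.5432")) # prints true
```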

I write about this only because at some point, when some other OS upgrade message beats me down, I’ll upgrade and face the same thing again. Or maybe a new problem. Who knows.