Game of Beer

Conway’s Game of Life

The Game of Life is a cellular automaton devised by the British mathematician John Conway. It requires no players and depends only on an initial seed state. Like many other games, it has rules:

1. Any live cell with fewer than two live neighbours dies.
2. Any live cell with two or three live neighbours lives on to the next round.
3. Any live cell with more than three live neighbours dies.
4. Any dead cell with exactly three live neighbours becomes live – this is like reproduction, according to Wikipedia.
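The four rules boil down to a single survival test. A minimal sketch (the name `nextState` is mine, not from the post's final code):

```scala
// Sketch: the four rules as one predicate.
// `alive` is the cell's current state, `n` its count of live neighbours.
def nextState(alive: Boolean, n: Int): Boolean =
  if (alive) n == 2 || n == 3 // rules 1-3: survive with exactly 2 or 3
  else n == 3                 // rule 4: reproduction
```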

There are a number of patterns associated with the Game of Life; some are static, others are dynamic. A couple of examples are the beehive (static)

-xx-
x--x
-xx-

the boat (static)

xx-
x-x
-x-

and the blinker (dynamic)

-----  -----
--x--  -----
--x--  -xxx-
--x--  -----
-----  -----

Like the Roman Numeral problem [https://biomunky.wordpress.com/2014/03/22/roman-numerals-the-default-dojo-option/], the Game of Life can frequently be found scrawled on the coding dojo whiteboard, indeed I’ve tried the problem before at one of the London Clojure Dojos [https://github.com/biomunky/ldnclj-dojo-team-5-game-of-life].

An attempt with Scala

Since the problem is generally about cells, my starting point is just that, a cell:

case class Cell(x: Int, y: Int)

The cell is the center of our world! It contains an x: Int – the row – and a y: Int – the column. For every cell in our game we need to be able to get its neighbours, so we define a function … let’s call it neighbours. Before we do this, we should sketch out what we want.

Given a simple 5×5 board where the central cell [2,2] is alive:

- - - - - 
- - - - -
- - X - -
- - - - -
- - - - -

what’s going on around the center cell? We can redraw the grid and make it look like a simple list of lists …

[0,0] [0,1] [0,2] [0,3] [0,4]
[1,0] [1,1] [1,2] [1,3] [1,4]
[2,0] [2,1] [2,2] [2,3] [2,4]
[3,0] [3,1] [3,2] [3,3] [3,4]
[4,0] [4,1] [4,2] [4,3] [4,4]

If we focus in on 2,2 we have:

[1,1] [1,2] [1,3]
[2,1] [2,2] [2,3]
[3,1] [3,2] [3,3]

If we look hard at this it becomes clear that there’s a certain pattern in the surrounding cells

[-1,-1][-1, 0][-1, 1]
[ 0,-1][ cell][ 0, 1]
[ 1,-1][ 1, 0][ 1, 1]

We can use this to create our neighbour function – my initial version looked as follows:

def neighbours(cell: Cell): List[Cell] = {
	val positions = List(-1, 0, 1)
	positions flatMap (x =>
		positions.map (y => Cell(cell.x + x, cell.y + y))
	) filterNot (_ == cell)
}

The function takes the cell of interest and calculates the neighbourhood; once all the positions (including the input cell’s own) have been generated, we filter out the input cell. So that’s flatMap, map and filterNot! Too many things in one place – it’s probably possible to clean that up.

Yup, it is. Rather than using the 3 functions we can use a for comprehension with a filter to only yield the cells we want. This simplifies the previous function to:

  def neighbours(cell: Cell): List[Cell] = {
    val positions = List(-1, 0, 1)
    for (i <- positions; j <- positions; if Cell(cell.x + i, cell.y + j) != cell)
      yield Cell(cell.x + i, cell.y + j)
  }

If you copy that function into the REPL and run it as follows:

scala> neighbours(Cell(2,2))

you will get

List[Cell] = List(Cell(1,1), Cell(1,2), Cell(1,3), Cell(2,1), Cell(2,3), Cell(3,1), Cell(3,2), Cell(3,3))

and that matches the 3×3 grid above – woo hoo.

What we need to do now is work out whether a cell is going to live on or die off. However, we aren’t going to be dealing with one cell but rather a collection of cells – otherwise it’s going to be a very short Game of Life.

To help with this problem I created another case class, which for the current purpose looks as follows:

case class Grid(x: Int, y: Int, livingCells: List[Cell])

The x and y are the size of the grid and livingCells are our seed cells – remember that we only care about the LIVING cells. So, first things first: given a cell, we need to establish its local environment – a count of the living cells – but we need to do it for multiple cells in one sitting. How?

  def candidates(livingCells: List[Cell]): Map[Cell, Int] = {
    livingCells
      .flatMap(neighbours)
      .foldLeft(Map.empty[Cell, Int]) { (amap, acell) =>
        amap ++ Map(acell -> (amap.getOrElse(acell, 0) + 1))
      }
  }

The first step is to calculate the neighbourhood – we use the `neighbours` function defined above. To apply it to all cells in the living list and return a simple flattened list, we use flatMap. The foldLeft then tallies how many times each cell appears, so the function returns a map from every potential cell to its count of living neighbours, something like:

        Cell(0,3)   -> 1,
        Cell(3,0)   -> 1,
        Cell(1,1)   -> 4,
        Cell(1,-1)  -> 2,

The map is going to contain cells that don’t exist on the board (negative coordinates, for example)! That kind of sucks, but it isn’t the end of the world – as I stated above, we (I) need two functions here.
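If the phantom cells bother you, one hypothetical fix (not part of the post’s final code – `onBoard` is a name I made up) is to drop candidates that fall outside the board before counting. The case classes are repeated here so the snippet stands alone:

```scala
case class Cell(x: Int, y: Int)
case class Grid(x: Int, y: Int, livingCells: List[Cell])

// Hypothetical helper: true only if the cell sits inside an x-by-y board.
def onBoard(grid: Grid)(cell: Cell): Boolean =
  cell.x >= 0 && cell.x < grid.x && cell.y >= 0 && cell.y < grid.y
```

You could then slip a `.filter(onBoard(grid))` in between the flatMap and the foldLeft.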

Enter the second function: step. Step is as follows:

  def step(grid: Grid): Grid = {
    val newCells = candidates(grid.livingCells).filter { case (loc, living) =>
      living == 3 || (living == 2 && grid.isAlive(loc))
    }.keys.toList
    grid.copy(livingCells = newCells)
  }

The step function takes our grid – the list of living cells – and returns a new grid with the updated cells. It first calculates the possible candidates, then filters the map so that we are left with only cells that have a count of 3 living neighbours, or a count of 2 where the cell is currently alive. Since this returns a Map of cells and counts matching our criteria, we want to keep only the keys (the grid coordinates), which we get by calling .keys on the filtered map. This is then converted to a list and a copy of our existing grid is created – with new living cells.

But wait, there’s no .isAlive!?!?

We need to update the Grid class. Copy-paste the following function into the Grid case class:

def isAlive(cell: Cell) = livingCells contains cell

This function returns true if the grid.livingCells contains the passed Cell.

I’ve quickly tested this code (only tried a 3×3 with a blinker) and pushed to github – take a look for tests and bad code :)
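Pulling the pieces above together into one self-contained sketch, stepping the vertical blinker once should give back the horizontal blinker from the diagram at the top:

```scala
case class Cell(x: Int, y: Int)

case class Grid(x: Int, y: Int, livingCells: List[Cell]) {
  def isAlive(cell: Cell) = livingCells contains cell
}

// All eight cells surrounding the given cell.
def neighbours(cell: Cell): List[Cell] = {
  val positions = List(-1, 0, 1)
  for (i <- positions; j <- positions; if Cell(cell.x + i, cell.y + j) != cell)
    yield Cell(cell.x + i, cell.y + j)
}

// Map each candidate cell to its count of living neighbours.
def candidates(livingCells: List[Cell]): Map[Cell, Int] =
  livingCells
    .flatMap(neighbours)
    .foldLeft(Map.empty[Cell, Int]) { (amap, acell) =>
      amap ++ Map(acell -> (amap.getOrElse(acell, 0) + 1))
    }

// Keep cells with 3 live neighbours, or 2 if already alive.
def step(grid: Grid): Grid = {
  val newCells = candidates(grid.livingCells).filter { case (loc, living) =>
    living == 3 || (living == 2 && grid.isAlive(loc))
  }.keys.toList
  grid.copy(livingCells = newCells)
}

val vertical = Grid(5, 5, List(Cell(1, 2), Cell(2, 2), Cell(3, 2)))
val next = step(vertical)
// next.livingCells contains exactly (2,1), (2,2), (2,3) - in some order,
// since Map keys are unordered.
```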

This post has been fuelled by Meantime Pale Ale and Brooklyn Lager.

Reverse Array/List: Interview Question

I was having a chat with a couple of guys at work about interviews and their experiences at various companies, and a few common problems popped up: fizzbuzz, reverse an array, and which number has been removed from an array – you are allowed only one pass and you know what the total of the complete array is supposed to be.

I have an entry on here somewhere about fizzbuzz so won’t go over it. The number missing from the array is simple: sum the values left in the array and subtract that from the expected total.
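That trick as a one-liner (a sketch; `missing` is just a name I picked):

```scala
// One pass: the missing value is the expected total minus the sum of what's left.
def missing(xs: List[Int], expectedTotal: Int): Int = expectedTotal - xs.sum
```

For example, removing 3 from 1..5 (total 15) leaves List(1, 2, 4, 5), and 15 - 12 = 3.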

Reverse an array is a bit more fun than the missing number and is more about recursion than fizzbuzz. Since it’s Sunday night, I am somewhat bored and my wife is looking at cat pictures, I figured I would chuck out a couple of implementations in a couple of languages.

Elixir – based on the Erlang VM, has Ruby-ish syntax and looks like a really nice new language

defmodule Hack do
  def reverse_list([h|t], acc // []) do
    Hack.reverse_list(t, List.concat([h], acc))
  end

  def reverse_list([], acc) do
    acc
  end
end
This code requires that you compile it first using elixirc and fire up iex. You can run it as a script (.exs) as shown here, but I’ve got a couple of issues with it (see code annotations)

The same can be achieved using foldl

List.foldl([1,2,3], [], fn (i, acc) -> List.concat([i], acc) end)

Scala – based on the Java VM

def reverser[T](x: List[T]): List[T] = x match {
    case Nil => List.empty
    case x :: xs => reverser(xs) :+ x
}

you can also do this with a foldLeft

List(1,2,3).foldLeft(List[Int]())( (a, b) => b +: a)
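The pattern-match version above rebuilds with `:+`, which is O(n) per append on List. A tail-recursive accumulator version (a sketch, mirroring the Elixir one above) avoids that:

```scala
import scala.annotation.tailrec

def reverseAcc[T](xs: List[T]): List[T] = {
  @tailrec
  def loop(rest: List[T], acc: List[T]): List[T] = rest match {
    case Nil    => acc
    case h :: t => loop(t, h :: acc) // prepend is O(1) on List
  }
  loop(xs, Nil)
}
```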

Clojure – based on the Java VM

(defn reverser [coll] (reduce conj '() coll))

Connecting Hive and R via MySQL using Sqoop

Rrrrrr – because pirates > ninjas

This is all kind of backwards – why would I want to take data out of cascalog/hdfs to MySQL and then back into hdfs/hive? Anyway, the general aim here is to get some data from Hive into R and do something with it.

Step 1: Get Data
In the first step we will grab some data from an existing database, created when doing a demo of cascalog, and load it into Hive using a tool called Sqoop. Sqoop is a tool designed to move large quantities of data between hdfs and structured datastores.

Download sqoop and cd into bin/

./sqoop import --connect jdbc:mysql://localhost/cascalog  --username dan  --table unique_user_counts -m 1 --hive-import

This will import data from the cascalog MySQL database, specifically the unique_user_counts table. The --hive-import switch will place the data into Hive (I installed Hive using Homebrew). The -m flag defines the number of mappers the job will use – in this instance, just 1.

You can take a peek at the data using the Hive repl.

λ bin  hive
hive> show databases;
Time taken: 2.48 seconds
hive> use default;
Time taken: 0.014 seconds
hive> show tables;
Time taken: 0.286 seconds
hive> select * from unique_user_counts;

This dumps out the data. If you want to know more about your table then you can use the describe ‘table_name’ command.

To get data into R, Hive must be started in server mode:

hive --service hiveserver

The final step is R …

library(RJDBC)

hadoop_jars <- list.files("/usr/local/Cellar/hadoop/1.1.1/libexec", pattern="jar$", full.names=T)
hive_jars <- list.files("/usr/local/Cellar/hive/0.9.0/libexec/lib", pattern="jar$", full.names=T)

hive_driver <- JDBC("org.apache.hadoop.hive.jdbc.HiveDriver", c(hive_jars, hadoop_jars))

hive_conn <- dbConnect(hive_driver, "jdbc:hive://localhost:10000/default")

rs <- dbGetQuery(hive_conn,"select client, count from unique_user_counts")

x <- 1:length(rs$count)
plot(x, rs$count)

This gives me a very boring plot – it’s really of no interest at all; it’s marginally more interesting than doing a select count(*) from …

The *interesting* part here is the definition of the jars required to connect to Hive; this is largely identical to a previous post.

Connect to Lucid using R

*will work for any JDBC db*

First you will need to install the RJDBC library. I did this using the R package manager (remember to include all dependencies).

The following lines connect to my local lucid instance, select the number of users from my pretend visitors table and store the result in the ‘result’ variable.

driver <- JDBC("org.luciddb.jdbc.LucidDbClientDriver", "/Users/dan/Desktop/LucidDbClient.jar", identifier.quote="'")
conn <- dbConnect(driver, "jdbc:luciddb:http://localhost:8034", "sa", "")
result <- dbGetQuery(conn, "select count(*) from visitors")

Remember, be a good person and close your connection :)
You can find out more about RJDBC here: http://cran.r-project.org/web/packages/RJDBC/index.html

Luciddb clojure (note to self)

connect to lucid with clojure and do something, anything.

(use 'clojure.java.jdbc)

(def db {:classname "org.luciddb.jdbc.LucidDbClientDriver"
  :subname "http://localhost:8034"
  :subprotocol "luciddb"
  :user "sa"})

(with-connection db
  (with-query-results rs ["select count(*) as ctx from reports.pages"]
  (doseq [row rs] (println (:ctx row)))))

The result is 1400000 – very exciting. :ctx is the column identifier – in Scala/Java this would look something like rs getInt “CTX”, or rs.getInt(“CTX”) if you want to be verbose!

There is a small amount of work that has to be done before this code will run – we must first get the required lucid jars. I followed the steps described here: http://www.pgrs.net/2011/10/30/using-local-jars-with-leiningen/. This resulted in a maven_repository in the directory where I ran mvn install. The problem was that the .jar wasn’t copied into that maven repo – only the pom was. I did an ls of ~/.m2 and found a new lucid directory – again it had the pom & no jar. I copied the .jar to the same place as the pom and things worked. The thing I *really* hate about Java** (and the languages that use the JVM) is the classpath mess – perhaps Python/Perl have spoilt me.

☁  0.9.4  pwd

☁  0.9.4  ls
COPYING                   META-INF                  luciddb-0.9.4.jar         net
FarragoRelease.properties de                        luciddb-0.9.4.pom         org

** the gem system … f&%$ ruby gems (I use zsh/oh-my-zsh which, I am told, is part of the problem)