Darth Null’s Ramblings


Hello! I'm David Schuetz.
This is where I ramble about...stuff.

ShmooCon 2017 Badge (and more) Contest - Solutions

Shall We Play A Game?

It’s been a long time since I did a big puzzle solution post, and even longer since I played a crypto contest at ShmooCon. That’s about to change. :)

After winning three years in a row, and running the ShmooCon contest for four years after that, I finally stepped away from the fray in 2016. But I did help out a little, commenting on the puzzles they were putting together and generally offering advice. This year, though, about 2 weeks before ShmooCon started, it dawned on me: I haven’t heard a single thing about the contest. I CAN PLAY!

But I didn’t jump right in. I even got some taunting comments over Twitter from the contest organizers, urging me to try a few of the puzzles. Then Saturday morning, I went to chat with them in the Chill Out room, and joked that I should just “find the team that’s in 3rd place and help them.” Then I turned to the table next to me, and found out they were, in fact, in 3rd place.

So I sat down and went through what they’d accomplished so far (quite a bit, actually) and gave them some suggestions for a couple other puzzles. But I was determined to stay kind of in the background. Then I started looking more closely at the chain of puzzles which started with the conference badges… and got sucked right in. And that’s how I ended up a (silent) part of the Pikachu Mafia.

My initial attempts to keep my distance were pretty well summed up by this tweet from my wife, regarding whether or not my bag was still hiding in a corner at registration:

"As long as my bag is here, I’m not working the puzzle" - @DarthNull, at breakfast
2 hours later... bag gone. #CanStopAnyTime

Yes. I can stop anytime I want to. Apparently, I don’t want to. :)

Contest Overview

The contest followed the same basic pattern that’s been used for the last two years at ShmooCon, which was itself inspired by the 2014 BSidesLV contest. It’s a series of games and puzzles, some of which are just silly things to do, some are harder puzzles, and some are chained-together cryptography challenges.

It included four “tracks” of puzzles, based on the tracks of the conference: One Track Mind (red), Build It (purple), Belay It (blue), and Bring It On (orange). Scoring was based on blocks (10 points each), pieces (individual groups of blocks forming single Tetris-like pieces, 5 points each), tracks (both pieces of the same color, 20 points), and rows (completing each of the bottom three rows of the puzzle board, 20 points per row). A bonus was also awarded to the first team to complete each scoring item (1 point for blocks, 4 for pieces, and 8 points for the first to complete individual tracks and rows).

Tetris Board

** SPOILERS BELOW ** If you’d like to try to solve some of these challenges on your own, go to this spoiler-free list of challenges.

I’m Not a Ringer

I do have a bit of a reputation for these games, and my presence on the team did not go unnoticed by RPISEC and Decipher, the teams which had been jockeying for first place since the contest began. However, I’d like to note a few things.

So this was definitely a team effort. Hats off to the team — they worked their butts off for many of these, and definitely earned their prizes (I didn’t claim any prizes, mostly because I’m on con staff and so don’t need tickets).

Also, I’d like to point out that I have a very well known and documented habit of getting totally bogged down in the wrong path, and definitely got stuck badly on a couple of the puzzles this time.

Puzzle Solutions

Belay It 1 - Total Control

This one was built from a series of images on signs around the conference. Each sign included 16 images of a game controller (though one had only 12 images). The only thing that varied amongst these images was the D-pad buttons, which were either totally blank, or completely filled in, or a thin outline of a button. This immediately suggested a trinary code to me, and I said as much to the team. The hard part would be determining the “most significant trinary-bit (trit?).”

Sign Sign Sign Sign Sign Sign

After suggesting this, I went back into a rabbit hole on other puzzles, and when I came back up for air, they’d finished most of the puzzle. I’m not sure how they solved it, but here’s how I did it myself a few days later.

The D-Pad buttons are either Outlined (O), Filled (F), or Blank (B). Starting with left and going clockwise:







(these are already in the right order, but in practice it’s not hard to reassemble them once you’ve solved the individual sequences).

I didn’t see any “OOOO,” and in fact there’s never anything but the plain “outline” for the leftmost button, so it looks like this encodes numbers from 1-26. Convenient, isn’t it? (222 in trinary is 2×9 + 2×3 + 2×1 = 26.) So “E” would be 5, or 012 in trinary. To jump-start the decoding, I counted the frequency of each four-character symbol:

     LURD (left, up, right, down)
  16  OBFO 
   8  OFOO 
   8  OBBO 
   7  OFOB 
   6  OOFF 
   6  OBOB 
   6  OBFF 
   5  OOBF 
   4  OOOF 
   4  OOOB 
   4  OBFB 
   3  OOFO 
   3  OFFO 
   3  OFFF 
   3  OBOF 
   2  OOBO 
   2  OFBF 
   1  OOFB 
   1  OFOF 

It’s looking pretty much like OBFO is E (with 16 occurrences), though that might also be a space (likely Z). Let’s assume it’s E. Now, is B the 1 or is F? Another common letter should be T, which is 202. This could be the symbol at #8 (OBBO), except that its buttons are in the same two positions as the buttons used for E, where T should use the 9s position. So…#6? OBOB? That’d mean B = 2, F = 1, and the order is Left, Down, Right, Up. (or counterclockwise from left, exactly opposite my initial assumption. Figures.)

Rearranging the columns and converting letters to trinary, and trinary to decimal, then decimal to letters, we get (and including the original arrangement of buttons):

     LDRU                 LURD
  16 OOFB  012 - 5   E    OBFO
   8 OOOF  001 - 1   A    OFOO
   8 OOBB  022 - 8   H    OBBO
   7 OBOF  201 - 19  S    OFOB
   6 OFFO  110 - 12  L    OOFF
   6 OBOB  202 - 20  T    OBOB
   6 OFFB  112 - 14  N    OBFF
   5 OFBO  120 - 15  O    OOBF
   4 OFOO  100 - 9   I    OOOF
   4 OBOO  200 - 18  R    OOOB
   4 OBFB  212 - 23  W    OBFB
   3 OOFO  010 - 3   C    OOFO
   3 OOFF  011 - 4   D    OFFO
   3 OFFF  111 - 13  M    OFFF
   3 OFOB  102 - 11  K    OBOF
   2 OOBO  020 - 6   F    OOBO
   2 OFBF  121 - 16  P    OFBF
   1 OBFO  210 - 21  U    OOFB
   1 OFOF  101 - 10  J    OFOF

(I didn’t do this all at once, but instead tried a few here and there until I was confident that I was getting reasonable results…though it’s clear that I could’ve gone right to this from my initial guesses.) Decoding the full message, using my original clockwise ordering of the buttons (the last column above), we get:

       T    H    E    R    I    S    H    T    M    O    O    S    E    I    N    T        

       H    E    W    R    O    N    S    J    L    A    C    E    C    A    N    M        

       A    K    E    A    L    L    T    H    E    D    I    F    F    E    R    E        

       N    C    E    I    N    T    H    E    W    O    R    L    D    S    O    W        

       A    K    E    U    P    E    A    K    E    W    P    A    N    D    S    M        

       E    L    L    T    H    E    H    A    S    H    E    S                            


Obviously I transcribed a couple things wrong, but the message is clear:

"The right moose in the wrong place can make all the difference in the world So wake up wake up and smell the hashes"

Belay It 2 - Pseudo-random


Go to /oneymasoon, see text “Setec Astronomy”.

I actually figured out pretty quickly, the night before, that “oneymasoon” is an anagram of “anonymoose.” But I never tried loading /anonymoose. Duh. Had I done that, I would’ve found the answer for the stage:

On the Internet nobody knows you are a moose!

Belay It 3 - Stonecutter


A very simple code using the Pigpen cipher: “Should have used wingdings.”

Belay It 4 - Scrapple


This is a simple Bacon cipher (essentially, a 5-bit binary code using straight and italicized characters to represent 0 and 1). It decodes to “CAKE IS A LIE” (which will come up again later…).
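A quick sketch of the idea, assuming the 26-letter Bacon variant (the classic cipher merges I/J and U/V, and the puzzle’s exact table may have differed), with “a” for straight and “b” for italic:

```python
def bacon_decode(ab: str) -> str:
    """Decode a Bacon cipher given as a string of 'a' (straight) and 'b' (italic)."""
    # 26-letter variant: each 5-symbol group is a binary number, A=0 ... Z=25
    to_bits = str.maketrans("ab", "01")
    return "".join(chr(ord("A") + int(ab[i:i + 5].translate(to_bits), 2))
                   for i in range(0, len(ab), 5))

# "CAKE" encodes as 00010 00000 01010 00100:
print(bacon_decode("aaaba" "aaaaa" "ababa" "aabaa"))  # CAKE
```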

Belay It 5 - Who you gonna call?

Links to an MP3 file: whoyougonnacall.mp3

This is simply DTMF tones for “0073735963,” which is a cheat code from…I guess…Mike Tyson’s Punch-Out!!? I would not have got that one (I’d’ve done the decode, but it would’ve taken a while to figure out it was a cheat code from a video game…).

Belay It 6 - Boring Compound


These are atomic weights, with no spaces between the individual elements listed, so you kind of have to manually break it all up. It works out like this (with one notable exception), and the element symbols spell out “IN SPACE NO ONE CAN HEAR YOU HACK”:

In  114.818 
S    32.065
Pa  231.03588
Ce  140.116 
No  102 [used atomic number, not weight]
O    15.9994

Ne   20.1797 
Ca   40.078
N    14.0067
He    4.002602
Ar   39.948

Y    88.90585
O    15.9994
U   238.02891
H     1.00794
Ac  227
K  39.0983
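Stitching the symbols back together is then just a transcription of the table above:

```python
# The weights in order, mapped back to element symbols (the "102" entry is
# nobelium's atomic number, the puzzle's one exception)
elements = [
    ("114.818", "In"), ("32.065", "S"), ("231.03588", "Pa"), ("140.116", "Ce"),
    ("102", "No"), ("15.9994", "O"), ("20.1797", "Ne"), ("40.078", "Ca"),
    ("14.0067", "N"), ("4.002602", "He"), ("39.948", "Ar"), ("88.90585", "Y"),
    ("15.9994", "O"), ("238.02891", "U"), ("1.00794", "H"), ("227", "Ac"),
    ("39.0983", "K"),
]
msg = "".join(sym for _, sym in elements).upper()
print(msg)  # INSPACENOONECANHEARYOUHACK
```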


Belay It 7 - (Data, Points)

Follow the white rabbit. (link to a screen full of chessboards).

The chessboard images are named 1A, 1B, 1C, 2A, 2B, 2C, and 3A, 3B, 3C. That sort of implies putting them into a 3x3 grid, and I saw a teammate working with such an image, thinking it would be a QR code. In fact, it was a Data Matrix code, but they were clearly on the right track, and once they completed blotting out every chess piece with a black square (getting something like the image below), we had the answer (a link to the url at /punchout).

Data Matrix Code

The URL is sufficient to win this stage. The content at the URL is needed for the next stage.

Belay It 8 - Screentest

The puzzle simply links to an ASCII punch card (using the word “loom,” possibly a reference to early uses of punch cards for controlling weaving machines):

 / ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
/  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
|  00000000000000000000000000000000000000000000000000000000000000000000000000000000
|  11111111111111111111111111111111111111111111111111111111111111111111111111111111
|  22222222222222222222222222222222222222222222222222222222222222222222222222222222
|  33333333333333333333333333333333333333333333333333333333333333333333333333333333
|  44444444444444444444444444444444444444444444444444444444444444444444444444444444
|  55555555555555555555555555555555555555555555555555555555555555555555555555555555
|  66666666666666666666666666666666666666666666666666666666666666666666666666666666
|  77777777777777777777777777777777777777777777777777777777777777777777777777777777
|  88888888888888888888888888888888888888888888888888888888888888888888888888888888
|  99999999999999999999999999999999999999999999999999999999999999999999999999999999

The result of #7 provides these blocks of hex data:

76 69 20 70 75 6E 63 68 63 61 72 64 20 

6C 72 20 6C 72 20 33 6C 72 20 34 6C 72 20 6C 72 20 32 6C 72 
20 6C 72 20 34 6C 72 20 32 6C 72 20 32 6C 52 20 20 20 20 20 
1B 32 6C 52 20 20 20 20 1B 32 6C 52 20 20 20 1B 32 6C 52 20 
20 20 1B 33 6C 72 20 32 6C 72 20 6C 72 20 32 6C 72 20 6C 72 
20 33 6C 72 20 6C 6C 52 20 20 20 20 20 20 20 1B 32 6C 72 20 
6C 72 20 33 6C 72 20 6C 72 20 6A 30 33 6C 72 20 33 6C 72 20 
32 6C 72 20 34 6C 52 20 20 20 20 1B 33 6C 72 20 32 6C 72 20 
32 6C 72 20 6C 72 20 36 6C 72 20 33 6C 72 20 6C 72 20 32 6C 
72 20 34 6C 72 20 33 6C 72 20 32 6C 72 20 33 6C 72 20 34 6C 
52 20 20 20 1B 32 6C 72 20 6C 72 20 34 6C 52 20 20 20 20 20 
1B 6A 30 52 20 20 20 20 1B 32 6C 52 20 20 1B 32 6C 52 20 20 
20 1B 34 6C 52 20 20 1B 32 6C 52 20 20 20 20 20 1B 33 6C 52 
20 20 1B 33 6C 72 20 32 6C 72 20 34 6C 52 20 20 20 20 20 1B 
33 6C 52 20 20 20 1B 33 6C 72 20 33 6C 52 20 20 1B 38 6C 52 
20 20 20 20 1B 33 6C 52 20 20 1B 6A 30 6C 6C 72 20 31 35 6C 
72 20 34 6C 72 20 36 6C 72 20 6A 30 36 6C 72 20 38 6C 72 20 
33 35 6C 72 20 31 31 6C 72 20 6A 30 33 6C 72 20 35 6C 72 20 
37 6C 72 20 34 6C 72 20 6C 72 20 31 32 6C 72 20 33 6C 72 20 
37 6C 72 20 31 32 6C 72 20 37 6C 72 20 6A 30 31 32 6C 72 20 
36 6C 72 20 37 6C 72 20 35 6C 72 20 38 6C 72 20 36 6C 72 20 
37 6C 72 20 6A 30 31 31 6C 72 20 31 33 6C 72 20 6C 72 20 33 
6C 72 20 6C 72 20 38 6C 72 20 32 6C 72 20 34 6C 72 20 34 6C 
72 20 39 6C 72 20 6A 30 72 20 31 33 6C 72 20 39 6C 72 20 31 
31 6C 72 20 31 39 6C 72 20 6C 72 20 32 6C 72 20 37 6C 72 20 
6A 30 36 35 6C 72 20 6A 30 6C 72 20 38 6C 72 20 31 38 6C 72 
20 39 6C 72 20 33 6C 72 20 31 31 6C 72 20 38 6C 72 20 36 6C 
72 20 6C 72 20 6A 30 35 6C 72 20 31 36 6C 72 20 32 30 6C 72 
20 35 6C 72 20 31 33 6C 72 20 34 6C 72

The first decodes simply to “vi punchcard”, which tells you to apply it to problem 8 and that you should be using the vi editor. The second decodes to a sequence of vi commands. If you delete the border of the card (the top and side edges, so that the fuzzy black bar and numbers are against the top and left edges of the editor window), then simply pasting the content from the second block will “punch” out the card for you.

░  ░░ ░░░  ░  ░░░ ░ ░     ░    ░   ░   ░░ ░  ░  ░░ ░       ░  ░░  ░░░░░░░░░░░░░░
░░░ ░░ ░ ░░░    ░░ ░ ░  ░░░░░ ░░  ░ ░░░ ░░ ░ ░░ ░░░   ░  ░░░     ░░░░░░░░░░░░░░░
    0  0   000  0     00  00 0 000     00   00 00  0000000    00  00000000000000
11 11111111111111 111 11111 1111111111111111111111111111111111111111111111111111
222222 2222222 2222222222222222222222222222222222 2222222222 2222222222222222222
333 3333 333333 333  33333333333 33 333333 33333333333 333333 333333333333333333
444444444444 44444 444444 4444 4444444 44444 444444 4444444444444444444444444444
55555555555 555555555555  55  5555555 5 555 555 55555555 55555555555555555555555
 666666666666 66666666 6666666666 666666666666666666  6 666666 66666666666666666
77777777777777777777777777777777777777777777777777777777777777777 77777777777777
8 8888888 88888888888888888 88888888 88 8888888888 8888888 88888  88888888888888
99999 999999999999999 9999999999999999999 9999 999999999999 99999999999999999999

Unfortunately, there were still some problems, and I haven’t looked deeper to figure out whether it’s a problem with vi on the Mac, or with errors in the contest code. However, the result is still clear enough to win credit.

We looked around for a while to try to find a good, automatic decoder page. This punch card emulator mostly worked, but had some issues with lowercase letters (basically, it only handled single punches in the top three “control area” rows). One team member (I’m not using names, as I never really got them… :) …nor explicit permission to use them anyway) got most of the puzzle figured out using this page, but then his laptop battery died. I picked up where he left off, manually decoding using the EBCDIC section here. Numbering the rows 12, 11, 10 at the top, then 1-9 below, the EBCDIC table easily converts from the punch card to text. For example, in the 1st column, 10 (0) and 6 are punched, which corresponds to a capital W. The next column has 12 (top) and 10 (0), as well as 8, which is a lowercase h. And so forth.
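The same lookup can be sketched in code. This is a partial rendering of the EBCDIC punch table, covering only the zone/digit combinations described above (the full table has many more entries):

```python
def punch_to_char(punches):
    """Decode one punch-card column (partial sketch of the EBCDIC table:
    only the zone/digit combinations discussed above are covered)."""
    digits = sorted(p for p in punches if 1 <= p <= 9)
    zones = {12, 11, 0} & set(punches)
    d = digits[0] if digits else None
    if zones == {12} and d:            # 12 + digit 1-9: A-I
        return chr(ord("A") + d - 1)
    if zones == {11} and d:            # 11 + digit 1-9: J-R
        return chr(ord("J") + d - 1)
    if zones == {0} and d and d >= 2:  # 0 + digit 2-9: S-Z
        return chr(ord("S") + d - 2)
    if zones == {12, 0} and d:         # 12 + 0 + digit: lowercase a-i
        return chr(ord("a") + d - 1)
    if not zones and d:                # digit alone: that digit
        return str(d)
    return "?"

# The two example columns from above:
print(punch_to_char({0, 6}), punch_to_char({12, 0, 8}))  # W h
```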

Unfortunately, a few columns had too many punches in them, and again I’m not sure where that problem came from. In the end, here’s the decoding we got:

12 ░..░░.░░░..░..░░░.░.░.....░....░...░...░░.░..░..░░.░.......░..░░..░░░░░░░░░░░░░░
11 ░░░.░░.░.░░░....░░.░.░..░░░░░.░░..░.░░░.░░.░.░░.░░░...░..░░░.....░░░░░░░░░░░░░░░
10 ....0..0...000..0.....00..00.0.000.....00...00.00..0000000....00..00000000000000
1  11.11111111111111.111.11111.1111111111111111111111111111111111111111111111111111
2  222222.2222222.2222222222222222222222222222222222.2222222222.2222222222222222222
3  333.3333.333333.333..33333333333.33.333333.33333333333.333333.333333333333333333
4  444444444444.44444.444444.4444.4444444.44444.444444.4444444444444444444444444444
5  55555555555.555555555555..55..5555555.5.555.555.55555555.55555555555555555555555
6  .666666666666.66666666.6666666666.666666666666666666..6.666666.66666666666666666
7  77777777777777777777777777777777777777777777777777777777777777777.77777777777777
8  8.8888888.88888888888888888.88888888.88.8888888888.8888888.88888..88888888888888
9  99999.999999999999999.9999999999999999999.9999.999999999999.99999999999999999999
   What is the most auctio ed Hend lo thed) item in ShMooCon hz  O y?

The correct decoding is “What is the most auctioned (and loathed) item in ShmooCon history?” and the answer, which we simply guessed from “most auctioned item” was “The Stargate.” Which was enough to win credit for the puzzle.

Build It

Build It 1 - Press Any key To start

2A1494 AA23A3 129213 931292 39B920 A01898 12921F 9F31B1 28A814 9439B9 1F9F12 921292 32B239 
B91494 189839 B930B0 129239 B91E9E 31B115 9539B9 1E9E31 B11595 39B925 A51292 15952A 0282AA

I wasn’t sure where to go with this, and suggested looking up the first two bytes to see if they matched the “magic number” for any known file types. Then a tweet suggested “hINT 9” for this puzzle. I asked the team what “INT 9” was (for PC interrupts) and was told it’s the keyboard handler. I immediately said “It’s keyboard scan codes.” And the next thing I knew, they had the answer:

"There doesn't seem to be any any key!"

Build It 6 - Primary Colors

L G R W F e G t e d u C o O m A T n o W u T c i K Z S q l o M t o l V V h p N P Y a E H L A N y X Q i d I S F N f D z Q g B I z h e M x D J c Q G B a P X s Q U w s I s e y e S W c t g E C j E o G L E R O U O w O i S g y L A A a M D w w J D w U e X U t c G I r z m O C W E N w b o L B o s n D E B m N s a e l c V n Z l a U D K s R L E w V e M h S N H t f w o i p p g a P V T k G L F Q A q Y Z f G M i z Q W X n g

At first, I figured this was going to be “zoom in on the image, find the hex value for each color, turn that into ASCII,” and just didn’t want to play. Later, I actually tried zooming in and realized that each letter had varying shades of color, due to the way they were rendered on the screen. Oh, and that it wasn’t an image, but instead HTML code.

Turns out that you simply had to find all letters with a color code that had either half- or full-brightness in a SINGLE color channel only. That is, where the Red, Green, or Blue element of the hex code was 80 or FF.

For example, in this fragment:

.... <font color="#0000FF"> T </font><font color="#228B22"> c </font><font color="#006400"> i ....

Only the letter T should be copied out as part of the final plaintext.

Because I’m a geek who would rather spend 20 minutes finding a single shell pipeline to solve a problem, instead of 5 minutes to Just Do It By Hand, here’s a way to solve it quickly:

$ cat colors | sed 's/font/@/g' | tr '@' '\n' | grep '#' | cut -c 10-15,18-19 | egrep '(FF)|(80)' | cut -c 8-8 | tr -d '\n'

In this case, the HTML segment (reproduced above) containing only the colored letters is stored in a file called “colors.” I change all occurrences of the word “font” to “@”, then turn each “@” into a newline (just because it’s easier than remembering how to escape \n in the sed command), and filter on only the lines with “#” in them. This has the effect of giving me output like this:

color="#0000CD"> L </
color="#D02090"> G </
color="#006400"> R </
color="#008000"> W </
color="#D02090"> F </
color="#006400"> e </
color="#006400"> G </
color="#006400"> t </
color="#FF0000"> e </

Then, I strip out just the hex color code and the corresponding letter, using the “cut” command, and further filter only on lines with FF or 80 in them (fortunately, none of the composite colors had either of these values…otherwise I would’ve just used a more specific filter). Finally, for those lines which match, I cut out just the last letter, and delete all the newlines I put in earlier, to get:


We do not stop playing because we grow old. We grow old because we stop playing.

HA! Take that, @aschuetz. I waste my time on these silly games to stay young! :)

(I’ve no idea how the team solved this one…they finished before I joined. I solved it on my own after the con ended…).

Build It 7 - Think Kwick

There are three keys to success.

All three of the puzzle makers had keys around their necks. Members of the team took them to Lockpick Village and measured the bitting on the keys. They later tweeted a picture of all the keys, as well. With the decoded values, we had something like this:

Key Codes

Unfortunately, when we first got the codes, we had them in reverse order (41621, 26154, and 14411). We tried a few different decodings, really hoping that octal-encoded ASCII would be the right answer, but got nowhere. Then later we learned that we’d had the numbers backwards, and the right answer was quickly found. The only way to string them together to form ASCII octal was:

    12614 11441 45162

    126 141 144 145 162
    V   A   D   E   R
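The octal-to-ASCII step is a one-liner:

```python
# Each 3-digit group is an octal ASCII code: 126 -> 'V', 141 -> 'a', ...
groups = "126 141 144 145 162".split()
print("".join(chr(int(g, 8)) for g in groups).upper())  # VADER
```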

Build It 8 - Now Turnkey


I immediately said “This is an ADFGX cipher,” and pointed the team to some online tools and wiki pages. They noticed the V later in the text (well, now that I look at it, in the second block), and correctly pivoted to the ADFGVX variant, used the word “VADER” from the last puzzle, and submitted the solution:


Bring It On

Bring It On 3 - Eat bit by bit


This was simple Morse code. Not so simple was the fact that they left out the breaks between letters, so you had to manually try different letter breaks until you eventually got a message that made sense. The team got stuck multiple times about halfway through, but eventually announced “something about maze of…twisty…?” I recognized the reference, said “Try ‘Maze of Twisty Passages,’” which worked, and boom! Another block solved.

Raw morse code:


Broken up into letters:

-.-- --- ..- ... --- .-.. ...- . -.. .- -- .- --.. . --- ..-. - .-- .. ... - -.-- .--. .- ... ... .- --. . ...

Actual solution:
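(Once the letter breaks are in place, the decode itself is mechanical; a quick sketch:)

```python
# International Morse code table
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

code = ("-.-- --- ..- ... --- .-.. ...- . -.. .- -- .- --.. . --- ..-. "
        "- .-- .. ... - -.-- .--. .- ... ... .- --. . ...")
print("".join(MORSE[c] for c in code.split()))  # YOUSOLVEDAMAZEOFTWISTYPASSAGES
```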


Bring It On 5 - Don’t Use Rumkin


I looked at this, tried a couple things, and gave up. I figured it was going to be a pain, and just didn’t want pain. Then the guys running the contest started to hound me to try it…to the point where every time I got distracted by helping the Pikachu with another puzzle, they’d say “he’s stopped again!” (their table was next to ours). Finally, I gave in, after they said “Just go to rumkin and do the first, most natural, thing you can think of.”

So I tried a Vigenère cipher with DARTHNULL as key. Didn’t work. Hmpf.
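Trying candidate keys by hand goes quickly with a small helper. This is a generic Vigenère decrypt; since the actual ciphertext isn’t reproduced here, the demo below just round-trips a made-up example:

```python
def vigenere_decrypt(ct: str, key: str) -> str:
    """Shift each letter back by the corresponding key letter (A = 0)."""
    out, j = [], 0
    for ch in ct:
        if ch.isalpha():
            shift = ord(key[j % len(key)].upper()) - ord("A")
            out.append(chr((ord(ch.upper()) - ord("A") - shift) % 26 + ord("A")))
            j += 1
        else:
            out.append(ch)          # pass punctuation and spaces through
    return "".join(out)

# "PIG" is "YOU" enciphered under the key RUMKIN:
print(vigenere_decrypt("PIG", "RUMKIN"))  # YOU
```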

Judging by their reactions, though, I was on the right track, so I tried SHMOOCON, PASSWORD, and eventually got it. I’m not sure if I actually guessed the key or if I tried cribbing it by assuming the result began with “YOU”, but the right key was… “RUMKIN.” Which gave the result:


Bring It On 6 - Triskaidekaphobia


This was ShmooCon 13, beginning on Friday the 13th, so it was natural that some kind of focus on 13 would happen at some point. This stage, as it turns out, is simply ROT-13 applied to “SHMOOCON” to get “FUZBBPBA”. I’d’ve never even submitted that, as it just didn’t seem to make sense, but fortunately the rest of the team wasn’t quite so picky.
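ROT-13 needs no special tooling; Python ships a text transform for it:

```python
import codecs

# rot_13 maps each letter 13 places around the alphabet
print(codecs.encode("SHMOOCON", "rot_13"))  # FUZBBPBA
```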

Bring It On 7 - Get Crackin’

This stage included a link to a password file:


Googling for the first password hash revealed “toor” as the password — the userid, spelled backwards. Unfortunately, the second password can’t be found via Google. However, it’s possible to manually test passwords using the OpenSSL command. For example, the following uses the salt (“/M”, between the 2nd and 3rd ‘$’ symbols in the hash), and passes the password “toor” in via standard input, to generate, hopefully, the same hash as seen above:

$ echo -n toor | openssl passwd -stdin -1 -salt /M

and it does. So, let’s try “moose” backwards for the second one:

$ echo -n esoom | openssl passwd -stdin -1 -salt cd611a44

Yup! That’s it. I believe the team simply had to submit the two passwords “toor” and “esoom” for credit (this was completed before I joined).

Bring It On 8 - WHOPPER

The puzzle links to a large map, with a userid and password field. Also completed long before I joined, this one initially had me trying random passwords related to WarGames. Turns out that you simply needed to use the “moose / esoom” credentials from the previous stage. After doing that, an animated GIF displays:


It takes a moment to get started, but eventually you see a cursor blinking…and as you watch, it should become apparent that the cursor is blinking out a pattern.

To read it out, it’s easiest to film the GIF and slow it down (or, probably simpler, just find an app that lets you edit animated GIFs). Once this is done, you get a list of numbers, 1-5 (for example, the sequence begins with 3 short blinks, a pause, then a single blink, another pause, another singleton, then 5 blinks, etc.). I’ve paired up the numbers here:

31 15 44 44 23 15 52 34 34 25 24 15 52 24 33

The numbers end up being a knock code (a 5×5 Polybius-style square, omitting J):

    1 2 3 4 5
1   a b c d e
2   f g h i k
3   l m n o p
4   q r s t u
5   v w x y z

So “31” tells you to go to row 3, column 1, for “L”. “15” is likewise “E”, “44” is “T”, etc. The answer, then, is “LET THE WOOKIE WIN”.
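Sketched in code, assuming the 5×5 grid above:

```python
SQUARE = "abcdefghiklmnopqrstuvwxyz"  # 5x5 grid, no 'j'

def knock_decode(pairs):
    """Each two-digit pair is (row, column), both 1-based."""
    return "".join(SQUARE[(int(p[0]) - 1) * 5 + int(p[1]) - 1] for p in pairs)

pairs = "31 15 44 44 23 15 52 34 34 25 24 15 52 24 33".split()
print(knock_decode(pairs).upper())  # LETTHEWOOKIEWIN
```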


One Track Mind

Finally, we get to the stage where I did most of my work. Aside from “Don’t Use Rumkin,” the only puzzles I really solved on my own (as opposed to the few I supplied hints or suggestions for) were all in this track. Inspired by multi-stage badge contests going back to ShmooCon 4, these elements all chain together, with the result of each stage providing a hint as to the method, or key, or both, for the next step. As is frequently the case, the hard part was getting started, even though the initial step was far easier than I made it out to be (and, it turns out, was used almost exactly the same way multiple times in the past, including in one of my own puzzles).

One Track Mind 1 - The System is Down


First, collect all the badges. They contained different elements related to video games of the past, but all included an 8-letter string of nonsense letters, and most (all but the Staff badge) included the name of a video game console system.

(attendee)  Atari /GNEATDEE
(attendee)  Gameboy /EWHAFNDI
(attendee)  Nintendo /ROTGOAAB
(events)    Xbox /TSSNNRHS
(speaker)   Playstation /AUIIIDTT
(staff)     /LINKXORS

The first step, which we totally missed until we had a typo pointed out to us, was easy. Arrange the badges in order of the consoles’ introduction (ignoring the staff badges) and read down:

Atari       GNEATDEE
Nintendo    ROTGOAAB
Gameboy     EWHAFNDI
Playstation AUIIIDTT
Xbox        TSSNNRHS

Reading down the columns: “GREAT NOW USE THIS AGAIN TO FIND AND READ THE BITS”
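Reading down the columns is a one-liner (the strings are the badge codes in console order, as listed above):

```python
rows = ["GNEATDEE", "ROTGOAAB", "EWHAFNDI", "AUIIIDTT", "TSSNNRHS"]
# zip(*rows) walks the columns; joining them reads the message top to bottom
print("".join("".join(col) for col in zip(*rows)))
```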


As I said, far easier than I’d initially thought (I was toying with ways to XOR all the badge codes together, as implied by the “/LINKXORS” code on the staff badge).

Of course, each element also had a “/” in front, so naturally we had to visit those pages on the ShmooTris site. Each stage provided a different, long, hexadecimal string:


Changing these to binary, and putting them together one next to the other, you end up with a VERY long, 5-pixel-wide vertical strip, with text running down in 5×7 character blocks. Unfortunately, scrolling sideways wasn’t easy in my terminal program, so I wrote a script that output it in short blocks. And because it was sideways, we had to turn the laptop on edge to read it.
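A sketch of such a script (the actual hex strings aren’t reproduced here, and the direction of rotation and bit order are exactly the sort of thing to get wrong on the first try):

```python
def hex_to_strip(hexstr: str, width: int = 5):
    """Turn a hex string into a list of width-bit rows (a tall vertical strip)."""
    bits = bin(int(hexstr, 16))[2:].zfill(len(hexstr) * 4)
    return [bits[i:i + width] for i in range(0, len(bits) - len(bits) % width, width)]

def print_rotated(rows):
    """Rotate the strip 90 degrees so the 5x7 glyphs read horizontally."""
    for line in zip(*rows):
        print("".join("*" if b == "1" else " " for b in line))
```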

Also, I initially got the bits backwards, so all the letters were reversed… Lots of fun to decode.

Here’s the output of a significantly-improved script that writes the result normally:

*   * ***** *   *   *         *     *****  ***  ***** ***** *   *   *      
*   * *     *   *   *         *       *   *   *   *   *     **  *   *      
*   * *      * *    *         *       *   *       *   *     **  *   *      
***** ****    *     *         *       *    ***    *   ****  * * *   *      
*   * *       *     *         *       *       *   *   *     *  **   *      
*   * *       *               *       *   *   *   *   *     *  **          
*   * *****   *     *         ***** *****  ***    *   ***** *   *   *      

***** *****   *    ***        ***     *   *   *  ***  ***** ****   ***  *   *  ***     
  *     *    *    *   *       *  *   * *  **  * *   * *     *   * *   * *   * *   *    
  *     *         *           *   * *   * **  * *     *     *   * *   * *   * *        
  *     *          ***        *   * *   * * * * *  ** ****  ****  *   * *   *  ***     
  *     *             *       *   * ***** *  ** *   * *     * *   *   * *   *     *    
  *     *         *   *       *  *  *   * *  ** *   * *     *  *  *   * *   * *   *    
*****   *          ***        ***   *   * *   *  **** ***** *   *  ***   ***   ***     

****  ***         ***   ***          *   *      ***  *   * *****   *   
 *   *   *       *   * *   *        * *  *     *   * **  * *       *   
 *   *   *       *     *   *       *   * *     *   * **  * *       *   
 *   *   *       *  ** *   *       *   * *     *   * * * * ****    *   
 *   *   *       *   * *   *       ***** *     *   * *  ** *       *   
 *   *   *       *   * *   *       *   * *     *   * *  ** *           
 *    ***         ****  ***        *   * *****  ***  *   * *****   *   

*****   *   *   * *****       ***** *   * *****  ***                                 ***  
  *    * *  *  *  *             *   *   *   *   *   *                               ** ** 
  *   *   * * *   *             *   *   *   *   *                                    ***  
  *   *   * **    ****          *   *****   *    ***                                  *   
  *   ***** * *   *             *   *   *   *       *                                 **  
  *   *   * *  *  *             *   *   *   *   *   *                                 *   
  *   *   * *   * *****         *   *   * *****  ***    *     *     *     *           **  


One Track Mind 2 - Plug and Chug

Use logic

This next one was even more difficult for me. After much flailing about, we were told that the “key” in the previous message was, literally, a key. To what? Well, the /LINKXORS path from the staff badge provided another hex string, this one broken up into 5-nybble blocks:

    6D003 A165B BBE0F 5A3AF 30641 AEA5D 52669 C7B8B A1567 8EEF2 A4C57 3A83D 4ED2D 61DA8

The key was 5 bits wide, too, which kind of also implies a connection.

     111      01110
    11 11     11011
     111      01110
      1       00100
      11      00110
      1       00100
      11      00110

I tried multiple approaches: converting the seven 5-bit key rows into 8-bit bytes, applying bits to the ciphertext in order, in columns, in all sorts of patterns… One sticking point was that the key was only 35 bits long, which didn’t make much sense either.

Finally, after more hints from the contest team, I got the right approach. First, take the hex stream and write it out in binary, as 4-bit nybbles:

01101101000000000011101000010110000 ....

Then write the keystream, just as it appears in the message, underneath it:

01110110110111000100001100010000110 ....

When you run out of key, just repeat it. XOR the two streams together and use each 5-bit result as an index into the alphabet (A=1).

6   D    0    0    3    A   1    6    5    (hex)
01101 10100 00000 00011 10100 00101 10010  (binary)
01110 11011 01110 00100 00110 00100 00110  (repeating keystream)
-----------------------------------------  (xor)
00011 01111 01110 00111 10010 00001 10100  (result)
3     15    14    7     18    1     20     (decimal)
C     O     N     G     R     A     T      (letters, A=1)
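Put together, the whole decode fits in a few lines (KEY is the 35-bit keystream transcribed from the 5×7 key glyph above):

```python
KEY = "01110110110111000100001100010000110"  # the 35-bit key from the badge art

def keystream_decode(hexstream: str) -> str:
    bits = bin(int(hexstream, 16))[2:].zfill(len(hexstream) * 4)
    out = []
    for i in range(0, len(bits) - len(bits) % 5, 5):
        k = KEY[i % len(KEY):i % len(KEY) + 5]  # 35 is a multiple of 5, so groups align
        n = int(bits[i:i + 5], 2) ^ int(k, 2)
        out.append(chr(ord("A") + n - 1) if 1 <= n <= 26 else "?")
    return "".join(out)

# First few blocks of the staff-badge ciphertext:
print(keystream_decode("6D003A165B"))  # CONGRATU
```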


So that’s solved now, too. What’s next?

One Track Mind 3 - Key Liem Pie

070E0511 64080174 69096F63 001C630F 0C016D6C
031C626C 09186C08 06056507 1D1B0A0D 07676F1D
076F7207 74091706 65686A1E 0A737B1B 001C1004

I immediately thought this would be similar to a puzzle I used a couple years ago, where the key was a floating-point representation of a mathematical constant (I used “e”). Here I guessed that it was “pi”, and that I needed to “chain” the plaintext together in successive blocks, just as I did in my own puzzle, and as hinted in the last stage’s solution.

The contest runners thought this was a good start, but that the key was simpler. Then I saw a hint with the name of the puzzle written a little differently: “KEY LIEm PIE”. LIE. Is the Cake a Lie? (which was used as the result to an earlier puzzle as well). I tried “CAKE” as the key and the first block decoded to “DONT”, so I knew I was probably on the right track.

I think, when I did this puzzle before, that I separated the key and the running chain into two different elements, but that wasn’t quite what was done here. Basically, “CAKE” was the key for the first block, then “DONT” was key for the second (giving [space]GO[space]), that was the key for the 3rd, etc. Imagine that the “key” is “00000000” but the IV is “CAKE” and go from there:

070E0511 64080174 69096F63 001C630F 0C016D6C  (CT)
43414B45 444F4E54 20474F20 494E2043 4952434C  (chained key/IV stream)
--------------------------------------------  (xor)
444F4E54 20474F20 494E2043 4952434C 45532E20  (PT)
D O N T    G O    I N   C  I R C L  E S .     (text)
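Scripted, the chaining looks something like this. A sketch using the full ciphertext from the puzzle; each block's plaintext becomes the key for the next:

```python
# Chained-key XOR decode: "CAKE" is the IV/first key; each decrypted
# 4-byte block then keys the following block.
blocks = [
    "070E0511", "64080174", "69096F63", "001C630F", "0C016D6C",
    "031C626C", "09186C08", "06056507", "1D1B0A0D", "07676F1D",
    "076F7207", "74091706", "65686A1E", "0A737B1B", "001C1004",
]
key = b"CAKE"
plaintext = b""
for blk in blocks:
    pt = bytes(c ^ k for c, k in zip(bytes.fromhex(blk), key))
    plaintext += pt
    key = pt  # chain: this block's plaintext keys the next block

print(plaintext.decode())  # begins "DONT GO IN CIRCLES."
```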


I completed this while waiting for dinner, then told myself I’d stop for a while. Yeah. Like I can do that.

One Track Mind 4 - Mutex


Following the URL from the last stage provides the following text:


There was no clue given for this puzzle, though “Mutex” is a bit of a hint, as many programming systems use “semaphores” to help manage mutual exclusion of programming threads. Or something like that. It’s in Wikipedia. Regardless, I immediately recognized that this would be a naval semaphore flag code, but put my phone down and ate dinner. Then as we were winding down and waiting to pay…I couldn’t leave well enough alone, and started decoding it, flipping back and forth between a note entry with the ciphertext and plaintext in progress, and a Google image of semaphore codes.

In this representation, imagine (for example) that “SNE” means a flag held straight down (S) and another up-and-to-the-left (NE), from the perspective of the viewer. This is an E.



One Track Mind 5 - Just Following Orders


At this point, I went up to my room to relax a bit before the party, and started coding up the solution to this. I recognized all the elements: “REVERSE SKIP” made perfect sense to me, as did the “ROT 7”, and I had a pretty good idea of how “ADD HALF TO HALF” would work. But no matter what I tried, or which halves I added, I couldn’t get anything that made sense. I pinged the contest team, and they said they’d be outside the party for a bit, so I stopped by and told them where things were breaking down. Turns out there was some kind of glitch with the way they created the ciphertext. They fixed it and assured me that it’d work now — and that the result of the first couple steps would give a clear intermediate result.

I then went to the party, found the rest of Team Pikachu and let them know where I was on this puzzle, then hung out for a few hours chatting with people… Sometime after midnight (maybe closer to 1? I forget) I returned to our room and knocked out the solution…but it still didn’t work.

Turns out…they used the “One Time Pad” at Rumkin in “DECRYPT” mode, which subtracts. And for that, order matters… So…subtract “ALMOST DONE” from “DZZHRE….” (D−A: 3−0 = 3 → D. Z−L: 25−11 = 14 → O. etc.)… and it works. Here are all the steps put together:


                *    *    *    *    *    * .....   [skip 5, select letter, repeat...]



Add halves: DONTZLEAVEZHOMEZWITHOUTZIRWQPAZUYONCKDHSTE (really, subtract, I think..it's all weird)
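The subtraction step can be sketched in a few lines (A=0, matching the arithmetic above; `otp_subtract` is my name for it, not Rumkin's):

```python
# Rumkin's "one-time pad, DECRYPT mode" is letter-by-letter subtraction
# mod 26, with A=0.
def otp_subtract(ciphertext, key):
    return "".join(
        chr((ord(c) - ord(k)) % 26 + ord("A"))
        for c, k in zip(ciphertext, key)
    )

print(otp_subtract("DZ", "AL"))  # "DO": D-A = 3 -> D, Z-L = 14 -> O
```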


Whew! I submitted the answer, copied the rest of the team, suggested what they should try next, and went to bed.

Well, not really…I just had to finish the next, trivial, stage first.

One Track Mind 6 - Timber


10000110011 10111011011 11101101100
00100111101 11000111011 00100100101
10001100101 01000111111 10101011101
10111110110 10011111000 10100001100
01101111100 01111010011 10011101010
11101101111 10100100001 11000101000
01100011100 00101010000 00110001111

I recognized this immediately as a Huffman tree. It’s used in data compression, to convert commonly-used letters to short “symbols” of bits, and less-frequently used letters to longer symbols of more bits. Fill out the boxes at the bottom with the letters from the last stage (IRWQPAZUYONCKDHSTE), then navigate the tree using the binary stream, with a “0” meaning “go left” and “1” for “go right”. Thus, since the string starts with “10000” or “RLLLL”, this brings you to the 8th box on the bottom, or “U”. “1100” (RRLL) gets to the 3rd from the end, or “S”, “111” to the last box on the right, “E”, etc.
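The walk itself is simple to code. Here's a generic sketch with a toy three-letter tree (the contest's actual tree was in the puzzle image, so I'm not reproducing it here):

```python
# Generic Huffman-style bit walk: "0" = go left, "1" = go right; on
# reaching a leaf, emit its letter and restart at the root.
def decode(bits, tree):
    node, out = tree, []
    for b in bits:
        node = node[int(b)]        # tuples are (left, right) children
        if isinstance(node, str):  # leaf: emit the letter, restart
            out.append(node)
            node = tree
    return "".join(out)

toy = ("A", ("B", "C"))  # toy code: A=0, B=10, C=11
print(decode("010110", toy))  # "ABCA"
```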


I confirmed this with the contest team but asked to not be credited for it yet…instead, I gave the rest of the Pikachu Mafia some hints and explained what Huffman coding was, and let them solve it Sunday morning.

One Track Mind 7 - Mass Transportation

5 4 1 4 4 5 1 4 2 1 3 5 5 4
3 2 2 5 4 4 1 4 2 1 2 5 4 4
3 3 1 3 2 5 4 4 1 4 3 1 2 2
1 5 2 5 5 4 1 4 4 5 4 3 3 1
3 3 2 1 2 2 3 5 1 5 1 2 2 2
3 5 3 5 2 5 2 1 3 2 2 5 1 1
3 5 2 2 1 1 1 2 3 5 1 4 3 5

I’d guessed pretty early on that this was another 5-square based cipher, like the Knock code or ADFGVX. In this case, it’s a Polybius Square (hinted by “THE WAY IS SQUARE” in the last stage). I found an online tool and started trying various keys.

In many implementations, this cipher uses two stages: A substitution key (where the alphabet in the cipher square is scrambled by the key) and a transposition key (where the result is further scrambled). So I started off trying different keys, related to mass transportation near the con. “SUBWAY” and “DUPONT CIRCLE” and “METRO” and stuff like that, all because of the “KEY IS HERE” hint in the last stage.

I was told, though, that “Mass Transportation” was a veiled hint towards the cipher name. A “bus” being a kind of transportation, and “mass” meaning “many” or “poly”, so “poly bus.” Argh.

So I tried “HERE” and “HERE SQUARE” and “HERE AND” and other things, then finally just tried “SHMOOCON.” That worked, using only the substitution elements, not the transposition. So basically:

  1 2 3 4 5
1 S H M O C
2 N A B D E
3 F G I K L
4 P Q R T U
5 V W X Y Z

Then take the cipher text in pairs, and index the square in row/column order. “5 4” yields “Y”, “1 4” gives us “O”, etc:
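Here's a quick sketch of that lookup in Python, run against the first two rows of ciphertext (the square is the SHMOOCON-keyed one above; the alphabet drops a letter, presumably J, to fit 25 cells):

```python
# Polybius decode: pair up the digits and index row/column into the
# SHMOOCON-keyed square (1-based).
square = ["SHMOC",
          "NABDE",
          "FGIKL",
          "PQRTU",
          "VWXYZ"]
digits = ("5 4 1 4 4 5 1 4 2 1 3 5 5 4 "
          "3 2 2 5 4 4 1 4 2 1 2 5 4 4").split()
pairs = zip(digits[0::2], digits[1::2])
result = "".join(square[int(r) - 1][int(c) - 1] for r, c in pairs)
print(result)  # "YOUONLYGETONET" -- i.e., "YOU ONLY GET ONE T..."
```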



I’d started this, then got stuck and started working on the next stage (translating Emoji to Hex), then came back to it when the answer hit me in the head. But I was glad that I’d started working on the last stage, as it turns out, as there was a LOT of transcribing to do…

One Track Mind 8 - :)


This would’ve been so much easier if these were presented as codes in HTML. But, no, it was just a picture. Fortunately, they used the same emoji set for this picture (they’re all Android emoji) and so once I found a good reference, at Emojipedia.org, it wasn’t too hard to look up a picture, find its value, and write it down.

I ended up paired with another Pikachu on this, which was great, cause even with two sets of eyes we still had a few errors. Not enough to keep us from completing the stage in the end, but enough to be annoying.

So after what seemed like forever, we had the entire image transcribed into hex. Each emoji’s Unicode code point started with “1F6”, so really only the last two digits mattered:

464413384649 15410D420148 491803254509 2A480F09450E
024601382202 030405484B46 3016004F0E45 1B4B091C1E47
071F224F4D0C 1E4E4D30410A 0E474746061E 1044051F0747
084F4D3A474F 022B1F490B48 4B1A
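The transcription trick can be sketched like this (the two emoji below are just illustrative examples, not necessarily characters from the puzzle):

```python
# Each emoji's code point is U+1F6xx, so the low byte is all that matters.
s = "\U0001F646\U0001F613"  # e.g. U+1F646 and U+1F613
out = ["%02X" % (ord(c) & 0xFF) for c in s]
print(out)  # ['46', '13']
```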

After finishing this, I went back to Polybius, hit on the right answer, and then gave hints to the rest of the team (so they could complete Polybius on their own for team credit). And then…I realized the significance of the Polybius solution:


Okay, a one-time-pad. That’s easy. Where’s the pad? Oh, it’s at “/LOL.” Which…gives me another screenful of emoji. I actually considered grabbing a box of shmooballs from registration to pour them over the contest team’s heads. I really didn’t want to transcribe another set of emoji, but we buckled down again and did it. It went a little faster this time, since we had so much practice from the first one.


1F0B4601140C 351545072100 0C4A4C050A4F 0A1B47440A41
41094F160244 4A4A4404071F 1046450E4D00 3B0C4C484B15
464C021B022C 4A0608100943 42130808283E 440C4C4C2702
460B1E1A1307 470B4C1D441A 1234

I wrote a simple script to add the two together (ciphertext and one-time-pad), and got… nothing. Gah.

Then I was told “Yeah, you’re on the right track, but used the wrong function.” Of course…I’m an idiot. It should’ve been XOR. I think I had addition on the brain after the “Just Following Orders” puzzle.

So XORing the two streams together (and eventually correcting the few errors we had in transcription), we got:


(Actually, the apostrophe didn't come through right but I think that was their error, not ours...)
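The XOR itself is a one-liner. A sketch against the first three blocks of each stream (as transcribed above):

```python
# XOR the emoji ciphertext with the /LOL one-time pad, byte by byte.
ct  = bytes.fromhex("464413384649" "15410D420148" "491803254509")
pad = bytes.fromhex("1F0B4601140C" "351545072100" "0C4A4C050A4F")
result = bytes(a ^ b for a, b in zip(ct, pad)).decode()
print(result)  # "YOU9RE THE HERO OF" -- the 9 is the garbled apostrophe
```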

We immediately emailed this answer in, and basked in the glow of a completed scoreboard:


Final Scores

So in the end, Pikachu Mafia were the only team to complete every puzzle on the board. Decipher came in a close second, only missing the last two of the One Track Mind track. We were first to solve 13 puzzles (Catch Them All - throw shmooballs at Heidi or Bruce, Triskaidekaphobia, Don’t Use Rumkin, Think Kwick, Now Turnkey, and all the One Track Mind puzzles). We were also first to complete both Bring It On tetris pieces, the second Build It piece, and both OTM pieces. Finally, we were first to complete three of the four tracks (RPISEC beat us to finishing the Belay It track) and 2 of the three rows (Decipher finished the bottom row before we did).

To get a better handle on the bonuses we scored, I’d asked for the scoring data from the contest team, and so I was able to build this fun graph showing the top four teams.


It’s interesting to see RPISEC and Decipher battling it out for most of Friday, then Pikachu starting to catch up and, midday Saturday, taking over. The Avengers team also put in a strong effort and solved a good number of puzzles. Sadly, it looks like RPI kind of fizzled out on Saturday. Decipher, however, put up a hell of a fight, and kept us plugging away at the puzzles without stop (can’t let ‘em catch up!!)


The bulk of the Pikachu Mafia team was called up to the stage during closing ceremonies to claim their prizes. As I wasn’t really able to win any prizes, I just sat back and let them bask in their glory, complete with matching pikachu hats. :) They won 3 tickets to ShmooCon next year, and the 4th team member selected some other prizes from the nice pile of swag on stage. Congratulations to them on a job very well done! Good luck next year! (and congratulations to the other teams, even the teams who only finished a few puzzles… I hope everyone had fun with the contest!)

Next Year

Will I play again next year? I don’t know. It’s fun, but it’s also fun not to be stressed. This was really the first time I’ve played any of these contests as part of a team (though I’ve teamed up with Alex Pinto for a couple of the Verizon DBIR puzzles), and it was definitely a different experience. Maybe I’ll join the 3rd place team again next year…. Or maybe I’ll even try to win on my own. Or maybe I’ll throw out bad hints just to confuse everyone. :) We’ll see…..

In the meantime, thanks again to the Pikachu Mafia for letting me ride their coattails and giving me incentive to solve the badge track! I’d forgotten how much fun this can be. :)

ShmooCon 2017 Badge (and more) Contest - Challenges

Belay It

1: Total Control

Look Around

* pictures on con signs outside rooms *


2: Pseudo-random


go to /oneymasoon, see text "Setec Astronomy".

3: Stonecutter


4: Scrapple


5: Who you gonna call?


6: Boring Compound


7: (Data, Points)

* nine chess board images *

8: Screentest

(link to “loom”, which presented this ASCII image:)

 / ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
/  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
|  00000000000000000000000000000000000000000000000000000000000000000000000000000000
|  11111111111111111111111111111111111111111111111111111111111111111111111111111111
|  22222222222222222222222222222222222222222222222222222222222222222222222222222222
|  33333333333333333333333333333333333333333333333333333333333333333333333333333333
|  44444444444444444444444444444444444444444444444444444444444444444444444444444444
|  55555555555555555555555555555555555555555555555555555555555555555555555555555555
|  66666666666666666666666666666666666666666666666666666666666666666666666666666666
|  77777777777777777777777777777777777777777777777777777777777777777777777777777777
|  88888888888888888888888888888888888888888888888888888888888888888888888888888888
|  99999999999999999999999999999999999999999999999999999999999999999999999999999999

Also relevant was the result from solving #7:

76 69 20 70 75 6E 63 68 63 61 72 64 20 

6C 72 20 6C 72 20 33 6C 72 20 34 6C 72 20 6C 72 20 32 6C 72 
20 6C 72 20 34 6C 72 20 32 6C 72 20 32 6C 52 20 20 20 20 20 
1B 32 6C 52 20 20 20 20 1B 32 6C 52 20 20 20 1B 32 6C 52 20 
20 20 1B 33 6C 72 20 32 6C 72 20 6C 72 20 32 6C 72 20 6C 72 
20 33 6C 72 20 6C 6C 52 20 20 20 20 20 20 20 1B 32 6C 72 20 
6C 72 20 33 6C 72 20 6C 72 20 6A 30 33 6C 72 20 33 6C 72 20 
32 6C 72 20 34 6C 52 20 20 20 20 1B 33 6C 72 20 32 6C 72 20 
32 6C 72 20 6C 72 20 36 6C 72 20 33 6C 72 20 6C 72 20 32 6C 
72 20 34 6C 72 20 33 6C 72 20 32 6C 72 20 33 6C 72 20 34 6C 
52 20 20 20 1B 32 6C 72 20 6C 72 20 34 6C 52 20 20 20 20 20 
1B 6A 30 52 20 20 20 20 1B 32 6C 52 20 20 1B 32 6C 52 20 20 
20 1B 34 6C 52 20 20 1B 32 6C 52 20 20 20 20 20 1B 33 6C 52 
20 20 1B 33 6C 72 20 32 6C 72 20 34 6C 52 20 20 20 20 20 1B 
33 6C 52 20 20 20 1B 33 6C 72 20 33 6C 52 20 20 1B 38 6C 52 
20 20 20 20 1B 33 6C 52 20 20 1B 6A 30 6C 6C 72 20 31 35 6C 
72 20 34 6C 72 20 36 6C 72 20 6A 30 36 6C 72 20 38 6C 72 20 
33 35 6C 72 20 31 31 6C 72 20 6A 30 33 6C 72 20 35 6C 72 20 
37 6C 72 20 34 6C 72 20 6C 72 20 31 32 6C 72 20 33 6C 72 20 
37 6C 72 20 31 32 6C 72 20 37 6C 72 20 6A 30 31 32 6C 72 20 
36 6C 72 20 37 6C 72 20 35 6C 72 20 38 6C 72 20 36 6C 72 20 
37 6C 72 20 6A 30 31 31 6C 72 20 31 33 6C 72 20 6C 72 20 33 
6C 72 20 6C 72 20 38 6C 72 20 32 6C 72 20 34 6C 72 20 34 6C 
72 20 39 6C 72 20 6A 30 72 20 31 33 6C 72 20 39 6C 72 20 31 
31 6C 72 20 31 39 6C 72 20 6C 72 20 32 6C 72 20 37 6C 72 20 
6A 30 36 35 6C 72 20 6A 30 6C 72 20 38 6C 72 20 31 38 6C 72 
20 39 6C 72 20 33 6C 72 20 31 31 6C 72 20 38 6C 72 20 36 6C 
72 20 6C 72 20 6A 30 35 6C 72 20 31 36 6C 72 20 32 30 6C 72 
20 35 6C 72 20 31 33 6C 72 20 34 6C 72

Build It

1: Press Any key To start

2A1494 AA23A3 129213 931292 39B920 A01898 12921F 9F31B1 28A814 9439B9 1F9F12 921292 32B239 
B91494 189839 B930B0 129239 B91E9E 31B115 9539B9 1E9E31 B11595 39B925 A51292 15952A 0282AA

2: Would you like to play another game?

Using backpack items recreate a screenshot of pacman

3: ConWords

Use all the words, bonus points for highest multipliers:


4: Moostalgia

Make a "ShmooCon" video game cartridge art for your favorite system.

5: Moostermind

Defeat us in a game of Mastermind.

6: Primary Colors

L G R W F e G t e d u C o O m A T n o W u T c i K Z S q l o M t o l V V h p N P Y a E H L A N y X Q i d I S F N f D z Q g B I z h e M x D J c Q G B a P X s Q U w s I s e y e S W c t g E C j E o G L E R O U O w O i S g y L A A a M D w w J D w U e X U t c G I r z m O C W E N w b o L B o s n D E B m N s a e l c V n Z l a U D K s R L E w V e M h S N H t f w o i p p g a P V T k G L F Q A q Y Z f G M i z Q W X n g

7: Think Kwick

There are three keys to success.


8: Now Turnkey


Bring It On

1: Initiative

Roll 12 on our D12.

2: RPS

Beat each of us in a game of Rock, Paper, Scissors.

3: Eat bit by bit


4: Catch them all

Use your Shmooball to capture Bruce, Heidi, others? Show your work!

5: Don’t Use Rumkin


6: Triskaidekaphobia


7: Get Crackin’

(link to the following /etc/passwd file)



Links to a page with a world map and login field. With the correct credentials (from the previous stage) the following image is retrieved:


One Track Mind

1: The System is Down


collect all the badges:

    (speaker)   Playstation /AUIIIDTT
    (attendee)  Gameboy /EWHAFNDI
    (attendee)  Atari /GNEATDEE
    (staff)     /LINKXORS
    (attendee)  Nintendo /ROTGOAAB
    (events)    Xbox /TSSNNRHS

For each badge, treat the ciphertext fragment as part of a URL. Go to each (like shmootris.shmoocon.org/GNEATDEE) to get a long hex string.


2: Plug and Chug

Use logic

3: Key Liem Pie

070E0511 64080174 69096F63 001C630F 0C016D6C
031C626C 09186C08 06056507 1D1B0A0D 07676F1D
076F7207 74091706 65686A1E 0A737B1B 001C1004

4: Mutex


The last stage gave a URL fragment, go there and get the following text:


5: Just Following Orders


6: Timber


10000110011 10111011011 11101101100
00100111101 11000111011 00100100101
10001100101 01000111111 10101011101
10111110110 10011111000 10100001100
01101111100 01111010011 10011101010
11101101111 10100100001 11000101000
01100011100 00101010000 00110001111

7: Mass Transportation

5 4 1 4 4 5 1 4 2 1 3 5 5 4
3 2 2 5 4 4 1 4 2 1 2 5 4 4
3 3 1 3 2 5 4 4 1 4 3 1 2 2
1 5 2 5 5 4 1 4 4 5 4 3 3 1
3 3 2 1 2 2 3 5 1 5 1 2 2 2
3 5 3 5 2 5 2 1 3 2 2 5 1 1
3 5 2 2 1 1 1 2 3 5 1 4 3 5

8: :)


The result from the last stage ends with another URL fragment, linking to another image of emoji.


Poem Codes - WWII Crypto Techniques


A few years back, after I won my first crypto contest, the contest author, G. Mark Hardy, suggested I read Between Silk and Cyanide. Written by Leo Marks, it’s a first-person account of the difficulties managing cryptographic communications with field agents in Europe during World War II.

Much of the story centered on the “poem codes” used by the agents, but the technical details were kind of obscure and not clearly explained. So I thought I’d do my best to document how I think it worked. This probably isn’t the exact method they used, but hopefully it’ll be close enough that you can get the general idea, and understand some of the difficulties these agents faced.


The British Special Operations Executive, or SOE, was tasked with “running” agents in Europe during World War II. These agents primarily operated in occupied or enemy territory, such as France and Germany, and therefore any communication with England presented exceptional risks. To protect their message traffic, it was encrypted. But, unlike today, they couldn’t simply install S/MIME on their smartphone. Instead, they had to manually encrypt and decrypt each message, and hand them off to an operator who’d send it out over shortwave radio in Morse code.

The encryption system they used had to meet several very important criteria:

  1. Requires only pencil-and-paper
  2. Each message should have its own key
  3. Must be reasonably secure against cryptanalysis, and most importantly,
  4. Must not leave any lasting evidence of code use (no codebooks, etc.)

The poem code system met these requirements. It uses a reasonably straightforward procedure that can be executed on paper. It supports unique keys for each message, while the “master key” used to derive message keys is a poem, committed to memory by the agent. Finally, the actual encryption used is double columnar transposition, providing (for the time) very good security.

However, it had some drawbacks. Though this system is simple in its mechanics and can be easily learned, it’s cumbersome, time-consuming, and prone to errors — even a small error can render an entire message indecipherable. If a message is re-transmitted because of errors, sending the exact same ciphertext twice with different keys can leak valuable information to an attacker. A similar risk occurs if the same key is used twice for different messages of the same length. Finally, if the poem used by an agent is ever discovered (or perhaps revealed through torture), then all communications to and from that agent could be easily deciphered.


When Leo Marks joined SOE, he quickly recognized some of these limitations and set about to mitigate the risks they posed. Under his guidance, the SOE personnel responsible for decoding agents’ messages from the field mounted a huge effort to decipher the “indecipherables.” Using cryptanalysis, and knowing the typical errors agents made (as well as any individual agent’s weaknesses), they significantly reduced the number of messages that couldn’t be deciphered. This lowered the number of retransmissions, which also reduced the time the radio operators needed to broadcast messages. This, coupled with better training for the agents, resulted in greatly improved reliability and decreased exposure for everyone.

Another major contribution was the use of custom poems. The practice up to that point had favored poems that were easy to memorize because, in many cases, they already were memorized by the agents. Popular poems, famous poems, favorite rhymes from childhood, etc., were all likely sources of an agent’s “personal” code poem. But their very familiarity presented a risk, in that the enemy could simply try the 100 most popular poems and greatly increase their chances of finding a match.

So Marks instructed each agent to create their own poem, known only to themselves and the SOE agents in London. If they didn’t feel up to the task, Marks himself composed many poems and kept them locked away, and could provide one for any agent who needed it. One of these poems later became famous in its own right. “The Life That I Have” was issued to Violette Szabo, who was eventually captured and executed. The poem gained prominence when included in a movie about Szabo, and again later when it was read at Chelsea Clinton’s wedding.


So how did this system work? There were two distinct phases: key generation and encryption.

First, the agent would randomly select five words from their poem. They could do this by flipping coins, rolling dice, or any other similar method. Once selected, the agent needed to indicate to the recipient which words were chosen. To do this, they would send an “indicator group” of five letters, where each letter indicated the position of a key word in the poem. The first word added an “A” to the indicator group, the eighth an “H”, and so forth.

Let’s work an example as we go. Agent X has “The Jabberwocky” as his poem:

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe.
All mimsy were the borogoves,
And the mome raths outgrabe. 

For our key we’ll pick “the, all, mome, gyre, and ‘twas.” The indicator group, then, is "DNUHA".
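A little sketch of the indicator derivation (I'm assuming the first occurrence of a repeated word like “the” is what counts, which matches the example):

```python
# Indicator group: each chosen word's position in the poem, as a letter
# (1st word -> A, 2nd -> B, ...).
poem = ("'Twas brillig, and the slithy toves "
        "Did gyre and gimble in the wabe. "
        "All mimsy were the borogoves, "
        "And the mome raths outgrabe.")
words = [w.strip(",.'").lower() for w in poem.split()]

chosen = ["the", "all", "mome", "gyre", "twas"]
indicator = "".join(chr(ord("A") + words.index(w)) for w in chosen)
print(indicator)  # "DNUHA"
```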

After selecting the key words, the agent would write them all together as one word, and number the letters, starting with “A”. So the first A would be number 1, then if there was a second A that would be number 2. Then Bs would be numbered, then Cs, etc. Once the entire key word is numbered, those numbers themselves become the encryption key.

Our key word is therefore "THEALLMOMEGYRETWAS". Numbering first the As, we get:

      1                         2

There are no Bs, Cs, or Ds, so the next letter up is E:

    3 1           4       5     2

Continue until everything is numbered:

T  H E A L L M  O  M  E G Y  R  E T  W  A S
15 7 3 1 8 9 10 12 11 4 6 18 13 5 16 17 2 14
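This numbering step is mechanical, so here's a short sketch of it in Python:

```python
# Number the key letters alphabetically, breaking ties left-to-right.
def transposition_key(keyword):
    order = sorted(range(len(keyword)), key=lambda i: (keyword[i], i))
    key = [0] * len(keyword)
    for rank, i in enumerate(order, start=1):
        key[i] = rank
    return key

print(transposition_key("THEALLMOMEGYRETWAS"))
# [15, 7, 3, 1, 8, 9, 10, 12, 11, 4, 6, 18, 13, 5, 16, 17, 2, 14]
```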

After generating the key, and communicating its elements with the indicator group, the agent must begin the actual encryption of the message. First, write the key across several columns. Then the plaintext message is written left-to-right underneath the key, one letter at a time in each key column. Once written out, the letters are read back by going down each column, in the order of the key numbers over the columns.

Continuing the example from above, we write the key out and then copy the message plaintext below the key, one letter per column:

 T  H E A L L M  O  M  E G Y  R  E T  W  A S
 15 7 3 1 8 9 10 12 11 4 6 18 13 5 16 17 2 14
 I  h a v e d e  p  o  s i t  e  d i  n  t h
 e  c o u n t y  o  f  B e d  f  o r  d  a b
 o  u t f o u r  m  i  l e s  f  r o  m  B u
 f  o r d s i n  a  n  e x c  a  v a  t  i o
 n  o r v a u l  t  s  i x f  e  e t  b  e l
 o  w t h e s u  r  f  a c e  o  f t  h  e g
 r  o u n d t h  e  f  o l l  o  w i  n  g X

NOTE — I’ve added a null character (“X”) to the end of the message to ensure that the plaintext fills out every column for all rows. In practice, this probably won’t happen often, but it makes things very easy for this example. See other, much better written explanations of columnar transposition for dealing with such situations.

Then, to encrypt, first read down each column, starting with number 1:

(col 1): vufdvhn 
(col 2): taBieeg 
(col 3): aotrrtu ....

Or, for the entire message:

vufdvhn taBieeg aotrrtu sBleiao
dorvefw ieexxcl hcuoowo enosaed
dtuiust eyrnluh ofinsff pomatre
effaeoo hbuolgx Ieofnor iroatti
ndmtbhn tdscfel

But we’re not done! Do it a second time:

 T  H E A L L M  O  M  E G Y  R  E T  W  A S
 15 7 3 1 8 9 10 12 11 4 6 18 13 5 16 17 2 14
 v  u f d v h n  t  a  B i e  e  g a  o  t r
 r  t u s B l e  i  a  o d o  r  v e  f  w i
 e  e x x c l h  c  u  o o w  o  e n  o  s a
 e  d d t u i u  s  t  e y r  n  l u  h  o f
 i  n s f f p o  m  a  t r e  e  f f  a  e o
 o  h b u o l g  x  I  e o f  n  o r  i  r o
 a  t t i n d m  t  b  h n t  d  s c  f  e l

and assemble the new ciphertext as before, reading down columns 1, then 2, then 3, etc… :

dsxtfui twsoere fuxdsbt Booeteh
gvelfos idoyron utednht vBcufon
hllipld nehuogm aautaIb ticsmxt
eronend riafool vreeioa aenufrc
ofohaif eowreft
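Both passes are the same operation, so a short script covers the whole encryption. A sketch, with the key and plaintext taken straight from the worked tables above:

```python
# Double columnar transposition: write the text row-by-row under the key,
# read whole columns in key-number order; then do it all again.
KEY = [15, 7, 3, 1, 8, 9, 10, 12, 11, 4, 6, 18, 13, 5, 16, 17, 2, 14]

def columnar(text, key):
    n = len(key)
    rows = [text[i:i + n] for i in range(0, len(text), n)]
    cols = sorted(range(n), key=lambda i: key[i])
    return "".join("".join(row[c] for row in rows) for c in cols)

pt = ("Ihavedepositedinth"  # the 7 rows of the grid above
      "ecountyofBedfordab"
      "outfourmilesfromBu"
      "fordsinanexcavatio"
      "norvaultsixfeetbel"
      "owthesurfaceoftheg"
      "roundthefollowingX")

ct1 = columnar(pt, KEY)   # first pass:  "vufdvhntaBieeg..."
ct2 = columnar(ct1, KEY)  # second pass: "dsxtfuitwsoere..."
print(ct2[:14])
```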

Finally, it’s typical to break the message up into five-character groups. Don’t forget to add the 5-character indicator group at the beginning of the message, or the recipient won’t be able to regenerate the message key (unless, of course, you’ve already made arrangements to transmit this information via a different channel). This, then, is the final message:

dnuha dsxtf uitws oeref uxdsb tBooe tehgv elfos idoyr 
onute dnhtv Bcufo nhlli pldne huogm aauta Ibtic smXte 
ronen driaf oolvr eeioa aenuf rcofo haife owref t 

That’s what you’d then hand off to the radio operator, who’d broadcast it to London at scheduled times and frequencies. (Of course, since it’s going out over Morse code, capitalization doesn’t survive. I just left it in here because it’s helpful to see how the letters get scrambled.)


Decrypting works similarly. First, generate the key, based on knowing the sender’s poem code and reading the indicator group at the start of the message. In our example, using the agent’s poem code “The Jabberwocky” and the indicator group "DNUHA" means the key words are "the all mome gyre twas." Put those together, number the columns as before, and you’ve got the numeric key.

Write the key out across multiple columns, then fill the message in, by columns, starting with 1. That is, in the example, write "dsxtfui" under column 1, "twsoere" under column 2, etc. This will eventually give the 2nd table above. Read the intermediate text out by rows, starting at the top (vufdvhnt….). Write that, downwards by columns, into a new grid with the same key row. Now you should have regenerated the first table above, and can simply read the plaintext back out row-by-row.


So, is this exactly how the SOE agents used poem codes? I don’t know for certain, but if anyone can point me to a very solid reference that’d be greatly appreciated. In particular, I’m uncertain whether they used the same key for both encryption steps, or if they removed duplicate words from their poems. I think it’s probably pretty close, from what I read in Marks’ book. Really, the hardest part is documenting how the key generation phase worked.

The key generation could use any of a number of different approaches. In fact, it’s even possible that many of these approaches were all used, with each agent having their own unique variation. This would certainly have added to the security of the system, at the expense of more complexity on the London end of the communications (having to track and associate each particular method with each agent).

Here are some ways they could have mixed up the key generation, just off the top of my head:

Really, there are an infinite number of ways to create the key. From what I’ve read, I think the method presented here is the simplest, and most straightforward, but that doesn’t mean it’s historically accurate.

A slightly different approach to key generation is described here and also here. This differs a bit from details in Marks’ book, but incorporates extra steps to improve security, including agent-specific offsets and signals to indicate duress. For example, using the poem “Mary Had A Little Lamb”:

Let’s assume that the letters chosen are PQRSTU, the odd letters furnish the first ‘key’ and the even letters the second. In our example PRT points to ‘WENT LAMB SURE’ as the first ‘key’. For the second we use QSU so it’s ‘THE WAS TO’. The indicator showing which words were used as ‘keys’ will be PRT filled with two nulls so as to form a 5-letter group (all messages were sent in 5-letter groups), so let’s say PARNT and the final step is to move all the letters forward by using the agents’ secret number. For instance if the number was 45711 then in our example PARNT will change into TFYOU, as each letter moves forward as many positions as indicated by the secret number P+4=T, A+5=F, R+7=Y, N+1=O, T+1=U.
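That final offset step is easy to check in code. A sketch of the letter-shifting described in the quote (the function name is mine):

```python
# Shift each indicator letter forward by the matching digit of the
# agent's secret number (wrapping past Z).
def offset(indicator, secret):
    return "".join(
        chr((ord(c) - ord("A") + int(d)) % 26 + ord("A"))
        for c, d in zip(indicator, secret)
    )

print(offset("PARNT", "45711"))  # "TFYOU", as in the example above
```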

Note that Marks doesn’t say anything about six consecutive letters. On the contrary in his book page 324 he says ‘every poem code message began with a five letter indicator group to show which five words of the poem had been used’.

Ultimately, as long as there’s a repeatable, deterministic method for creating a reasonably random transposition key, and an easy way to transmit the parameters to generate that key, it doesn’t matter what methods you use. In fact, later in the war they dropped poems altogether and used one-time, pre-shared transposition keys that the agent would tear off a silk sheet. They also went all the way to one-time pads in some situations, also printed on silk.


Though there are an infinite number of ways that the SOE agents could have derived their encryption keys, I’m inclined to think that the simpler method was what was used. And with luck, I’ve made the mechanics clear here. Even if I’m not close enough that you could actually decode real SOE intercepts, hopefully you’ve got a good idea for the complexity of the system and some of the challenges the agents faced. I certainly recommend reading the book…even if it sometimes becomes a bit sensational, even incredible, I’m certain that it’s as close to the truth as the general public will ever know.

Put away the tin-foil: The Apple unlock case is complicated enough

Apple and the FBI are fighting. The {twitter, blog, media}-‘verses have exploded. And FUD, confusion, and conspiracy theories have been given free rein.

Rather than going into deep technical detail, or pontificating over the moral, legal, and ethical issues at hand, I thought it may be useful to discuss some of the more persistent misinformation and misunderstandings I’ve seen over the last few days.


On February 16, 2016, Apple posted A Message to Our Customers, a public response to a recent court order, in which the FBI demands that Apple take steps to help them break the passcode on an iPhone 5C used by one of the terrorists in the San Bernardino shooting last year.

All this week, Twitter, and blogs, and tech news sites, and mainstream media have discussed this situation. As it’s a very complex issue, with many subtle aspects and inscrutable technical details, these stories and comments are all over the map. The legal and moral questions raised by this case are significant, and not something I’m really qualified to discuss.

However, I am comfortable ranting at a technical level. I’ve already described this exact problem in A (not so) quick primer on iOS encryption (and presented a short talk at NoVA Hackers). One of the best posts particular to the current case can be found at the Trail of Bits Blog, which addresses many of the items I discuss here in more technical detail.

Apple (and many others) have been calling this a “back door,” which may or may not be an over-statement. It’s certainly a step down a slippery slope, whether you consider this a solution to a single-phone case, or a general solution to any future cases brought by any government on the planet. But, again, I’m not interested in discussing that.

Emotions are high and knowledge is scarce, which leads to all kinds of crazy ideas, opinions, and general assumptions being repeated all over the internet. I’m hoping that I can dispel some of these, or at least reduce the confusion, or at a bare minimum, help us to be more aware of what we’re all thinking, so that we can step back and consider the issues rationally.

Technical Overview

First, a very high-level description of how one unlocks an iPhone. This is a very complicated system, and the blog posts (and slide deck) that I linked to above provide much better detail than what I’ll go into here. But hopefully this diagram and a short bullet list can give enough detail that the rest of the post will make at least some sense.

Simplified passcode logical flow

To unlock an iPhone (or iPad or iPod Touch):

In later devices (iPhone 5S, iPad 3, and later), the management of the bad guess counter and timeout delays is handled by the Secure Enclave (SE), another processor on the SoC with its own software.

So, if you want to unlock an iPhone, but don’t know the passcode, how can you unlock it?

The last is the easiest, but Apple didn’t make it that easy. To modify the operating system on the device, you first have to defeat the default full-disk-encryption, which is also based on the UID (and beyond the scope of this post), so we’re back to a microscope attack.

Or you could boot from an external hard drive. Unfortunately for hackers and law enforcement (but good for iOS users), the iPhone won’t boot from just any external drive. The external image has to be signed by Apple.

This is exactly what the FBI is asking Apple to do (and, incidentally, a boot ROM bug in iPhone 4 and earlier allowed hackers to do this too, which is how we know it’s possible). The basic approach is this:

Beginning with the iPhone 5S, some of the passcode processing functionality moved into the Secure Enclave, so this attack would need to be modified to remove the lockouts from the SE as well.

Note that these methods still require the passcode to be brute-forced. For a 4-digit number, that can happen in as little as 15 minutes, but for a strong passcode, it can take days, months, or even years (or centuries). So even with a signed boot image, this attack is far from a silver bullet.


That’s (basically) the attack that Apple is being asked to perform. Now to address some of the more confusing points and questions circulating this week:

Just crack the passcode on a super fast password cracking machine! That can’t be done, because the passcode depends on the Unique ID (UID) embedded within the SoC. This UID cannot be extracted, either by software or by electronic methods, so the password-based keys can never be generated on an external system. The brute-force attack must take place on the device being targeted. And the device takes about 80 milliseconds per guess.
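To put that 80 millisecond figure in perspective, here’s a quick back-of-the-envelope calculation (the passcode formats below are just illustrative examples, not any particular device’s policy):

```python
# Back-of-the-envelope worst-case brute-force times, assuming
# ~80 ms per on-device passcode guess.
GUESS_TIME_S = 0.080

def worst_case_seconds(keyspace: int) -> float:
    """Seconds to exhaust an entire passcode keyspace on-device."""
    return keyspace * GUESS_TIME_S

for label, space in [
    ("4-digit PIN", 10**4),
    ("6-digit PIN", 10**6),
    ("8-char alphanumeric", 62**8),
]:
    secs = worst_case_seconds(space)
    print(f"{label:20s} ~{secs / 3600:,.1f} hours")
```

A 4-digit PIN falls in well under an hour, while a strong alphanumeric passcode pushes the worst case out past any useful timescale.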

Look at the BUGS, MAN! Yes, iOS has bugs. Sometimes it seems like a whole lot of bugs. Every major version of iOS has been jailbroken. But all these bugs depend on accessing an unlocked device. None of them help with a locked phone.

What about that lockscreen bypass we saw last {week / month / year}? These bypasses seem to pop up with distressing frequency, but they’re nothing more than bugs in (essentially) the “Phone” application. Sometimes they’ll let you see other bits of unencrypted data on a device, but they never bypass the actual passcode. The bulk of the user data remains encrypted, even when these bugs are triggered.

But {some expensive forensics software} can do this! Well, maybe it can, and maybe it can’t. Forensics software is very closely held, and some features are limited to specific devices, and specific operating system versions. One system that got some press last year exploited a bug in which the bad guess counter wasn’t updated fast enough, and so the system could reboot the phone before the guess was registered, allowing for thousands of passcode guesses. (Also, as far as we know, all those bugs have been fixed, so this only works with older versions of iOS).

If Apple builds this, then Bad Guys (or the FBI, which to some may be the same thing) can use this everywhere! Well, not necessarily. Apple could put a check in the external image that verifies some unique identifier on the phone (a serial number, ECID, IMEI, or something similar). Because this would be hard-coded in a signed boot image, any attempts to change that code to work on a different phone would invalidate the signature, and other phones would refuse to boot. (What is true, though, is that once Apple has built the capability, it would be trivial to re-apply it to any future device, and they could quickly find themselves needing a team to unlock devices for law enforcement from all around the world…but that goes back into the cans of worms I’m not going to get into today).
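As a purely hypothetical sketch of what such a hard-coded device check might look like — the identifier value and function names here are made up for illustration, and are certainly not Apple’s actual code:

```python
# Hypothetical device-lock check baked into a signed boot image.
# Because AUTHORIZED_ECID is part of the signed code, editing it to
# target a different phone would invalidate the image's signature,
# and other phones would refuse to boot the image.
AUTHORIZED_ECID = 0x12345678ABCD  # made-up value, hard-coded at build time

def read_device_ecid() -> int:
    """Stand-in for reading the chip's unique ECID from hardware."""
    return 0x12345678ABCD  # this sketch pretends we're on the target device

def device_check() -> bool:
    """Refuse to proceed unless running on the one authorized device."""
    return read_device_ecid() == AUTHORIZED_ECID
```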

NSA. ‘Nuff said. Who knows? (more on that below)

What about the secret key? Isn’t it likely that the Advanced Persistent Threat has it anyway? If the secret key has been compromised, then, yeah, we’re back to the state we were with iPhone 4 and hacker-generated boot images. But the attacker still needs to brute force the passcode on the target device. And, frankly, if that key has leaked, then Apple has far, far bigger problems on their hands.

Later devices which use the Secure Enclave are safe!! Possibly. Possibly not. I see a few possibilities here:

Can anyone outside of Apple do this?

So, what about the NSA? Or China? Or zero-day merchants? Surely they have a way to do this, right?

We don’t know.

If there’s a way to do this, it would require one (or more) of the following:

All of these are theoretically possible, but none seem terribly likely (other than an OS-level bug on older devices). In fact, the most reasonable (and least disturbing) possibility, to me, is the direct physical attack on the chip. Even that might be preventable, but I Am Not A Chip Designer and could only speculate on how.

Bottom Line

There’s still a lot we don’t know. In fact, I think this list is pretty much what I wrote in 2014:

Much of this we’ll never know, unless Apple explicitly tells us in a new update to their iOS Security Guide. And even then, we’ll probably only have their word for it, because many of these questions can’t be independently verified without that trusted external boot image.

Mobile App Authentication using TouchID and Tidas

Yesterday, the information security company Trail of Bits announced a new service, called Tidas. The service is intended to make it easy for developers to include a password-free authentication experience in mobile apps on the iOS platform. They’ve provided some sample code and a developer Guide / FAQ, and I’ve spent some time looking at it to try and understand how it works. Here are my first impressions.

NOTE: I haven’t actually looked at the full protocol running “in the wild” yet, so it’s quite possible I haven’t fully grokked the system. Take this with a grain of salt. I’ll try to update any egregious misunderstandings, as I become aware of them.

The heart of the Tidas system is a new feature, introduced in iOS 9, which allows for a public / private keypair to be split on an iOS device, with the private key hidden, inaccessibly, in the Secure Enclave. This feature was described in Session 706: Security and Your Apps at the 2015 WWDC (the relevant content begins about 46 minutes into the presentation, at slide 195). In this usage, the private key is never visible to the application, and can in fact never leave the Secure Enclave, even for device backups. The application can send data to the Secure Enclave, with a request to have it signed by the designated private key. The device prompts the user to authenticate with their fingerprint, and if the fingerprint matches, then the private key signs the data and returns the result to the application.

To enroll, the user must first authenticate somehow with the remote service. If their account already exists, they’ll need to log into the service, using a password, 2-factor login, or whatever other mechanisms the application provides. If this is their first time using the service, then no passwords are necessary and their enrollment is simply part of the onboarding process. The device then creates the public / private key pair (using elliptic curve P-256), and sends the public key to the server, which associates it with the user’s account. Future requests are associated with the user by a separate identifier (userid, for example).

Later, when the user wants to log into the account again, the application creates a new request. The documentation I read didn’t seem to indicate that it uses a challenge / response format, but instead, that the application creates its own message to sign. The application appends the current timestamp to the message, and sends a hash of the (message + timestamp) to the Secure Enclave. The phone then prompts the user for their fingerprint, signs the hash, and returns the signature to the application. The inclusion of the timestamp helps protect against replay attacks using old authentication requests.

The final message sent to the Tidas server, then, includes basic header information, the new message being signed, a timestamp, the SHA1 hash of (message + timestamp), and the signature of that hash.
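As a rough sketch of that structure (the field names and layout are my guesses, not the actual Tidas wire format, and the Secure Enclave signing step is stubbed out as a callback):

```python
import hashlib
import time

def build_login_blob(message: bytes, sign_with_enclave) -> dict:
    """Assemble a Tidas-style login request.

    `sign_with_enclave` stands in for the Secure Enclave call that
    prompts the user for their fingerprint and signs the digest with
    the device's hidden private key.
    """
    timestamp = str(int(time.time())).encode()
    digest = hashlib.sha1(message + timestamp).digest()
    return {
        "message": message.hex(),
        "timestamp": timestamp.decode(),
        "sha1": digest.hex(),
        "signature": sign_with_enclave(digest).hex(),
    }
```

The server can then recompute the SHA1 over (message + timestamp) itself and verify the signature against the public key it stored at enrollment.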

Tidas Login Data

The server uses the userid (or other identifying information) to look up the valid user, and validates the signature of the message. Then, the server returns a session token to the application, which allows the user to continue using the app without needing to re-authenticate for every action. The duration of this session token is also left up to the developer.

In practice, the mobile application will likely be communicating with the application’s server, which uses middleware components to correctly identify the user and pass the userid and the authentication message to the Tidas service, which then responds with a thumbs-up or thumbs-down for the request.

All in all, I think it’s an interesting system. I very much like the fact that it’s using a full-on public / private key system, and especially that the private key is completely inaccessible to users and attackers alike. This neatly avoids one of the primary problems with other authentication systems: compromise of user credentials when servers are hacked. There’s no password on the server to crack, and no “password equivalent” (like a hash or long-lived secret) that can just be extracted and used by an attacker (no “pass the hash” attack).

I’m a little concerned that the message is self-created, though this does eliminate a client-server round trip. I think it may be wise to set some basic standards, or at least very strong recommendations, for the content and format of that validation message. (It’s also possible that such recommendations exist and I just missed them on my first read-through). The use of the timestamp inside the signature should also help to mitigate this concern. Also, it would be nice if the session token was used more like an OAuth access token, signing each request individually, though I suppose there’s no reason that can’t be implemented at the application level.

There are still other problems that Tidas won’t directly improve: adding the service to existing accounts, enrolling additional devices to the same account, and dealing with a lost device or password, all of which have proven to be weak points in most authentication systems. Finally, this is only available on newer iOS devices with TouchID, though I would expect that it could be supported on other platforms with similar capabilities.

In some ways, Tidas feels similar to the FIDO U2F system, which also utilizes public / private key signatures, but relies on dongles, doesn’t utilize fingerprint verification, and has a more strictly-defined protocol.

I’m excited to see this new service, and hope to see it (and similar systems) move forward.

UPDATE I had a nice chat with one of the authors of Tidas, who clarified that the user’s public key is only sent during initial enrollment, and not for subsequent login requests. A separate userid (not included within the Tidas login blob) is used to associate that signed message with an individual user’s account. I’ve updated the details above to reflect this new information.

Blizzard of 2016 Time-lapse

For the last several years, we’ve tried to keep a big “snow stick” out on our deck to capture images of big snowfalls. In particular, the winter of 2009-2010 was exceptional for this, with no fewer than 3 very large storms in our area (including the crazy storm which happened at ShmooCon 2010). That storm dumped nearly 30” over two days at Dulles Airport, just a few miles away from our house.

Today, we’re getting a storm that promises to rival or exceed that storm, with the Capital Weather Gang calling for as much as 40 inches in the “best case” scenario (or worst case, depending).

Capital Weather Gang prediction

So I had to update the snow stick, which previously topped out at 32” or so. It now reaches a full 50”. No way we can break that (if we do, the deck will probably collapse anyway and it won’t matter). I tweeted a picture of the snow stick yesterday, and almost immediately was challenged to post a time-lapse video. Which I thought “no way,” then “maybe,” then “Oh, wait, if I do this….” After a couple hours of playing, I had an old Canon Digital Rebel running off an external power supply, with a Raspberry Pi triggering photos every minute and downloading them to the local SD card. But it crashed after about 5 minutes. I spent some hours last night trying to figure out what was up, but couldn’t make it work — the link between the rPi and the camera just stops working after a few photos, whether I use the really cool script I found or just manually capture images.

After conceding defeat on the SLR front, I thought, maybe I could find an iOS app to do this. There must be one. And, sure enough, a few minutes of searching led me to TimeLapse, by xyster.net. I grabbed an iPad 3 from my drawer of crazy old iOS devices, installed it, and figured out how to get it established in the window. At first, I planned to simply tape it to the window, but then the image was framed all wrong (it had to be located above the expected snow line, if I’m to get anything). But I realized that it’d just barely fit on the frame of the lower sash, and so it was off to the scrap pile to make a little shelf.

Pretty sure I can remove this when I'm done

It’s actually screwed into the sash (I’m sure we’ll never notice the holes once it’s gone), though I got a little nervous when doing so that I didn’t drive the screws all the way into the windowpane. Another small strip provides a ridge to keep the iPad from falling off. Just below, you can see the edge of an LED strip I had lying around… I cut it in half, linked the two halves together, and taped them to the window, facing outwards. When we tried this light last night, it was strong enough that the snow stick cast a shadow, so hopefully that’ll be enough to keep taking pictures overnight.

It's more secure than it looks. Barely.

The lights and the iPad are both plugged into a power strip resting on the window sill. (I should probably tape the power strip to the wall, or get a USB extender cable, so that it won’t pull the iPad down when it inevitably gets knocked off the sill). I’ll eventually move the strip onto a UPS, which should hopefully let me keep going even during a power failure. (We’re almost certainly going to lose power at some point…I just hope it doesn’t go for too long. There’s only so far I want to take this, you know…and we have an electric snowblower, so no power means sore back.)

Not long after tweeting the picture of the whole rig, someone joked about streaming the images, which was amusing, since I was in the middle of getting live images posted to this site anyway. I have a small Linux box (running on an old 1st generation Apple TV), which I’ve used as a local “photo dump” to sync pictures off my camera. I set up a cron job to rsync the TimeLapse app’s photos off the iPad (it’s a jailbroken device) and onto the Linux server; in between synchronization runs, it copies the most recent image here.
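The cron entry for that kind of sync looks roughly like this (the paths and host alias here are placeholders, not my actual setup):

```
# every 10 minutes, pull new photos from the jailbroken iPad to the photo dump
*/10 * * * *  rsync -a ipad:/private/var/mobile/TimeLapse/ /srv/timelapse/
```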

Current conditions on my deck (more or less) (updates every 10 minutes or so)

So far, it seems to be (mostly) working, but the app has stopped running twice already — once after only 5 minutes, and again after an hour or so. I don’t know if the device is getting an alert and popping out of the app, or if it’s because it’s jailbroken, or if it’s something else altogether. If I can figure out how to send a text message from my linux box, I can always have it alert me if the most recent sync doesn’t seem to have grabbed any new images. If I get that working, I’ll be sure to update here.

Hopefully I can work out these kinks and get a nice video…if it runs every 2 minutes, then that’s about 1 second per hour at 30 fps, so this’ll be a nice minute or two video once it’s all done.

Update Okay, I still don’t know why the app is crashing. It actually died while Andrea was looking right at it — took a picture, went black, returned to the iPad springboard. Dunno. I wrote a simple python script that looks for the most recent picture that’s been synced from the iPad, and if it’s more than 11 minutes old, it calls another script (oysttyer, a command-line Perl Twitter app), which sends me a DM on Twitter. Now I just have to make my phone make a really loud noise for DMs from that account so it’ll wake me up overnight, if the app needs restarting.

Update Update Looks like there’s some kind of memory problem that’s causing the app to reliably crash after an hour of use. However, since I’m able to detect the crash (well, the lack of updates) pretty easily, I’ve now added a remote restart. So whenever it crashes, the pictures get old, my script notices nothing new’s coming through, and it re-opens the app. Yay.

I’ve also stitched together the first 8 hours of video and put it up on YouTube. It gets a little dark towards the end — the LEDs help, but it’s still pretty dim out there, even with all the skyglow reflected in the snow. When it’s all done I’ll see what I can do to make the light levels more consistent across the whole video. Oh, and my brother created a Twitter account which simply scrapes the current image from the blog and tweets it.

DLP Considered Harmful - A Rant about Reliable Certificate Pinning


[Note: Yes, I understand the point of DLP. Yes, I’m being unrealistically idealistic. I still think this is wrong, and that we do ourselves a disservice to pretend otherwise.]

The Latest Craziness

It is happening again. A major computer manufacturer (this time Dell, instead of Lenovo) shipped machines with a trusted root TLS CA certificate installed in the operating system. Again, the private key was included with the certificate. So now, anyone who wants to perform a man-in-the-middle attack against users of those devices can easily do so.

Any domain, any site (Image by Kenn White (@kennwhite))

But as shocking as that may have been, what comes next may surprise you!

Browsers let local certs override HPKP

Data Loss Prevention and Certificate Pinning

It’s (reasonably) well known that many large enterprises utilize man-in-the-middle proxies to intercept and inspect data, even TLS-encrypted data, leaving their networks. This is justified as part of a “Data Loss Prevention” (DLP) strategy, and excused by “Well, you signed a piece of paper saying you have no privacy on this network, blah blah blah.”

However, I had no idea that browser makers have conspired to allow such systems to break certificate pinning. (and apparently I wasn’t the only one surprised by this).

HPKP Wrecked

Certificate pinning can go a long way to restoring trust in the (demonstrably broken) TLS public key infrastructure, ensuring that data between an end user and internet-based servers are, in fact, properly protected.

It’s reasonably easy to implement cert pinning in mobile applications (since the app developer owns both ends of the system — the server and the mobile app), but it’s more difficult to manage in browsers. RFC 7469 defines “HPKP”, or “HTTP Public Key Pinning,” which allows a server to indicate which certificates are to be trusted for future visits to a website.

Because the browser won’t know anything about the remote site before it’s visited at least once, the protocol specifies “Trust on First Use” (TOFU). (Unless such information is bundled with the browser, which Chrome currently does for some sites). This means that if, for example, the first time you visit Facebook on a laptop is from home, the browser would “learn” the appropriate TLS certificate from that first visit, and should complain if it’s ever presented with a different cert when visiting the site in the future, like if a hacker’s attacking your connection at Starbucks.
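For reference, an HPKP response header looks something like this (the pin values below are made-up placeholders, not real key hashes):

```
Public-Key-Pins: pin-sha256="cHJpbWFyeUtleUhhc2hQbGFjZWhvbGRlcjAwMDAwMDA=";
    pin-sha256="YmFja3VwS2V5SGFzaFBsYWNlaG9sZGVyMDAwMDAwMDA=";
    max-age=5184000; includeSubDomains
```

Each pin is the base64-encoded SHA-256 hash of a public key the site promises to use, and max-age tells the browser how long to remember them.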

But some browsers, by design, ignore all that when presented with a trusted root certificate, installed locally:

Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. "Data loss prevention" appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.

We deem this acceptable because the proxy or MITM can only be effective if the client machine has already been configured to trust the proxy’s issuing certificate — that is, the client is already under the control of the person who controls the proxy (e.g. the enterprise’s IT administrator). If the client does not trust the private trust anchor, the proxy’s attempt to mediate the connection will fail as it should.

What this means is that, even when a remote site specifies that a browser should only connect when it sees the correct, site-issued certificate, the browser will ignore those instructions when a corporate DLP proxy is in the mix. This allows the employer’s security team to inspect outbound traffic and (they hope) prevent proprietary information from leaving the company’s network. It also means they can see sensitive, personal, non-corporate information that should have been protected by encryption.

This Is Broken

I, personally, think that’s overstepping the line, and here’s why:

[ranty opinion section begins]

The employer’s DLP MITM inspecting proxy may be an untrusted third party to the connection. Sure, it’s trusted by the browser, that’s the point. But is it trusted by the user, and by the service to which the user is connecting?

Say, for example, a user is checking their bank account from work (never mind why, or whether that’s even a good idea). Does the user really want to allow their employer to see their bank password? Because they just did. Does the bank really want their customer to do that? Who bears the liability if the proxy is hacked and banking passwords extracted? The end user who shouldn’t have been banking at work? The bank? The corporation which sniffed the traffic?

A corporation has some right to inspect their own traffic, to know what’s going on. But unrelated third parties also have a right to expect their customers’ data to be secure, end-to-end, without exception. If this means that some sites become unavailable within some corporate environments, so be it. But the users need to be able to know that their data is secure, and as it stands, that kind of assurance seems to be impossible to provide.

Users aren’t even given a warning that this is happening. They’re told it could happen, when they sign an Acceptable Use Policy, but they aren’t given a real-time warning when it happens. They deserve to be told “Hey, someone is able to access your bank password and account information, RIGHT NOW. It’s probably just your employer, but if you don’t trust them with this information, don’t enter your password, close the browser, and wait until you get to a computer and network that you personally trust before you try this again.”

SSL Added And Removed Here

[end ranty section]

It’s Bigger Than Just The Enterprise

Unfortunately, it’s not just large corporations which are doing this kind of snooping. Just a few days ago, I was at an all-night Cub Scout “lock-in” event for my eldest son, at a local volunteer fire department. They had free Wi-Fi. Great! I’m gonna be here all night, might as well get some work done in the corner. Imagine my surprise when I got certificate trust warnings from host “”. The volunteer fire department was trying to MITM my web traffic.

Fortunately, they didn’t include any “click here to install a certificate and accept our Terms of Use” kind of captive portal, so the interception failed. If it had, I certainly wouldn’t have used the connection (and as it was, I immediately dropped it and tethered to my phone instead). But how many people would blindly accept such a certificate? How many “normal people” are putting their banking, healthcare, email, and social media identities and information at risk through such a system, every day? This sort of interception has been seen at schools, on airplanes, and many other places where “free” Wi-Fi is offered.

In my job, I frequently recommend certificate pinning as a vital mechanism to ensure that traffic is kept secure against any eavesdropper. Now, suddenly, I’m faced with the very real possibility that there’s no point, because we’re undermining our own progress in the name of DLP. Pinning can make TLS at least moderately trustworthy again, but if browsers can so easily subvert it, then we’re right back where we started.

Finally, though I’m not usually one to encourage tin foil hat conspiracy theories…with all the talk about companies taking the maximum possible steps to protect their users’ data, with iPhone and Android encryption and the government complaining about “going dark”… a DLP pinning bypass provides an easy way for the government to get at data that users might otherwise think is protected. Could the FBI, or NSA, or <insert foreign intelligence or police force> already be requesting logs from corporate MITM DLP proxies? How well is that data being protected? Who else is getting caught up in the dragnet?

Cognitive Dissonance FTW

On the one hand, we as an industry are:

But at the same time, we:

I think this is a lousy situation to be in. Who do we fight for? What matters? And how do we justify ourselves when we issue such contradictory guidance? How can we claim any moral high ground while fighting against government encryption back doors, when we recommend and build them for our own customers? How can our advice be trusted if we can’t even figure this out?

I hope and believe that in the long run, users and services will push back against this. (And, as I said at the beginning, I know that I’m probably wrong.) I suspect it will begin with the services — with banks, healthcare providers, and other online services wanting HPKP they can trust, corporate DLP polices be damned. Who knows, maybe this will be the next pressure point Apple applies.

When that happens, I just hope we can offer a solution to the data loss problem that doesn’t expect a corporation to become the NSA in order to survive.

Thoughts on CyberUL and Infosec Research

For the past year or so, I’ve been thinking about the information security research space. Certainly, with the mega-proliferation of security conferences, research is Getting Done. But is it the right kind of research? And is it of the right quality?

This has recently become a hot topic, since .mudge tweeted on June 29:

Goodbye Google ATAP, it was a blast.

The White House asked if I would kindly create a #CyberUL, so here goes!

We’ve also seen increased attention on Internet of Things, and infosec in general, from the “I Am The Cavalry” effort, and more recently, the expansion of research at Duo Labs and elsewhere.

So this seems like a good time to jot down some of my thoughts.

CyberUL and traditional research

CyberUL itself

First, the idea of an “Underwriter’s Laboratories” for infosec, or “CyberUL”: I think most people agree that it’s a good idea, at its core. John Tan outlined such a service back in 1999, and it’s been revisited many times since. However, many issues remain. I’m certainly not the first to bring these points up, but for the sake of discussion, here are some high-level problems.

For one thing, certifying (or in UL parlance, “listing”) products is difficult enough in the physical space, but even harder in CyberSpace. Software products are a quickly moving target, and it’s just not possible to keep up with all the revisions to product firmware, both during design and through after-sale updates.

Would a CyberUL focus on end-user products, such as the “things” we keep hooking up to the Internet, or would it also review software and services in general? What about operating systems? Cloud services?

Multiple certifications of one form or another already exist in this space. The Common Criteria, for example, is very thorough and formalized. It’s also complicated, slow, and very expensive to get. The PCI and OWASP standards set bars for testers to assess against, but the actual mechanisms of testing may not be consistent across (or even within) organizations.

Finally, there’s the question of how deep testing can go. Even with support from vendors, fully understanding some systems is a daunting undertaking, and comprehensive product evaluations may require significant resources.

Ultimately, I’m afraid that a CyberUL may suffer from many of the same problems that “traditional” information security testing faces.

So, what about traditional testing?

Much (if not most) testing is paid for by the product’s creator, or by some 3rd party company considering a purchase. The time and scope of such testing is frequently limited, which drastically curtails the depth to which testers can evaluate a product, and can lead to superficial, “checkbox” security reviews. This could be especially true if vendors wind up, to be honest, frantically checking the “CyberUL” box in the last month prior to product release.

Sometimes, testing can go much deeper, but ultimately they’re limited by whoever’s paying for it. If they’ll only pay for a 2-week test, then a 2-week test is all that will happen.

Maybe independent research is the answer?

There’s obviously plenty of independent research, not directly paid for by customers. However, because it’s not paid for…it generally doesn’t pay the testers’ bills in the long term.

Usually, this work comes out of the mythical “20%” time that people may have to work on other projects (or 10%, or 5%, or just “free time at night”). If research is a tester’s primary function, then that dedicated work is often kept private: its goal is to benefit the company, sell vulnerabilities, improve detection products, etc.

Firms which pay for truly independent and published research are vanishingly rare. Today’s infosec environment steers testers towards searching for “big impact” vulnerabilities, while also encouraging frequent repeats of well-trodden topics. I see very little research into “boring” stuff: process and policy, leading-edge technologies, general analysis of commodity products, etc.

What would I like to see done?

In an ideal world, with unlimited resources, what could a company focused on independent information security research accomplish?

Manage research

They could perform a research-tracking function across the community as a whole: Manage a list of problems in need of work, new and under-researched issues, longer-term goals, even half-baked pie-in-the-sky ideas.

The execution of this list of topics could be left open for others to take on, or worked on in-house (or even both — some problems will benefit from multiple, independent efforts, confirming or refuting one another’s results).

The company could even possibly provide funding for external research efforts: Cyber Fast Track reborn!

Perform original research

At its core, though, the company would be tasked with performing new research. They’d look at current products, software, and technology. The focus wouldn’t be simply finding bugs, but also understanding how these systems work. Too many products are simply “black boxes,” and it’s important to look under the hood, since even systems which are functioning properly can present a risk. How many of today’s software and cloud offerings are truly understood by those who sign off on the risks they may introduce?

We occasionally see product space surveys (for example, EFF’s Secure Messaging Scorecard). We need more efforts like that, with sufficient depth of testing and detailed publication of methods and results, as well as regular and consistent updates. Too often such surveys are completed and briefly publicized, generating a few sales for the company which performed them, and then totally forgotten.

I’d also like to see generalized risk research across product categories — for example, what kinds of problems do Smart TVs or phone-connected door locks create? I don’t mean a regular survey of Bluetooth locks (which might be useful in itself) but a higher-level analysis of the product space, and potential issues which purchasers need to be aware of.

Specific product testing could also be an offered service, provided that the testing permits very deep reviews without significant time limitations, and that the results, regardless of outcome, be published shortly after the conclusion of the effort (naturally, giving the vendor reasonable time to address any problems).

Information sharing

An important but currently underutilized function is “research about research.” The Infosec Echo Chamber (mostly Twitter, blogs, and a few podcasts) is great at talking about other research and findings, but not very good at critically reviewing and building upon that work.

We need more methodical reviews of existing work, confirming and promoting findings when appropriate, and correcting and improving the research where problems are discovered. Currently, those best able to provide such analysis are frequently busy with paying work, and so valuable insights are delayed or lost altogether.

Related to this is doing a better job of promoting and explaining research, findings, and problems, both within the community and also to the media in general. Another related function would be managing a repository, or at least a trusted index, of security papers, conference slides, and other such information.

Tracking broader industry trends

The Verizon Data Breach Investigation Report (DBIR) provides an in-depth annual analysis of data breaches. Could the same approach be used for, say, an annual cross-industry “Bug Report,” identifying and analyzing common problems and trends? [or really, any other single topic…I don’t know whether a report focused on bugs would be worthwhile.]

The DBIR takes a team of experts months to collect, analyze, and prepare — expanding that kind of report into other arenas is something that can’t be undertaken without a significant commitment. An organization dedicated to infosec research may be among the few able to identify the need for, and ultimately deliver, such tightly-focused reporting.

Shaping research in general

Finally, I (and many others, I believe) think that the industry needs a more structured and methodical approach to security research. An organization dedicated to research can help to develop and refine such methodologies, encouraging publication of negative findings as well as cool bugs, emphasizing the repeatability of results, and guaranteeing availability of past research. The academic world has been wrestling with this for decades, but the infosec community has only begun to transition from “quick and dirty” to “rigorous and reliable” research.

How can we do this?

These goals are difficult to accomplish under our current research model: A lack of dedicated time and a reliance on ad-hoc availability are just two of the biggest problems. Breadth, depth, and consistency of testing, and long-term availability of results, are among the other details we haven’t yet worked out.

A virtual team of volunteers might work, but they’d still be relying on stolen downtime (or after-hours work). Of course, they’d also have to worry about conflicts of interest (“Will this compete with our own sales?” and “Don’t piss off our favorite customer.” being two of my favorites.) Plus, maintaining consistency would be an issue, as team members drift in and out.

A bug-bounty kind of model might be possible, like the virtual team but even more ad-hoc (“Here’s a list of things we need to do. Sign up for something that interests you!”), and with predictably more logistical and practical problems.

Plus, for either virtual approach, you’d still need some core group to manage everything.

Ultimately, I think a non-profit company remains the only way to make this happen. This would allow the formation of a core, dedicated team of researchers and administrators. They could charge vendors for specific product tests, and possibly even receive funding from industry or government sources, though keeping such funding reliable year after year will probably be a challenge.

John Tan, author of the 1999 CyberUL paper, updated his thoughts earlier this month. A key quote, which I think drives to the heart of the problem:

"If your shareholder value is maximized by providing accurate inputs for decision making around risk management, then you're beholden only to the truth." 

Any company which can keep “Provide risk managers the best data, always” as a core mission statement, and live up to it, will, I think, be on the right track.

So, can this work?

I honestly don’t know.

There are many things our community does well with research, but a lot which we do poorly, or not at all. An independent company that can focus on issues like those I’ve described could have a significant positive impact on the industry, and on security in general. But it won’t happen easily.

According to John Tan’s initial paper, it took 30 years of insurance company subsidies before Underwriters Laboratories could reach a level of vendor-funded self-sufficiency. We don’t have that kind of time today. And the talent required to pull this off wouldn’t come cheaply (and, let’s face it, this is probably the kind of dream job that half the speakers at Black Hat would love to have, so competition would be fierce).

If anyone can run with this, my money would definitely be on Mudge. He’s got the knowledge, and especially the experience of running Cyber Fast Track, not to mention the decades of general information security experience behind him. But he’s definitely got his work cut out for him.

Hopefully he’ll come out of stealth mode soon. I’d love to see what we can do to help.

Salt as a Service: Interesting approach to hashing passwords

A new service was just announced at the RSA conference that takes an interesting approach to hashing passwords. Called “Blind Hashing,” from TapLink, the technology is fully buzzword-compliant, promising to “completely secure your passwords against offline attack.” Pretty grandiose claims, but from what I’ve been able to see in their patent so far, it seems like it has some promise. With a few caveats.

Traditionally, passwords are hashed and stored in place. First we had the Unix crypt() function, which, though it was specifically designed to be “slow” on systems at the time, is now hopelessly outdated and should be killed with fire at every opportunity. That gave way to unsalted MD5-based hashes (also a candidate for immediate incendiary measures), salted SHA hashes, and today’s state-of-the-art functions bcrypt, scrypt, and PBKDF2. The common goal throughout this progression of algorithms has been to make the hashing function expensive, in either CPU time or memory requirements (or both), thus making a brute force attack to guess a user’s password prohibitively slow.
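To make the “expensive hash” idea concrete, here’s a minimal sketch of salted password hashing using PBKDF2-HMAC-SHA256 from Python’s standard library. The function names and the 600,000-iteration count are my own illustrative choices, not anything from TapLink or a specific site:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # tune so one hash takes tens of milliseconds on your hardware


def hash_password(password, salt=None):
    """Derive a slow, salted hash of a password with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password, salt, expected):
    """Re-derive the hash from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

The per-user random salt defeats precomputed rainbow tables, and the iteration count is the knob that makes each brute-force guess expensive.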

So far, we seem to have accomplished that goal, but a downside is that a slow hash is still, well, slow. Which can potentially add up, when you’ve got a site that processes huge numbers of logins every day.

The “Blind Hashing” system takes a different approach. Rather than handling the entire hash locally, the user’s password is, essentially, hashed a second time using data from a cloud-based service. Here’s an excerpt from the patent summary:

A blind hashing system and method are provided in which blind hashing is used for data encryption and secure data storage such as in password authentication, symmetric key encryption, revocable encryption keys, etc. The system and method include using a hash function output (digest) as an index or pointer into a huge block of random data, extracting a value from the indexed location within the random data block, using that value to salt the original password or message, and then hashing it to produce a second digest that is used to verify the password or message, encrypt or decrypt a document, and so on. A different hash function can be used at each stage in the process. The blind hashing algorithm typical runs on a dedicated server and only sees the digest and never sees the password, message, key, or the salt used to generate the digest.

Thinking through the process, here’s one way this might work, put into a more functional notation:

Salt1 = salt_lookup(Userid)
Hash1 = Hash(Salt1, Password)
Salt2 = remote_blind_hash_lookup(Hash1)
Hash2 = Hash(Salt2, Hash1)
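The four steps above can be sketched in Python. This is a toy simulation under my own assumptions, not TapLink’s implementation: the real “huge block of random data” lives on their servers and is deliberately too large to steal, while here a 1 MiB block stands in for it, and SHA-256 stands in for whatever hash functions they actually use:

```python
import hashlib
import os

# Toy stand-in for the service's "huge block of random data" -- the real
# database is deliberately far too large to exfiltrate.
DATA_BLOCK = os.urandom(1 << 20)  # 1 MiB, for illustration only
SALT2_LEN = 32


def blind_hash_lookup(hash1):
    """Use the first-stage digest as an index into the random block and
    return the bytes found there as the second-stage salt (Salt2)."""
    index = int.from_bytes(hash1, "big") % (len(DATA_BLOCK) - SALT2_LEN)
    return DATA_BLOCK[index:index + SALT2_LEN]


def blind_hash_password(password, salt1):
    """Hash1 = Hash(Salt1, Password); Salt2 = lookup(Hash1); Hash2 = Hash(Salt2, Hash1)."""
    hash1 = hashlib.sha256(salt1 + password.encode()).digest()
    salt2 = blind_hash_lookup(hash1)  # a remote service call in the real system
    return hashlib.sha256(salt2 + hash1).digest()
```

Note that the service only ever sees Hash1, never the password or Salt1, which matches the patent’s claim that the blind hashing server “never sees the password, message, key, or the salt.”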

In the event of a compromise on the server, the attacker may recover all the Salt1 and Hash2 values. However, they will not be able to retrieve Salt2 without the involvement of the remote blind hash service. So a brute force attack will require cycling through all possible passwords and, for each password tested, requesting Salt2 from the remote service. This should, in theory, be significantly slower than a local hash / salt computation, and can also be rate-limited at the service to further protect against attacks.

On its surface, this seems a pretty solid idea. The second salt is deterministically derived from the first hash, but not in an algorithmic manner, so there isn’t a short-circuit that allows for immediate recovery of the salt. The database used to store Salt2 values is too large to be copied by an attacker. And the round trip process is (presumably) too slow to be practical for a brute force attack. Finally, the user’s password isn’t actually sent to the blind hash lookup service, only a hash of the password (salted with a value that is not sent to the service).

An attacker who compromises the (website) server gains only a collection of password hashes that are uncrackable without the correct password and the cooperation of the blind hash service. If they are able to collect all blind hash responses, they could build a dictionary of secondary salts to use in brute force attacks, but that would still be very slow (for a large site), as each password tested would be multiplied by the length of this secondary salt list. (Of course, if they can intercept the blind hash response data, then the attacker can probably also intercept the initial login process and just grab the passwords in plaintext.) Finally, an attacker who compromises the blind hash service gains access to a database too large to exfiltrate, and to an inbound stream of passwords hashed with unknown salts.

So in theory, at least, I can’t see anything seriously wrong with the idea.

But is it worth it? The only argument I’ve heard against “slow” hash algorithms like bcrypt or scrypt is that they may present too big a load for busy sites. But wouldn’t the constant communication with the blind hash service also present a fairly large load, both for CPU and especially for network traffic? What happens if the remote service goes down, for example, because of a DDoS attack, or network problems? This service protects against future breakthroughs that make modern hash algorithms easy to brute force, but I think we already know how to deal with that eventuality.

I think the biggest problem we have today, with regards to securely hashing passwords, isn’t the technology available, but the fact that sites still use the older, less secure approaches. If a site cares enough to move to a blind hash service, they’d certainly be able to move to bcrypt. If they haven’t already moved away from MD5 or SHA hashes, then I really don’t see them paying for a blind hashing service, either.

In the end, though I think it’s a very interesting and intriguing idea, I’m just not sure I see anything to recommend this over modern bcrypt, scrypt, or PBKDF-based password hashes.

Lenovo, CA Certs, and Trust

It’s been a fun week for information security: @yawnbox - A Bad Week

Arguably one of the more interesting developments (aside from the SIM thing, which I’m not even going to touch) was the decision by Lenovo to pwn all of their customers with a TLS Man-In-The-Middle attack. The problem here was two-fold: That Lenovo was deliberately snooping on their customers’ traffic (even “benignly,” as I’m sure they’re claiming), and that the method used was trivial to put to malicious use.

Which has me thinking again about the nature of the Certificate Authority infrastructure. In this particular case, Lenovo laptops are explicitly trusting sites signed with a private key that’s now floating around in the wild, ready to be abused by just about anyone. But it’s more than just that — our browsers are already incredibly trusting.

On my Mac OS X Yosemite box, I count (well, the Keychain app counts, but whatever) 214 different trusted root certificate authorities. That means that any website signed by any of those 214 authorities…or by anyone those authorities have delegated as trustworthy…or by anyone those delegates have trusted…will be trusted by my system.

That’s great, if you trust the CAs. But we’ve seen many times that we probably shouldn’t. And even if you do trust the root CAs on your system, there are other issues, such as a corporation or wifi provider prompting the user to install a custom MITM CA cert. (Or just MITMing the connection without even bothering with a real cert.)

I’ve been trying to bang the drum on certificate pinning for a while, and I still think that’s the best approach to security in the long run. But there’s just no easy way for end users to handle it at the browser level. Some kind of “Trust on First Use” model would seem to make sense, where the browser tracks the certificate (or certificates) seen when you first visit a site, and warns if they change. Of course, you have to be certain your connection wasn’t intercepted in the first place, but that’s another problem entirely.
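The “Trust on First Use” model described above can be sketched in a few lines. This is a hypothetical illustration of the bookkeeping a browser would do, with my own function names and an in-memory dict standing in for a persistent pin store; a real implementation would persist fingerprints to disk and handle legitimate certificate rotation:

```python
import hashlib

# Hypothetical pin store: hostname -> SHA-256 fingerprint of the
# DER-encoded certificate seen on the first visit.
pin_store = {}


def check_certificate(hostname, der_cert):
    """Trust-on-first-use: pin the cert on first sight, warn if it changes."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    known = pin_store.get(hostname)
    if known is None:
        pin_store[hostname] = fingerprint  # first visit: trust and record
        return "pinned"
    if known == fingerprint:
        return "ok"
    return "WARNING: certificate changed"  # possible MITM (or a legitimate rotation)
```

The obvious weakness, as noted above, is the first visit itself: if that connection was already intercepted, you’ve pinned the attacker’s certificate.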

Some will inevitably argue that ubiquitous certificate pinning will break applications in a corporate environment, and yes, that’s true. If an organization feels they have the right to snoop on all their users’ TLS-secured traffic, then pinned certificates on mobile apps or browsers will be broken by those proxies. Oh, well. Either they’ll stop their snooping, or people will stop using those apps at work. (I’m hoping that the snooping goes away, but I’m probably being naïve).

When a bunch of CA-related hacks and breaches happened in 2011, we saw a flurry of work on “replacements,” or at least enhancements, of the current CA system. A good example is Convergence, a distributed notary system to endorse or disavow certificates. There’s also Certificate Transparency, which is more of an open audited log. I think I’ve even seen something akin to SPF proposed, where a specific pinned certificate fingerprint could be put into a site’s DNS record. (Of course, this re-opens the whole question of trusting DNS, but that’s yet another problem).

But as far as I know, none of these ideas have reached mainstream browsers yet. And they’re certainly not something that non-security-geeks are going to be able to set up and use.

So in the meantime, I thought back to my post from 2011, where I have a script that dumps out all the root CAs used by the TLS sites you’ve recently visited. Amazingly enough, the script still works for me, and also interestingly, the results were about the same. In 2011, I found that all the sites I’ve visited eventually traced back to 20 different root certificate authorities. Today, it’s 22. (and in both cases, some of those are internal CAs that don’t really “count”). (It’s also worth noting — in that blog post, I reported that I had 175 roots on my OS X Lion system. So nearly 40 new roots have been added to my certificate store in just 3 years).

So of the 214 roots on my system, I could “safely” remove 192. Or probably somewhat fewer, since the history file I pulled from probably isn’t that comprehensive (and my script didn’t pull from Safari too). But still, it helps to demonstrate that a significantly large percentage (like on the order of 90%) of the trust my computer has in the rest of the Internet is unnecessary in my usual daily use.

Now, if I remove those 190ish superfluous roots, what happens? I won’t be quite as vulnerable to malware or MITM attacks using certs signed by, say, an attacker using China’s CA. Or maybe the next time I visit Alibaba I’ll get a warning. But I’d bet that most of the time, I’ll be just fine. Of course, if I do hit a site that uses a CA I’ve removed, I’d like the option to put it back, which simply brings me back to the “Trust on First Use” certificate option I mentioned earlier. If we’re to go that route, might just as well set it up to allow for site-level cert pinning, rather than adding their cert provider’s CA, to “limit the damage” as it were. (Otherwise, over time, you’d just be back to trusting every CA on the planet again).

And of course, even if I wanted to do this, there’s no (easy) way to do this on my iOS devices. And the next time I got a system update, I’d bet the root store on my system would be restored to its original state anyway (well, original plus some annual delta of new root certs).

So nearly four years on since the Comodo and DigiNotar hacks (to say nothing of private companies selling wildcard signing certificates), and we still haven’t “reshaped browser security”.

What’s it going to take, already?

Bypassing the lockout delay on iOS devices

Apple released iOS 8.1.1 yesterday, and with it, a small flurry of bugs was patched (including, predictably, most (all?) of the bugs used in the Pangu jailbreak). One bug fix in particular caught my eye:

Lock Screen
Available for:  iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact:  An attacker in possession of a device may exceed the maximum number of failed passcode attempts
Description:  In some circumstances, the failed passcode attempt limit was not enforced. This issue was addressed through additional enforcement of this limit.
CVE-2014-4451 : Stuart Ryan of University of Technology, Sydney

We’ve seen lock screen “bypasses” before (that somehow kill part of the screen-locking application and allow access to some data, even while the phone is locked). But this is the first time I’ve seen anything that could claim to bypass the passcode entry timeout or avoid incrementing the failed attempt count. What exactly was this doing? I reached out to the bug reporter on Twitter (@StuartCRyan), and he assured me that a video would come out shortly.

Well, the video was just released on YouTube, and it’s pretty interesting. Briefly:

This doesn’t appear to reset the attempt count to zero, but it keeps you from waiting between attempts (which can be up to a 60 minute lockout). It doesn’t appear to increment the failure count, either, which means that if you’re currently at a 15 minute delay, the device will never go beyond that, and never trigger an automatic memory wipe.

Combining this with something like iSEC Partners’ R2B2 Button Basher could easily yield a rig that just carefully hammers away at PINs 24x7 until a hit is found (though it’d be SLOW, like 1-2 minutes per attempt…).

Why this even works, I’m not sure. I had presumed that a flag is set somewhere, indicating how long a timeout is required before the next unlock attempt is permitted, which even persists through reboots (under normal conditions). One would think that this flag would be set immediately after the last failed attempt, but apparently there’s enough of a delay that, working at human timescales, you can reboot the phone and prevent the timeout from being written.

Presumably, the timeout and incorrect attempt count is now being updated as close to the passcode rejection as possible, blocking this demonstrated bug.

I may try some other devices in the house later, to see how far back I can repeat the bug. So far, I’ve personally verified it on an iPhone 5S running 8.1.0, and an iPad 2 on 7.0.3. Update: I was not able to make this work on an iPod Touch 4th generation, with iOS 6.1.6, but it’s possible this was just an issue with hitting the buttons just right (many times it seemed to take a screenshot rather than starting up the reboot). On the other hand, the same iOS version (6.1.6) did work on an iPhone 3GS, though again, it took a few tries to make it work.