Trace Alignment!

Over the past week, I have made progress on my power analysis of smartcards. By replacing my original power shunt (1.5R) with a higher-resistance (4.7R) alternative, I’m now able to retrieve significantly higher resolution power traces, even at 1.8V. Through the lens of a software low-pass filter, we can clearly see the rounds of some cryptographic operation (suspected AES; rest of trace snipped for brevity):
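The post doesn’t say which low-pass filter was used, so as an illustrative sketch, here is about the simplest software low-pass you can apply to a captured trace: a moving-average (boxcar) filter via convolution. The function name and window size are my own choices, not anything from the actual tooling.

```python
import numpy as np

def low_pass(trace, window=50):
    """Smooth a power trace with a moving-average low-pass filter.

    A boxcar kernel attenuates high-frequency sampling noise so that
    slower features (e.g. cipher rounds) stand out. The window size
    trades off smoothing against temporal resolution.
    """
    kernel = np.ones(window) / window
    return np.convolve(trace, kernel, mode="same")
```

Any proper FIR/IIR low-pass (e.g. a Butterworth filter) would do the same job with a sharper cutoff; the boxcar just keeps the example dependency-free.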

Unfortunately, this isn’t quite ready for further analysis:

I spent some time investigating the possible strategies for realigning the traces time-wise. Two approaches were identified:

  • Firstly, the Sum of Absolute Differences (SAD) approach can be used. A “window” of samples is selected from each trace and shifted back and forward until the sum of absolute differences against a reference window is minimized. This is the approach taken by the ChipWhisperer control software.
  • Alternatively, the window can be shifted back and forth until the correlation coefficient between it and a reference window is maximised, similar to a CPA key-matching attack. This is probably the easier of the two approaches to independently implement, with numpy.corrcoef doing the heavy lifting for us.
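Both searches have the same shape: slide a window across a bounded range of offsets and keep the best-scoring shift. A minimal sketch of the two strategies (function names and parameters are mine, not the toolkit’s):

```python
import numpy as np

def align_offset_sad(trace, reference, start, length, max_shift):
    """Shift minimising the sum of absolute differences between a
    window of `trace` and the reference window at [start, start+length)."""
    ref = reference[start:start + length]
    best_shift, best_sad = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        window = trace[start + shift:start + shift + length]
        sad = np.abs(window - ref).sum()
        if sad < best_sad:
            best_shift, best_sad = shift, sad
    return best_shift

def align_offset_corr(trace, reference, start, length, max_shift):
    """Shift maximising the Pearson correlation coefficient between a
    window of `trace` and the reference window, via numpy.corrcoef."""
    ref = reference[start:start + length]
    best_shift, best_corr = 0, -1.0
    for shift in range(-max_shift, max_shift + 1):
        window = trace[start + shift:start + shift + length]
        corr = np.corrcoef(window, ref)[0, 1]
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return best_shift
```

Once the best shift is known, the trace can be rolled or re-sliced by that offset to line it up with the reference. Note the caller must pick `start` and `max_shift` so the shifted window stays inside the trace.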

Both strategies should produce roughly the same results. I suspect a sum-of-absolute-differences match gives more fine-grained control over exactly how much a trace may differ from the reference, but that it may also produce false-negative matches when low-frequency fluctuations in the power supply present themselves (aka why, USB, why), pushing the sum of absolute differences past a chosen maximum threshold.

Both are relatively simple to implement, and the tooling is now available as part of the fuckshitfuck toolkit, as preprocessor.py. Note that this tool only implements “soft matching”: that is, it will use the selected matching strategy only up to a point. If it can’t find a match within a certain number of samples, it will discard the offending trace.
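The soft-matching behaviour can be sketched as a filter over the trace set: each trace gets the bounded shift search, and any trace that never reaches an acceptable match within the shift budget is dropped. This is illustrative only — the function and parameter names do not mirror preprocessor.py’s actual interface.

```python
import numpy as np

def soft_match(traces, reference, start, length, max_shift, min_corr=0.9):
    """Align each trace to a reference window; discard any trace that
    cannot reach `min_corr` correlation within +/- `max_shift` samples."""
    ref = reference[start:start + length]
    kept = []
    for trace in traces:
        best_shift, best_corr = None, min_corr
        for shift in range(-max_shift, max_shift + 1):
            window = trace[start + shift:start + shift + length]
            corr = np.corrcoef(window, ref)[0, 1]
            if corr >= best_corr:
                best_shift, best_corr = shift, corr
        if best_shift is not None:
            # Roll the whole trace so its window lines up with the reference.
            kept.append(np.roll(trace, -best_shift))
    return kept
```

Discarding rather than force-fitting keeps badly desynchronised captures from polluting the later statistical attack, at the cost of a smaller trace set.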

Window selection is also crucial, particularly when looking at cryptographic operations with repeating rounds that are similar in execution. For the trace below, I took a two-pass approach: firstly, I matched the “prefix” (second “column”) loosely, to a minimum correlation coefficient of 0.7. I then matched the third/fourth columns more tightly, with 0.95 minimum correlation – this mostly (by visual inspection) eliminated the temporal misalignment of the remaining traces.
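The two-pass idea can be sketched as follows: a coarse pass on an early window with a loose threshold to get roughly into position, then a fine pass on a later window with a tight threshold. Names, window tuples, and thresholds here are illustrative assumptions, not the toolkit’s API.

```python
import numpy as np

def _best_shift(trace, ref_win, start, max_shift, min_corr):
    """Best correlation shift for one window, or None below min_corr."""
    length = len(ref_win)
    best_shift, best_corr = None, min_corr
    for s in range(-max_shift, max_shift + 1):
        c = np.corrcoef(trace[start + s:start + s + length], ref_win)[0, 1]
        if c >= best_corr:
            best_shift, best_corr = s, c
    return best_shift

def two_pass_align(trace, reference, coarse, fine, max_shift=50):
    """Align in two passes: `coarse` and `fine` are (start, length,
    min_corr) tuples, e.g. loose 0.7 first, then tight 0.95."""
    for start, length, min_corr in (coarse, fine):
        ref_win = reference[start:start + length]
        shift = _best_shift(trace, ref_win, start, max_shift, min_corr)
        if shift is None:
            return None  # soft matching: discard this trace
        trace = np.roll(trace, -shift)
    return trace
```

The loose first pass tolerates the larger initial misalignment; the tight second pass then rejects traces that only ever matched a similar-looking neighbouring round.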

Also, the tooling’s a bit slow (though it’s questionable whether this is a problem for a one-off preprocessing activity). Initial attempts at multi-threading the code failed, due to what I suspect is an I/O bottleneck (in other words, a single huge memory-mapped array of traces).

My attempts at massaging a working Ki+OPc value out of the smartcard’s MILENAGE (I suspect? I don’t think I can really “check” unless I go work for Gemalto or GCHQ) have so far been unsuccessful, but thin rays of hope shine through the signal-to-noise ratio. Observe:

What exciting times we live in!

About Norman

Sometimes, I write code. Occasionally, it even works.
This entry was posted in Bards, Computers, Jesting.
