On AppSec Day…

Last week, I had the pleasure of attending OWASP AppSec Day in Melbourne, and it was a productive day. On the flight over, I stumbled upon a hypothesis as to why my initial attempt at differential electromagnetic analysis of some AES traces didn’t work (I figured it was due to the non-instantaneous nature of magnetic fields), which I later contradicted through further experimentation and analysis of existing capture data – all hail the advent of cheap, portable storage.

Furthermore, I was able to refine a script for acquiring traces from Rigol scopes via Ethernet, using a modified github/pklaus/ds1054z module as the communications back-end – this is now mostly stable, and semi-consistent with other parts of the fuckshitfuck toolkit. Here it is in action against 100(!!) power traces of AES on an ATMega328p (I think we ended up missing one byte!):

There’s also some progress towards a template-based attack, though I’m not sure how realistically applicable this is, given variances such as horizontal jitter (still, based on the below, cutting 1,000 of 8,000 data points is a solid improvement):

I must admit, there is a certain joie de vivre in this work that is, broadly speaking, absent elsewhere – compared to this, I really don’t care whose XSS is where and which crayon we need for the risk matrix.

On a somewhat more serious note, a few things were telling from this day, which are worth noting down:

The Top Ten are Dead, Long Live the Top Ten

Every single talk spoke about shifting security to the left, but most of the industry is still attempting to fix the OWASP Top Ten. Realistically, the OWASP Top Ten has become the OWASP Top One or Two: modern frameworks and sensible hosting void entire categories of vulnerabilities. When’s the last time you saw someone (whose day job is developer – our code is duct tape and we know it) build a SQL query via string concatenation, as opposed to fetching data through a pre-built framework?
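For anyone who hasn’t seen it in a while, the contrast is easy to demonstrate in a few lines (sqlite3 and the table are purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# injectable: input concatenated straight into the query text
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + name + "'").fetchall()

# parameterised: the driver binds the value, so the input is just data
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

With a framework or ORM, the parameterised form is what you get by default – which is largely why this category of bug has evaporated from greenfield code.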

Yet let’s take a look at the OWASP wiki for how to protect against SQL Injection vulnerabilities:

Who realistically uses this stuff in 2018? No really, please do leave a comment if these principles are still applicable to production code where you are: I need to know.

On the other hand, where is the advice on how to prevent the logic-based bugs and debug content which are generally the easiest way into a web application? More importantly, is it even feasible to package the concept of “don’t fuck up logic” into a set of codifiable rules, which people will inevitably ask for? (My initial inclination on this is “no”).

What use is all this when people still sticky tape their passwords on their POS systems to the top of their monitors, for anyone walking past to read?

Most Dangerous Adversary 2018: Vendors

One of the speakers mentioned that the most dangerous adversary of the year was the vendor. This holds particularly true in application security, where the space is poorly defined enough that it is simple to deceive unsuspecting marks – failed comedy shows like “Next-Generation WAF” and “Self-Protecting Software” have taken root, like some kind of malignant tumor. Thinking of this contemptible filth, my mind cannot help but drift to the image of a serpent in the afternoon sun, poised to strike.

If these salespeople played EVE Online, surely they would be highsec miners.

What happened to genuinely building a good product, and simply letting the product speak for itself (and making datasheets publicly available as technical reference material)? Imagine how much faster the market would mature if we all followed the Atlassian model!

Integration and You: The Untold Story

I was also able to spend some time speaking with fellow… “appsec practitioners”. I got the sense that every product on the market has a minuscule signal-to-noise ratio, and that the best way to use a “code security” product is simply to drop all non-critical findings (i.e. switch the product on, and only require human interaction on critical issues). This matches my experience speaking with developers, who overwhelmingly reject security review tools on the basis of false positives.

This is a tricky one: as security folks, we generally don’t “eat our own dog food” – without running large enterprise codebases (which we don’t write) through enterprise appsec tools, we don’t know where the realistic pain points are. Getting security people to write copious amounts of enterprise code isn’t necessarily the solution either – broadly speaking, it’s not necessary.

In Conclusion

While there are no clear fixes for a lot of this, I think much of the above is reasonably applicable to any enterprise application security programme, particularly in the form of industry-accepted best practice as opposed to ISC^2-style standards.

See you all at the BSides Delhi CTF.

Posted in Bards, Jesting | Leave a comment

Power Leakage Modelling of DES

This weekend, as part of my efforts to advance my learning of power analysis, I attempted to create my own power leakage model of DES and recover a key without referring to previous work. My targets were the ATMega328p board used for previous experiments, as well as a PIC24F target board. Both ran DES code from avr-crypto-lib.

I must admit that I cheated somewhat by using triggers at the start of DES round 1. Unfortunately, I haven’t yet been able to compensate for some high-amplitude, low-frequency noise I’m seeing in one target (I’ve eliminated the power supply as a cause; I’ve yet to try with a magnetic probe) – without this, I am almost certain trace alignment would be meaningless.

While conceptually trivial, implementing this was a good exercise in selecting and writing my own leakage model.

Recovering Key Fragments

We begin our analysis by looking at avr-crypto-lib’s implementation of the DES algorithm. In diagram form, it looks like this (only the bits we need, half the round function is omitted):

The key is split into eight 6-bit key fragments for each round, and there are 16 xor-sbox-mix operations per round of DES (8 against each half of the plaintext, i.e. 16 sub-rounds per round). This shows up clearly in a power trace (from memory: 64MS/s, 16,100 samples), which captures the first 8 rounds – more than enough for this attack:

We perform a correlation attack by guessing the 6 bits of key material XOR’ed with the permuted plaintext, then correlating against the expected Hamming weight of sbox[out] & 0x0F or sbox[out] >> 4. If our key guess is correct, we should see an extremely strong correlation.

In order to do this, we need to implement:

  • the first part of the DES algorithm up to the first round, allowing us to permute the plaintext via the initial permute (ip_permtab) and expand permute (e_permtab)
  • the “guess” part of the DES algorithm: for each guessed key fragment, we must be able to compute the SBOX output (keeping in mind the 8 separate SBoxes, depending on the key fragment used).
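In sketch form, those two pieces look something like the following – note that the 1-based permutation-table entries and the flat 8×64 S-box layout are my assumptions about the dessupport.py data format, not gospel:

```python
def permute(table, bits):
    # apply a DES-style permutation table: each table entry is the
    # (1-based) index of the source bit to copy into the output
    return [bits[t - 1] for t in table]

def hw(x):
    # Hamming weight of an integer
    return bin(x).count("1")

def hypothesis(expanded_pt, fragment_index, key_guess, sbox):
    # take the 6 expanded-plaintext bits this S-box sees, xor with the
    # 6-bit key guess, and return the Hamming weight of the 4-bit output
    bits = expanded_pt[fragment_index * 6:(fragment_index + 1) * 6]
    val = int("".join(map(str, bits)), 2) ^ key_guess
    return hw(sbox[fragment_index * 64 + val])
```

The expanded plaintext here is `permute(e_permtab, permute(ip_permtab, pt_bits))` in avr-crypto-lib terms; `hypothesis` is then evaluated for all 64 guesses per fragment.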

A bit of quick testing shows us the leak actually works, showing a strong correlation against a single key fragment:

We then complete the rest of the attack, including the multiple SBox mechanic, leaving us with the following overview:

Strangely, only 4 of the 8 key fragments were recovered. I slowly walked through my code until I realized that I had only guessed key values up to 48, instead of up to 64 (2^6) as it should have been. One quick fix later:

Key Recombination and Recovery

To recover the key fragments into a useful key, we need to perform the following steps, in order:

  • Invert the PC2 permutation (which takes 48 bits of input, and generates 56 bits of output, including 8 bits which we know are useful, but don’t know the value of)
  • Invert the shiftkeys operation (which takes 56 bits of input and gives 56 bits of output)
  • Invert the PC1 permutation (which takes 56 bits of input, and generates 64 bits of output, including 8 bits which we know are ignored)

Unfortunately, the PC2 permutation keeps only 48 of the 56 bits of usable key material – we can use a brute-force attack to recover the final 8 bits.

To allow this, I used a flexible inverse permutation, which allowed marking bits as “don’t know but care (2)” and “don’t know, don’t care (3)”:

import sys

def inv_permute(table, blk, default_char=2):
  # 2 = "don't know but care", 3 = "don't know, don't care"
  pt = [default_char] * (max(table) + 1)
  for index in range(len(blk)):
    if pt[table[index]] in (2, 3):
      pt[table[index]] = int(blk[index])
    elif pt[table[index]] != int(blk[index]):
      print("fail - mismatch in inv_permute")
      sys.exit(0)
  return pt

This ended up providing a template like the following (based off the recovered fragments above):

[0, 0, 1, 0, 1, 2, 2, 3, 0, 1, 2, 2, 1, 1, 1, 3, 0, 0, 0, 1, 0, 1,
 0, 3, 0, 0, 0, 1, 0, 1, 1, 3, 0, 0, 1, 0, 1, 0, 0, 3, 1, 0, 2, 0,
 1, 2, 1, 3, 1, 2, 0, 2, 0, 0, 1, 3, 1, 0, 1, 0, 0, 1, 1, 3]

We can then brute force a single byte of key material, substituting it into the bits marked ‘2’ and replacing the ‘3’s with 0s. Testing against a single known plaintext/ciphertext pair is enough to recover an equivalent key, which works just as well as the original:

Original:   2b7e151628aed2a6
Equivalent: 2a7e141628aed2a6
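A minimal sketch of the template-filling and brute-force step might look like this – `fill_template` consumes the guessed byte MSB-first into the bits marked 2, and `des_encrypt` is purely a placeholder for whichever DES implementation you have handy:

```python
def fill_template(template, guess):
    # bits marked 2 ("don't know but care") receive the guess, MSB first;
    # bits marked 3 ("don't know, don't care") are simply zeroed
    n = template.count(2)
    bits = [(guess >> i) & 1 for i in range(n - 1, -1, -1)]
    out = []
    for b in template:
        if b == 2:
            out.append(bits.pop(0))
        elif b == 3:
            out.append(0)
        else:
            out.append(b)
    return out

def brute_force_byte(template, pt, ct, des_encrypt):
    # des_encrypt: any (8-byte key, block) -> block DES implementation
    for guess in range(256):
        key_bits = fill_template(template, guess)
        key = bytes(int("".join(map(str, key_bits[i:i + 8])), 2)
                    for i in range(0, 64, 8))
        if des_encrypt(key, pt) == ct:
            return key
    return None
```

At worst this is 256 trial encryptions, which is why a single known plaintext/ciphertext pair suffices.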

Alternatively, it is also feasible to recover this material using the second and third rounds of DES, but that is tremendously time-consuming in terms of code.

All the code is in github.com/CreateRemoteThread/fuckshitfuck – most of the DES-specific code is in dessupport.py.


Magnetic Correlation Analysis of AES

Over the past week, I have been attempting to replicate my power analysis work on AES using a magnetic field (H-field) probe. Unfortunately, there is little literature on the specifics of how to do this, but I was able to reproduce the attack against the ATmega target used for the original correlation analysis, achieving an extremely strong result with only 1,000 captures:

The key to this was maximizing the signal-to-noise ratio of the incoming capture via probe positioning. To do this, I manually sampled the magnetic field emission while the device was off and while it was on (not doing anything active – just powered). Here is a sample of the magnetic field measured through an H-field probe while the device is powered off (the time scale is ms, I think, but it doesn’t matter – we only look at the average):

Contrast this to a sample of the magnetic field, while the device is on and awaiting input:

In practical terms, this was possible with the H-field probe’s “tip” approximately 25% from the top of the ATmega328p target, as follows:

I also massively oversampled, based on commentary from the NewAE forum. In this thread, there is mention of needing to phase-shift the ChipWhisperer’s ADC when doing this attack. Given that I wasn’t providing the clock signal directly, I concluded that phase shifting was not applicable, and instead set the sample rate to 128MS/s for a 16MHz target, hoping to gather enough samples that it wouldn’t matter.

Using this setup, I was able to clearly capture the rounds of AES such that they were visually distinguishable:

(This was a pleasant surprise: given the comment here about the waveform being less nice than with a shunt resistor, I was emotionally prepared to go the distance with maths alone.)

Correlating these against the Hamming weight of the first-round post-sbox value, I was able to recover some bytes of the key, corresponding to the peaks demonstrated above. The entire key is not recovered, but the success is clear: the rest is just better selection of the first round of AES… and maybe a nicer plotting tool.
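For reference, the correlation step itself is just a column-wise Pearson correlation between the Hamming-weight hypotheses and the trace samples – a pure-Python sketch (any real toolkit would vectorize this):

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5)

def cpa_trace(traces, hyps):
    # one correlation value per time-point: correlate the per-trace
    # Hamming-weight hypotheses against each sample column in turn;
    # the correct key guess should produce a visible spike somewhere
    return [pearson([t[i] for t in traces], hyps)
            for i in range(len(traces[0]))]
```

Running this once per key-byte guess and taking the guess with the tallest spike is the whole attack.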

Perhaps the greatest success is that no new code needed to be written for this attack – everything is still at github/CreateRemoteThread/fuckshitfuck. Hooray!

May your weekend be ruthlessly productive.


Some thoughts on motivation…

It’s been a while since my last quality shitpost, and it is time I shitposted again. The past year has been a time of growth for me, both in terms of pushing my technical boundaries and in terms of professional resilience. I’d like to share my thoughts, both to help me gather them and just in case someone might find them useful.

Without further ado…

Altruism Is Terrible

Over the past year, the Platypus Initiative has been in a state of steady decline. Activity is near-zero, and while there are countless reasons for this, I think the biggest is that we began to switch from wanting to do things for ourselves, to wanting to do things for what we thought was “altruism”.

The old adage, “you can lead a horse to water but you can’t make it drink”, holds true: some people are content-creators, but the overwhelming majority are not – and that’s fine. I recall with a certain fondness the times when a few of us would get together and CTF in the dark – when I thought we communally reached for the metaphorical stars and pushed our own technical limits.

While this was fantastic at the time, this is over – this isn’t remotely sustainable, and our time is better spent doing things we’re genuinely interested in.

That’s not to say there’s no room for collaboration – I spent last Saturday sitting with someone working through some power analysis, and there’s plenty – but it must be organic to be truly meaningful, and, in terms of holding events, something more than an excuse to get together and drink.

Row, Row, Row Your Boat

In a corporate environment, it is easy enough to “give up”, rest on your laurels and pretend to know about security, lording over the rest of the organisation with arcane proclamations no-one dares challenge. Whether it is in my day-to-day work or at general infosec social get-togethers, I see this everywhere.

This is base betrayal, of both professional ethics and general sensibilities.

Still, entire industries have sprouted up around this concept, and it is not my place to be offended at their chosen livelihood (of fraudster), any more than I can be offended by the respected fisherman or master blacksmith.

Instead, our time is better spent focusing inwards, and taking the opportunity to make the best of our own situation, because if we don’t, no-one else will. In line with the “Altruism Is Terrible” concept, it’s great if we can make our own lives better and do the same for others, but charity starts at home and we should look after ourselves first – you can’t help anyone if you’re not motivated yourself.

To this point, here is (part of – I’ve now moved the oscilloscope to the left side of my laptop to improve my feng shui) my desk at work now, a fortress of quality posting among a tide of business excellence.

Great success.

For anyone thinking of working with my employer, know that I am the exception to the rule, though I hope that one blessed day, more people will be able to pursue interesting technical content as part of their day job, and I will work towards this.

Nothing Is Beyond Our Reach

The most difficult part of the last year has been to remain consistently motivated, and in this, I haven’t been 100% successful. There’s been days where I’ve just given up and played videogames – and this is ultimately fine. LiveOverflow deals with this quite well:

Still, I am not LiveOverflow.

I’ve found the trickiest part is getting started – once I open IDA and load that massive memory blob, or heat my soldering iron, things are easy from there – but some days, it is particularly difficult to begin. I’ve found the trick is twofold:

  • Surround yourself with people that value actions over words. If you don’t know any, then email the authors of good papers and ask them questions, and get that conversation started.
  • Keep a diary with a to-do list, and don’t go to sleep until you’ve updated it. Even if all you do is shift everything to tomorrow’s to-do list, at least make the effort to recognize you’ve done nothing, and tomorrow, seek improvement.

If you can combine the above two somehow – perhaps by finding a group of like-minded people – all the better.

Remember Your Roots

A year ago, I could barely hardware – but now, I have some experience tampering with embedded systems, and know enough to Google what I don’t know. Two years ago, I was stumbling my way through CTF challenges – now, after deliberate practice, I know enough to meaningfully contribute and can comfortably solve things on my own.

Still, it is difficult to look at the achievements of others and not feel a sense of despair that we’re not there. To this, I refer to Ange Albertini’s talk at hack.lu:

It is important to keep in mind that at the end of the day, there’s always someone smarter out there, and someone who knows things we will never practically come across. It’s almost a matter of prioritization – of picking the things you want to be good at, and throwing your heart and soul into it, and doing the bare minimum for everything else.

Surprisingly, the most difficult part of this is to let go of things (and people) that don’t positively contribute to where you want to be: it is far better to strive for excellence alone, than to accept mediocrity with company.

See you in TJCTF (again).


Differential Power Analysis vs AES

This post is a follow-up to my last post regarding correlation power analysis (CPA). A second technique, differential power analysis (DPA), can also be used to analyze power traces to extract information.

The specific attack I will illustrate below is the “difference of means” attack. The code is again available at https://github.com/CreateRemoteThread/fuckshitfuck.

Distinguishing Function

Unlike CPA, DPA relies on a mechanism called the distinguishing function. Put simply, this is a true-false hypothesis, used to separate power traces into two groups (i.e. the two groups are somehow distinguishable by power). For example, a distinguishing function might be

“during the first round of AES, the last bit of the S-box output (the intermediate value) is 1 at some time-point; this causes a different amount of power to be consumed than if the value is 0.”

Now, using the power traces we have gathered, and knowing the plaintexts, we run the distinguishing function to separate the power traces into two “buckets” for each possible value of key[0].

Difference of Means

We then take the mean trace of each bucket and subtract one from the other (taking the absolute value at each point of this “difference of means” trace). Let’s pretend we’ve done this for all possible key[0] values across 10,000 traces. At this point, there are two outcomes:

  • If our hypothesis for key[0] is correct, there will be X traces in Group 1 where the distinguishing function is true, and Y traces in Group 2 where the distinguishing function is false.
    • The mean of Group 1 will, at a time-point, consume more or less power than the mean of Group 2, representing the power used to move the final bit of the intermediate value into memory / register / etc.
    • The “difference of means” will have a spike, representing the above time point.
  • If our hypothesis for key[0] is incorrect, the distinguishing function is effectively a random sorting function.
    • The mean of Group 1 will be similar to the mean of Group 2
    • The “difference of means” will not have a distinguishable spike.
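The whole procedure fits in a few lines – a pure-Python sketch using the single-bit distinguishing function described above, where the `sbox` argument stands in for whatever intermediate lookup is being targeted:

```python
def difference_of_means(traces, plaintexts, key_guess, sbox):
    # distinguishing function: last bit of the S-box output for the
    # first plaintext byte, under this key-byte hypothesis
    g0, g1 = [], []
    for trace, pt in zip(traces, plaintexts):
        bit = sbox[pt[0] ^ key_guess] & 1
        (g1 if bit else g0).append(trace)
    # mean trace of each bucket, then the peak absolute difference
    mean = lambda group: [sum(col) / len(col) for col in zip(*group)]
    m0, m1 = mean(g0), mean(g1)
    return max(abs(a - b) for a, b in zip(m0, m1))
```

Evaluating this for all 256 values of `key_guess` and picking the tallest peak gives exactly the plot described below.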

We can then easily plot the greatest difference of means for a given distinguishing function, for each key hypothesis:

(The label on the Y axis is incorrect – it should be “Maximum Difference of Means”.) Each line in the plot represents a given byte position in the key; the x-axis represents the key guess from 0 to 255, and the y-axis represents the maximum point in the difference of means when the distinguishing function is run for that key hypothesis.

It is clear that both of these attacks are incredibly powerful, and can extract information from minute differences in power consumption through statistical trickery. I look forward to exploring more of these attacks and related attack scenarios.
