Testing The VM

March 22nd, 2008

Since Imri and I started working on this huge project, we have finally reached the point where a few layers are ready: disassembler to structured output –> structure to expression trees –> VM that runs those expressions. This is all for x86 so far, but the code is supposed to be generic; that's why the expression-trees infrastructure is there from the beginning, so you can translate any machine code to this language and the same VM will be able to run it. Sounds familiar in a way? Java, Ahm Ahm…

Imri has described some of our work in his last blog post, where you can find more information on what's going on. But now that we got the VM somewhat working, my next step is to check all the layers, that is, to run a piece of code (currently assembly based) and see that I get the same results as running it for real. So I started from the idea of inline assembly in Python (yes yes :) you heard me right), thanks to a friend who hasn't published it yet. Then I changed the idea so that instead of inline assembly I have a function that takes a chunk of assembly text and runs it both on the VM and natively under Python. When both are done executing, I check the result in EAX and see if they match… EAX can hold anything, for instance the EFLAGS, and then I can even see whether the flags of an operation like INC were calculated correctly in my VM…

The way I run the code natively in Python is by using ctypes. I wrote a wrapper for YASM, so now I can compile a few assembly lines, and then the decomposer (the disassembler-to-expressions layer) is fed with the machine-code output, which is then run in the VM. Then I use ctypes to run the raw buffer of code, using WINFUNCTYPE and the address of the buffer of binary code, which I can execute as simply as calling that instance. And it runs natively inside Python's own thread's context. The downsides are that I'm capped to 32 bits (since I'm on a 32-bit environment), where x86 can be 16, 32 or 64 bits; thus I can't really check 16 and 64 bits at the moment. Another disadvantage is that I can't write anywhere in memory just for the sake of it: I need to allocate a buffer and feed my code with the pointer to it, unlike in the VM where I can read and write wherever I wish (it doesn't imitate a PC). And I had better not raise any exceptions, whether intentionally or not, because I will blow up my own Python process. ;) So I have to be a good boy and do simple tests like: mov al, 0x7f; inc al, and then see if the result is 0x80 and whether OF is set, for example. It is really amazing to see the unit-test function return true on such a function :) Along the way we find bugs in the upper layers that sit above diStorm (which is quite stable on its own), fix them immediately and rerun the unit tests, while I keep adding more unit tests and fixing more things. Eventually the goal is to take an inline C compiler, run the code and check the result. At this phase of the project I'm interested in checking specific instructions, to know that everything we are based on works great, and then we can move to bigger blocks of code, doing Fibonacci and other stuff in C…
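
By the way, the native side boils down to something like this minimal sketch. It's not the project's actual harness, just the same idea in a few lines: it assumes a 32-bit Python on Windows, and the VirtualAlloc step is my own addition so the buffer is executable on DEP-enabled machines; the real harness also feeds the same bytes to the VM and compares EAX afterwards.

import ctypes

MEM_COMMIT_RESERVE = 0x3000        # MEM_COMMIT | MEM_RESERVE
PAGE_EXECUTE_READWRITE = 0x40

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype = ctypes.c_void_p   # don't truncate the pointer

def run_native(code):
    """Copy raw machine code into an executable buffer and simply call it.
    Whatever the code leaves in EAX is the return value."""
    buf = kernel32.VirtualAlloc(None, len(code),
                                MEM_COMMIT_RESERVE, PAGE_EXECUTE_READWRITE)
    ctypes.memmove(buf, code, len(code))
    func = ctypes.WINFUNCTYPE(ctypes.c_uint32)(buf)   # stdcall, no arguments
    return func()

# mov al, 0x7f / inc al / movzx eax, al / ret
print hex(run_native("\xb0\x7f\xfe\xc0\x0f\xb6\xc0\xc3"))   # expect 0x80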

I am a bit annoyed about the way I run the code natively. I mean, I trust the code since I write it myself and these are only simple tests, but I can still screw up the whole process by dividing by zero, for example. I also have to run the tests on an x86 machine in a 32-bit environment, so I can't really check different modes. The good thing is that I can really use the stack, as long as I leave it balanced when the test ends (restoring all regs, etc.)… I wish I could have run the tests in a real v86 processor or something like that, but it wouldn't be portable to Linux, etc. Even spawning another process and injecting the code into it would require separate code for Windows and for Linux, and even then, if the code is bogus I can't really control everything; it would require more work per OS as well. So for now the infrastructure is pretty cool and it gives me what I need. But if you have any idea of how to do it better, let me know.

Converting A Floating Point Number To A String – Explained

March 16th, 2008

I was always curious about how to convert a floating point number into a decimal ASCII string. The equation is trivial once you get to implement it, but the idea behind it took me a long while to come up with. For the sake of example I will deal with a single precision floating point of IEEE 754. However, it doesn't really matter how many bits I have to convert into a string; the algorithm stays the same. Floating point numbers have all this crap about NaNs and QNaNs and other weird stuff that I didn't care about. Those are just technical things you have to implement, nothing really hard about them. I am just gonna focus on the way to convert the mantissa (that's the fraction part of the real number) into decimal. It would be rather easy to just take a look at the implementation of printf in Linux's libc, but then you lose the challenge of coming up with this, eventually simple, algo on your own.

To simplify matters, I don't handle big exponents; again, that was not my focus. If you wish to change this code to something really usable, it's possible with some work. I am not going to cover the format of the IEEE 754 floating point here. I presume you know it, and if not, give it a look here.

void pf(unsigned long x)
{
 unsigned int sign = x >> 31;
 unsigned int exp = ((x >> 23) & 0xff) - 127;
 unsigned int man = x & ((1 << 23) - 1);
 man |= 1 << 23;
 if (sign) printf("-");
 printf("%d", man >> (23 - exp));
 printf(".");
 unsigned int frac = man & ((1 << (23-exp)) - 1);
 unsigned int base = 1 << (23 - exp);
 int c = 0;
 while (frac != 0 && c++ < 6)
 {
  frac *= 10;
  printf("%d", (frac / base));
  frac %= base;
 }
 printf("\n");}
}

You can see that in the beginning we extract the sign bit, exponent and mantissa from the raw number we got as a parameter. Then we fix the exponent, since it is biased; it's a really nice trick, but I won't cover it today. And we set bit 24 in the mantissa, because we assume we got a normalized floating point number, where the 24th bit is implied and not stored, to spare that extra bit… Printing the sign is self-explanatory. And then we print the integer part of the mantissa, while taking care of the exponent. This is where a too big or too small exponent will screw things up. But for small numbers everything is fine for now.

Now the fun begins: converting the mantissa from base 2 to base 10. Yipi kay hey. Let's see. We store the fraction part of the floating point number and the base. For now let's ignore this base variable; we will get back to it later. We limit the loop to print at most 6 digits, stopping earlier when there is nothing more to print. The algorithm here is to 'pull' digits over into the integer part of the number and print that part, which is straightforward (though it's only a digit and not a whole integer). Then we see how many digits we have left to print and repeat the body of the loop until we're done or have printed enough digits.

I will try to explain this loop again this way: let's say you want to print the number 5/6 in decimal. How will you do that? That's the key algorithm for converting the number to decimal in our case as well. I was told by a friend that children are taught this method in elementary school; maybe I missed that class or we didn't have it. :) What we do is: multiply the 5 by 10, then see how many times the 6 goes into the result: 50/6 = 8; then we take the modulo, 50%6 = 2, and start again: 2*10=20; 20/6=3; 20%6=2; 2*10=20; 20/6=3, and over and over again. The final result is 0.8333… This is similar to 1/3. Try this algo on 1/4 and you will see that you get 2 and then 5 and then you reach 0 and stop, resulting in 0.25.
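
If it helps, here is that schoolbook method as a few lines of Python, just to make the loop concrete (this is not part of the C code above, only the same idea on its own):

def print_fraction(num, den, max_digits=6):
    # Print num/den (with num < den) in decimal, digit by digit.
    out = "."
    while num != 0 and max_digits > 0:
        num *= 10
        out += str(num // den)   # the next decimal digit
        num %= den               # what is left to convert
        max_digits -= 1
    return out

print print_fraction(5, 6)   # .833333
print print_fraction(1, 4)   # .25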

Back to the C code of pf above: we take the fraction part, and we have a base, take a long breath, of 2 to the power of the number of bits used to store the fraction. Why is that the base?

Just like when you convert an integer to decimal, you have to multiply every set bit by 2**index, where the index is the position of the set bit in the integer stream. For example: 111b is 1*2**0 + 1*2**1 + 1*2**2 = 1 + 2 + 4 = 7. The same goes for bits that are on the right side of the point (thus fractions): .11b is 1*2**(-1) + 1*2**(-2) = 1/2 + 1/4 = 0.75. Notice that the indices this time are negative, and a negative power means division, therefore: 1/(2**(-index)). Now instead of doing this conversion per bit, we can do it by dividing the whole fraction once; in our example: 11b by 4 (the base raised to the number of bits, 2**2 = 4). Now we treat the fraction as an integer and divide it by the newly calculated base; we get 3/4 = 0.75. We saw how the conversion can be done easily, and we need a way to print the result of such a division… And now we're back to the description above of how to print a simple fraction, only this time we know the reasons behind the fraction and the base. Note that the base is bigger than the fraction by nature; otherwise we wouldn't have a correct input for the algo and it wouldn't work well.

How to use the above code:
float x = 1337.99;
pf(*(unsigned long*)&x);
printf("%f", x);

Running the code with a different number every time, you will see that printf's %f actually rounds the number, whereas we print it as a raw floating point number without touching it. Therefore, next time I will talk about rounding the number.

A Common Bug With LEA

March 7th, 2008

If we examine the LEA instruction from a text-output point of view, we can say that it is similar to the MOV instruction, but one that uses memory indirection; hence you get the square brackets to denote a memory address… Only when you interpret LEA you should know to remove the fake memory indirection and treat the 'address' as an immediate value, or as an expression. The expressions can get as complex as eax*8+ebx+1234. In reality, in order to simplify matters, the second (source) operand of LEA will probably be of type OP_TYPE_MEM, just like any other instruction that might have a memory indirection, for instance, cmpxchg… So why shouldn't we (disassemblers) have a special operand type for LEA? Well, mainly because it's a headache to maintain two types that parse the memory-indirection bytes, and eventually they output the same stuff, so why not have only one type which LEA can piggyback on?

In diStorm, I used OP_TYPE_MEM for LEA's operand. I didn't want to have another special OP_TYPE_LEA, and I didn't have any problem with it. Until today, when I thought of a 'bug'. You can use a segment override to change the default segment an address will be read from or written to, for example: MOV EAX, [FS:EAX]. Normally the only segment overrides you will see are GS and FS, since DS and ES are usually set to the same value, so you can copy data from a source to a destination buffer using the MOVS instruction, or compare two buffers using CMPS… And of course you cannot run without CS and SS… For that reason, in 64 bits they got rid of the CS, DS, ES and SS segment overrides.

So what's the bug, you're asking? LEA EAX, [FS:EAX]. Doesn't really make any sense, eh? And why does it happen? Because I use the MEM type… So I had two options: either add a special type so the segment overrides get filtered out for LEA, or just filter the segment overrides when the instruction being decoded turns out to be LEA. For simplicity I implemented the latter.
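
To illustrate the latter option, here's a toy sketch (Python, and definitely not diStorm's real code; the field names are made up) of filtering the overrides once the decoded instruction turns out to be LEA:

SEGMENT_PREFIXES = set([0x26, 0x2E, 0x36, 0x3E, 0x64, 0x65])   # ES CS SS DS FS GS

def filter_prefixes(mnemonic, prefixes):
    # Once we know it's LEA, segment overrides are meaningless, so drop them.
    if mnemonic == "LEA":
        return [p for p in prefixes if p not in SEGMENT_PREFIXES]
    return prefixes

# 64 8D 00 would otherwise show up as LEA EAX, [FS:EAX]
print filter_prefixes("LEA", [0x64])   # [] - the FS override is ignored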

Anyway, diStorm is already updated by now. The bug was found in other disassemblers as well, namely Olly and VS's debugger. I didn't try other disassemblers; I guess more have this issue. Not to my surprise, IDA is immune.

The moral of the story? hmmm, don’t be a lazy jackass? No. Just don’t assume too much and try to think of the small details. :)

UPDATE:

Try to feed the assemblers with that buggy instruction and you will see that they generate it erroneously, with the segment override prefix. :)

Fooling Around With LEA

March 5th, 2008

Yesterday it hit me. I just realized something so funny that I had to post it right here. I have been using LEA for years now, and so have you, I guess. Most of the time LEA is used to load the offset of a local variable in a function, for example:

void g(int *p);

void f()
{
 int x;
 g(&x);
}

The parameter &x for calling g will use LEA to load the address of x and pass it to g so g can change it inside. But this is nothing new.

You can write something like this:
LEA EAX, [0x12345678]
And you know what?
EAX will now be 0x12345678.
This is somewhat trivial when you get to think about it, but when do you??
I wonder how good it is as an anti-disassembler trick; I think it will drive the disassembler a bit crazy. It's worth a test… because now, instead of loading immediates with MOV, you can use LEA…

Basic Ops #4 – Division

March 1st, 2008

I wasn't sure whether to post this one or not. Moreover, my dilemma was whether to document it or not. I decided that it's so hard to explain what's going on that I will only dump my code here. The interface is ugly because you cannot divide by zero…

// return success, *ret = a / b
int div(unsigned short a, unsigned short b, unsigned short *ret)
{
 if (b == 0) return 0;
 *ret = 0;
 unsigned short mask = 0, len = 0;

 // calc the mask of b.
 for (int i = 15; i >= 0; i--) {
  if (b & (1 << i)) {
   mask = (1 << i) - 1;
   len = 16 - i;
   break;
  }
 }
 // complement mask.
 mask = ~mask;

 // left justify b.
 while ((~b) & (1 << 15)) b <<= 1;

 while (len--) {
  *ret <<= 1;
  if ((a & mask) >= b) {
   *ret |= 1;
   a -= b;
  }
  b >>= 1;
  mask >>= 1;
 }

 return 1;
}

And yes, it's a 16-bit division; the same applies to 32 or 64 bits as well. It really was one of the toughest things to write from scratch. Division is not as trivial as subtraction or multiplication. The trick eventually was to use the fact that we're dealing with binary division… oh well, I quit; get someone else to describe this one. Although I think there are many ways to implement it. I remember a friend telling me he didn't expect this kind of solution. I don't think, however, that this technique can be used for long division of bit arrays or really big numbers; maybe there's something faster.

By the way, this is an unsigned division. As far as I know, and I might be wrong, you can't do signed division without using unsigned division. To do signed division you just remember the signs of a and b, make them absolute, and then adjust the result accordingly. Anyway, if I'm wrong with my assumption, let me know.
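
For the record, the wrapper idea is only a few lines; here it is in Python (a sketch of the sign juggling only, piggybacking on a division that is already unsigned):

def signed_div(a, b):
    # Remember the signs, divide the absolute values, then fix the sign.
    if b == 0:
        return None
    negative = (a < 0) != (b < 0)      # the quotient is negative iff the signs differ
    q = abs(a) // abs(b)               # this is the unsigned division
    return -q if negative else q

print signed_div(-7, 2)    # -3, truncated toward zero like IDIV
print signed_div(7, -2)    # -3
print signed_div(-7, -2)   # 3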

Embedded Python And raw_input

February 25th, 2008

If you have ever messed with embedded Python you would know that you can't use the Win32 console so easily. For that you have to hack around a few bits in order to bind stdin and stdout with the Win32 ones. A familiar technique is to override sys.stdout with your own class that implements a 'write' method. So every time something goes to output, this method is called with an argument, which is then printed on the screen with whatever Win32 API you want, WriteFile for instance.
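
Roughly, the 'write' hook looks like this (sketched with plain ctypes here; in a real embedded app the WriteFile call usually lives on the C side and is just wrapped into Python):

import sys
import ctypes

STD_OUTPUT_HANDLE = -11

class ConsoleOut(object):
    # Everything that goes to stdout ends up in this method.
    def write(self, text):
        handle = ctypes.windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
        written = ctypes.c_ulong(0)
        ctypes.windll.kernel32.WriteFile(handle, text, len(text),
                                         ctypes.byref(written), None)

sys.stdout = ConsoleOut()
print "hello from the embedded interpreter"   # goes through ConsoleOut.write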

This same technique also works for stderr, which is used when one writes some invalid text that doesn't compile and an exception is thrown.

Having 'hooked' both stdout and stderr, we still have stdin out of the loop. In all the years that embedded Python has been around, I never encountered the problem that stdin doesn't work with raw_input(). Don't be confused: the way things work in embedded Python, it doesn't mean that the interactive interpreter is fed from stdin; it's a bit messier. In all examples of embedded Python you have to implement your own interactive interpreter, therefore you bypass stdin from the first moment. Hence, if you never used raw_input you would not know whether it's broken or not. So I want to thank a special friend for reporting this bug to me… ;)

The fix: to fix the problem I tried to replace sys.stdin with my own implementation of a class with a 'read' method (by guessing, because 'write' is stdout's). Doing so, I found out that the method name should actually be 'readline', according to raw_input's shouts at me. So I renamed the method, and the implementation calls a function that I wrote in C, wrapped in Python, which reads input from the console (using ReadFile(GetStdHandle(STD_INPUT_HANDLE), …)) and returns that input string raw. That did the trick and now raw_input works again.
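
In code, the stdin side ends up looking roughly like this (again a sketch with ctypes; in my case the actual reading is done by the C function mentioned above):

import sys
import ctypes

STD_INPUT_HANDLE = -10

def read_console_line():
    # Roughly what the C helper does: ReadFile on the console's input handle.
    handle = ctypes.windll.kernel32.GetStdHandle(STD_INPUT_HANDLE)
    buf = ctypes.create_string_buffer(1024)
    got = ctypes.c_ulong(0)
    ctypes.windll.kernel32.ReadFile(handle, buf, 1024, ctypes.byref(got), None)
    return buf.raw[:got.value]

class ConsoleIn(object):
    # raw_input() ends up calling sys.stdin.readline(), so that's the method to supply.
    def readline(self):
        return read_console_line()

sys.stdin = ConsoleIn()
# From now on raw_input() works again inside the embedded interpreter.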

I am pretty sure that all embedded Python users out there will need this hack if they already hack stdout…

Converting An Integer To Decimal – Assembly Style

February 13th, 2008

I know this is one of the most trivial things to implement in a high level language. In C it goes like this:

void print_integer(unsigned int x)
{
 /* recurse on the higher digits first, then print the lowest one. */
 if ((x / 10) != 0) print_integer(x / 10);
 putch((x % 10) + '0');
}

The annoying thing is that you have to divide twice and do a modulo once, which in reality can be merged into a single x86 instruction. Another thing is that if you want to print a proper result for an input of 0, you have to test the result of the division instead of simply checking x itself. The conversion is done from the least significant digit to the most, but when we display the result (or put it in a buffer) we have to reverse it; therefore the recursion is so handy here. This is my go at it in 16 bits. It's code that I wrote a few years ago, and I just decided I should put it here for reference. I have to admit that I happened to use this same code for 32 bits and even for different processors, and since it's so elegant it works well and is easy to port. But I leave that for you to judge ;)

bits 16
mov ax, 12345
call print_integer
xor ax, ax
call print_integer
ret

print_integer:
; base 10
push byte 10
pop bx
.next:
; make a 32-bit division, remainder in dx, quotient in ax
xor dx, dx
div bx
push dx ; push remainder
or ax, ax ; if the result is 0, we will stop recursing
jz .stop
call .next ; now this is the coolest twist ever, the IP that is pushed onto the stack…
.stop:
pop ax ; get the remainder (in reversed order)
add al, 0x30 ; convert it to a character
int 0x29 ; use (what used to be an undocumented) interrupt to print al
ret ; go back to 'stop' and read the next digit…

I urge you to compile the C code with full optimization and compare the codes for yourself.

Crunching 3 More Bytes

February 12th, 2008

I already uploaded the new version, whose size is 213 bytes. Download here.

The trick I used this time is quite unpopular, and I doubt any of you know it at all. A friend showed me this trick a year ago, and it only occurred to me last night to use it in Tiny PE itself.

C:\Documents and Settings\arkon>ping ragestorm.net
Pinging ragestorm.net [63.247.129.107] with 32 bytes of data:
Ctrl^c
python
>>> 107+129*256+247*256**2+63*256**3
1073185131
>>> len(str(_))
10
>>> len("ragestorm.net")
13
>>>

I guess you still didn’t get it :) So check this link out:
http://1073185131
The IP is converted to a decimal integer, which the net API knows and parses just fine…
Whooooo’s the biatch now?! http://0x3ff7816b/ also works… but it’s still 10 bytes. heh

BTW, I updated diSlib64 to be able to parse Tiny PE style files well… It's the only thing that parses them out of all the PE tools I've got. Though it still doesn't parse the forwarded export table. Will fix that in the next update ;)

Nop nop nop

February 9th, 2008

Two posts ago I talked about NOP in 64 bits. It was unclear whether 0x90 acts as a true NOP. Because if you really think about it, the way 64-bit processors work when executing 32-bit operations is to do the exchange between EAX and EAX and then zero the high dword of the touched register. This is why XOR EAX, EAX is similar to XOR RAX, RAX… only in 64 bits!

So in that post some guy (or girl?) wrote in a comment that exchanging two registers using 0x90 is possible when you use a REX prefix. The REX prefix is like the 0x66 of the new 64-bit code: it lets you access more registers and indicates the instruction may run with an operand size of 64 bits rather than 32 bits (which is the default, hence 0x90 in 64 bits is supposedly XCHG EAX, EAX).

The documentation is not readable for shit. But that might be because it's just me… Let's put it this way though: after I wrote a disassembler and read the docs so many times regarding instructions and whatnot, if I don't understand it, then who should? FFS.

Talking with Stefan and Peter (the guy behind YASM), we got the issue cleared up for both processors (Intel/AMD). I just want to state that Stefan came up with the whole problem and then Peter did some tests too on his end. So thank you both :) We now know that XCHG R8, RAX can be encoded in two ways, but we will focus on the one with the byte code of 0x90. The byte code for that instruction is: 0x49 0x90. 0x49 for Width (64-bit operand size) and Base (access R8), and 0x90 for the XCHG. Together it really works! Likewise, 0x41 0x90 works as XCHG R8D, EAX (clearing the high dword though…).

diStorm treated this erroneously, giving an output of:
DB 0x49
NOP

to indicate that the 0x49 prefix wasn't used; it was (what I call) DB'ed. So I had to change the behavior of this one, and it wasn't so trivial, because 0x90 can't just say "Hey, I'm NOP from now on and always was". Now it's up to the prefixes to decide whether 0x90 is XCHG or NOP, a runtime detection. The static DB of instructions can't help here. In addition, don't forget that 0xF3 0x90 is PAUSE, which is a completely different instruction, just to give another example of prefixes changing 0x90…
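
Spelled out, the runtime decision for 0x90 looks something like this little table of cases (a Python toy, not diStorm's actual decoder):

def meaning_of_0x90(rex=0, rep_f3=False):
    # What a lone 0x90 opcode means, given the prefixes decoded before it.
    if rep_f3:
        return "PAUSE"                      # F3 90
    if rex & 1:                             # REX.B -> the other operand is R8/R8D
        if rex & 8:                         # REX.W -> 64-bit operand size
            return "XCHG R8, RAX"           # 49 90
        return "XCHG R8D, EAX"              # 41 90 (still clears the high dword)
    return "NOP"                            # plain 90, etc.

print meaning_of_0x90(rex=0x49)        # XCHG R8, RAX
print meaning_of_0x90(rex=0x41)        # XCHG R8D, EAX
print meaning_of_0x90(rep_f3=True)     # PAUSE
print meaning_of_0x90()                # NOP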

Last Version Hopefully

February 9th, 2008

of Tiny PE that is. 216 bytes – over here.

Some juicy details of the new tricks:

1. I had a gap of 4 bytes between two code chunks. 4 bytes is a 32-bit immediate. So instead of jumping over these 4 bytes (which are a pointer to the import data directory), I just skip over them using something stupid like MOV EDX, <DWORD>, where <DWORD> is a placeholder for the pointer. This way I destroy the value in EDX, but I chose EDX because I don't use it anyway. I end up saving a byte this way, because a JMP is 2 bytes long and the opcode for the MOV is 1 byte.

2. Another problem was that the opcode of the MOV was the same data as part of the export data directory size, which posed a limitation on the values I could put there. To make a long story short, I changed the base address of my image, and when adding the size field of the directory (the byte code for the MOV is the high byte of the size DWORD field) to the new base address there is no overflow, and the exports continue to work well…

3. Immediately following the 'Number Of Sections' field I had the decryption-loop code. In the beginning I used XOR for simplicity, but now it's changed to ADD R/M8, REG8. This is because that field is two bytes (a WORD) with the value of 1, meaning the image contains only one section, and the second, high byte is 0. The ADD byte code is 0. Do you follow the math? So now I reuse this 0 byte as code for my decryption loop. Cool as it is, I had to change the code which encrypts the data in the image. XORing is really easy to do, but now my instruction looks something like this: add byte [ebx+ecx+off], bl. Mind you, I know the value in BL at static time. Therefore I had to change the algorithm. Let the plain-text byte be P and the value of BL be B; we get the following: Cipher = (P+256-B) & 255 for each byte we want to encrypt. The trick here is the modulo base: since we work with byte units (and they are eventually truncated to 8 bits), we have to consider the add together with the modulo. Decryption is simply: P = (Cipher + B) % 256 (see the little sketch after this list). Anyway, this way I saved one byte and it was good practice with modulo stuff… ;)

4. Another byte was shaved by changing the URL of the image I download and execute, from:

http ://ragestorm.net/f.DLL
to
http ://ragestorm.net/kernel32.DLL

Naughty? I need kernel32.DLL anyway, so this way I can remove the 'f' and put "kernel32" instead. So yes, now you're downloading a file called 'kernel32.DLL' from my site, though it's being saved as something else locally… Ahh, BTW, this way the image should be Win2000 compliant, but I can't check it out. Any volunteers?
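
And regarding trick #3, the encryption math is easier to see in a couple of lines (a quick Python check with a made-up value for BL, not the actual loader code):

B = 0x40   # stand-in for the value known to be in BL at static time

def encrypt(data):
    # Cipher = (P + 256 - B) & 255, per byte.
    return [(p + 256 - B) & 255 for p in data]

def decrypt(data):
    # The decryption loop only does ADD r/m8, reg8: P = (Cipher + B) % 256.
    return [(c + B) % 256 for c in data]

payload = [0x90, 0x00, 0x7F, 0xFF]
assert decrypt(encrypt(payload)) == payload
print encrypt(payload)   # the bytes that actually go into the image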

I wanted to release the code a month ago, but since then I have changed it so many times. I think this is my final version for now :grins:. So I will prettify the code a bit, document some stuff and release it.