Archive for the ‘General x86-64’ Category

Heap Tracking

December 23rd, 2015

This post covers finding and inspecting differences in a process heap over time. It covers two techniques: a non-invasive one that iterates and copies heap entries from a separate process, and an invasive one that uses dynamic binary instrumentation to track all heap writes. Heap tracking is useful if you want to monitor large-scale changes in an application over time. For example, you can look at the state of the heap to see which data structures were modified after pressing a button or performing some other complex action.

Non-invasive Heap Diffing

The non-invasive technique relies on remotely reading every allocated heap block in a target process and copying the bytes to the inspecting process. Once this iteration is done, a snapshot of the heap has been created and can be accurately diffed against another snapshot taken at a later point in time to see how the heap state changed. The traversal is accomplished with the Heap32ListFirst/Heap32ListNext and Heap32First/Heap32Next functions from the Toolhelp API. The traversal code is shown below:

const Heap EnumerateProcessHeap(const DWORD processId, const HANDLE processHandle)
{
    HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPHEAPLIST, processId);
    if (snapshot == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, "Could not create toolhelp snapshot. "
            "Error = 0x%X\n", GetLastError());
        exit(-1);
    }
 
    Heap processHeapInfo;
 
    (void)NtSuspendProcess(processHandle);
 
    size_t reserveSize = 4096;
    std::unique_ptr<unsigned char[]> heapBuffer(new unsigned char[reserveSize]);
 
    HEAPLIST32 heapList = { 0 };
    heapList.dwSize = sizeof(HEAPLIST32);
    if (Heap32ListFirst(snapshot, &heapList))
    {
        do
        {
            HEAPENTRY32 heapEntry = { 0 };
            heapEntry.dwSize = sizeof(HEAPENTRY32);
 
            if (Heap32First(&heapEntry, processId, heapList.th32HeapID))
            {
                do
                {
                    if (IsReadable(processHandle, heapEntry.dwAddress, heapEntry.dwSize))
                    {
                        ReadHeapData(processHandle, heapEntry.dwAddress, heapEntry.dwSize,
                            processHeapInfo, heapBuffer, reserveSize);
                    }
 
                    heapEntry.dwSize = sizeof(HEAPENTRY32);
                } while (Heap32Next(&heapEntry));
            }
 
            heapList.dwSize = sizeof(HEAPLIST32);
        } while (Heap32ListNext(snapshot, &heapList));
    }
 
    (void)NtResumeProcess(processHandle);
 
    (void)CloseHandle(snapshot);
 
    return processHeapInfo;
}

For every heap list and each of its heap entries, the heap block is read and its byte contents are stored as address -> byte pairs. The remote read is just a wrapper around ReadProcessMemory:

void ReadHeapData(const HANDLE processHandle, const DWORD_PTR heapAddress, const size_t size, Heap &heapInfo,
    std::unique_ptr<unsigned char[]> &heapBuffer, size_t &reserveSize)
{
    if (size > reserveSize)
    {
        heapBuffer = std::unique_ptr<unsigned char[]>(new unsigned char[size]);
        reserveSize = size;
    }
 
    SIZE_T bytesRead = 0;
    const BOOL success = ReadProcessMemory(processHandle, (LPCVOID)heapAddress, heapBuffer.get(), size, &bytesRead);
 
    if (success == 0)
    {
        fprintf(stderr, "Could not read process memory at 0x%p "
            "Error = 0x%X\n", (void *)heapAddress, GetLastError());
        return;
    }
    if (bytesRead != size)
    {
        fprintf(stderr, "Could not read process all memory at 0x%p "
            "Error = 0x%X\n", (void *)heapAddress, GetLastError());
        return;
    }
 
    for (size_t i = 0; i < size; ++i)
    {
        heapInfo.emplace_hint(std::end(heapInfo), std::make_pair((heapAddress + i), heapBuffer[i]));
    }
}
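
The Heap and HeapDiff types themselves are never shown in the post. A plausible definition, assuming ordered containers keyed by address (which matches the emplace_hint usage in the code above and below), could look like the following:

#include <map>
#include <utility>
#include <Windows.h>

//Assumed definitions: address -> byte for a snapshot,
//and address -> (old byte, new byte) for a diff
using Heap = std::map<DWORD_PTR, unsigned char>;
using HeapDiff = std::map<DWORD_PTR, std::pair<unsigned char, unsigned char>>;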

At this point a snapshot of the heap has been created. A screenshot of an example run showing the address -> byte pairs is below.

heapdiff

The next part is to take another snapshot at a later point in time and begin diffing the heaps. Diffing the heaps involves three scenarios: when a heap entry at the same address has changed, when an entry was removed (in first snapshot but not in second), and when a new allocation was made (in second heap snapshot but not in first). The code is pretty straightforward and performs a search and compare in the first heap against the second heap.

const HeapDiff GetHeapDiffs(const Heap &firstHeap, Heap &secondHeap)
{
    HeapDiff heapDiff;
 
    for (auto &heapEntry : firstHeap)
    {
        auto &secondHeapEntry = std::find_if(std::begin(secondHeap), std::end(secondHeap),
            [&](const std::pair<DWORD_PTR, unsigned char> &entry) -> bool
        {
            return entry.first == heapEntry.first;
        });
 
        if (secondHeapEntry != std::end(secondHeap))
        {
            if (heapEntry.second != secondHeapEntry->second)
            {
                //Entries in both heaps but are different
                heapDiff.emplace_hint(std::end(heapDiff),
                    heapEntry.first, std::make_pair(heapEntry.second, secondHeapEntry->second));
            }
            secondHeap.erase(secondHeapEntry);
        }
        else
        {
            //Entries in first heap and not in second heap
            heapDiff.emplace_hint(std::end(heapDiff),
                heapEntry.first, std::make_pair(heapEntry.second, heapEntry.second));
        }
    }
 
    for (auto &newEntries : secondHeap)
    {
        //Entries in second heap and not in first heap
        heapDiff.emplace_hint(std::end(heapDiff),
            newEntries.first, std::make_pair(newEntries.second, newEntries.second));
    }
 
    return heapDiff;
}

A screenshot post-diff is shown below:

heapdiff1

Looking at the above example, you can see that the bytes at heap address 0x003F0200 changed from 0x2B to 0x57, among many others. The last step is to merge contiguous blocks to make the output simpler. That code is omitted here, but a final screenshot below shows the merged structure of the heap diff.

heapdiff2
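
The omitted merging step is straightforward. A minimal sketch, assuming HeapDiff is an ordered map of address -> (old byte, new byte) as above, might look like the following; the DiffBlock type and function name are illustrative only:

#include <map>
#include <utility>
#include <vector>
#include <Windows.h>

struct DiffBlock
{
    DWORD_PTR startAddress;
    std::vector<unsigned char> oldBytes;
    std::vector<unsigned char> newBytes;
};

std::vector<DiffBlock> MergeContiguousDiffs(const HeapDiff &heapDiff)
{
    std::vector<DiffBlock> blocks;

    for (const auto &entry : heapDiff)
    {
        //Start a new block unless this address directly follows the previous one
        if (blocks.empty() ||
            entry.first != blocks.back().startAddress + blocks.back().oldBytes.size())
        {
            blocks.push_back(DiffBlock{ entry.first, {}, {} });
        }

        blocks.back().oldBytes.push_back(entry.second.first);
        blocks.back().newBytes.push_back(entry.second.second);
    }

    return blocks;
}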

The diff can be inspected for anything deemed interesting and can aid in reverse engineering an application. For example, to see where text is drawn in a text editor, you can write some text in the editor and take a snapshot:

heapdiff3

Prior to taking a second snapshot, change some of the text around and inspect the heap differences. For this example, some AA's were changed to BB.

heapdiff4

The heap contents beginning at 0x0079201C contained the text and were noted as changing from A -> B. Attaching a debugger and setting an on-write breakpoint at 0x0079201C showed an access from 0x00402CA5, a rep movs instruction responsible for copying the text into the buffer to be drawn.

heapdiff5

heapdiff6

The usefulness of this technique obviously depends on the desired data residing in the process heap.

Invasive Heap Diffing

The technique described above is useful because it does not disturb the process state, aside from suspending and resuming it. The inspecting process has no access to the address space of the target process and performs all of its actions remotely. This next technique uses Intel’s Pin dynamic binary instrumentation platform to instrument a target process and monitor only heap writes. This means that, unlike the previous technique, the entire state of the heap does not need to be tracked. Pin allows for tracking of memory writes in a process, among many other things. Pin is injected as a DLL into a process, so all code written within it will have access to the process address space. That means that instead of traversing heap lists and heap entries, the HeapWalk function can be used directly to get all valid heap addresses.

In the example, all current heap addresses are kept in a std::set container. These are retrieved when the DLL is loaded into the process and instrumentation begins:

void WalkHeaps(WinApi::HANDLE *heaps, const size_t size)
{
    using namespace WinApi;
 
    fprintf(stderr, "Walking %i heaps.\n", size);
 
    for(size_t i = 0; i < size; ++i)
    {
        if(HeapLock(heaps[i]) == FALSE)
        {
            fprintf(stderr, "Could not lock heap 0x%X"
                "Error = 0x%X\n", heaps[i], GetLastError());
            continue;
        }
 
        PROCESS_HEAP_ENTRY heapEntry = { 0 };
        heapEntry.lpData = NULL;
        while(HeapWalk(heaps[i], &heapEntry) != FALSE)
        {
            for(size_t j = 0; j < heapEntry.cbData; ++j)
            {
                heapAddresses.insert(std::end(heapAddresses),
                    (DWORD_PTR)heapEntry.lpData + j);
            }
        }
 
        fprintf(stderr, "HeapWalk finished with 0x%X\n", GetLastError());
 
        if(HeapUnlock(heaps[i]) == FALSE)
        {
            fprintf(stderr, "Could not unlock heap 0x%X"
                "Error = 0x%X\n", heaps[i], GetLastError());
        }
    }
 
    size_t numHeapAddresses = heapAddresses.size();
    fprintf(stderr, "Found %zu (0x%zX) heap addresses.\n",
        numHeapAddresses, numHeapAddresses);
 
}

An instrumentation function is then registered, which Pin calls for every instruction it encounters:

INS_AddInstrumentFunction(OnInstruction, 0);

The OnInstruction function checks whether the instruction being instrumented is a memory write. If it is, a call to our inspection function is inserted before the instruction and invoked every time it executes. A sketch of what this might look like is shown below, followed by the inspection function itself, which checks whether the address being written to is in the heap and logs it if so.
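
The sketch below uses standard Pin instrumentation calls; the exact arguments the original tool passes (and whether it uses a predicated call) are assumptions.

VOID OnInstruction(INS ins, VOID *v)
{
    //Only instrument instructions that write to memory
    if (INS_IsMemoryWrite(ins))
    {
        //Call the inspection function before the write executes, passing the
        //instruction pointer and the effective address being written to
        INS_InsertPredicatedCall(ins, IPOINT_BEFORE,
            (AFUNPTR)OnMemoryWriteBefore,
            IARG_INST_PTR,
            IARG_MEMORYWRITE_EA,
            IARG_END);
    }
}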

VOID OnMemoryWriteBefore(VOID *ip, VOID *addr)
{
    if(IsInHeap(addr))
    {
        fprintf(trace, "Heap entry 0x%p has been modified.\n", addr);
    }
}

Testing this is pretty simple; create a simple application that allocates some data on the heap and performs constant writes to it:

int main(int argc, char *argv[])
{
    int *heapData = new int;
    *heapData = 0;
 
    fprintf(stdout, "Heap address: 0x%p", heapData);
 
    while(true)
    {
        *heapData = (*heapData + 1) % INT_MAX;
        Sleep(500);
    }
 
    return 0;
}

Running the instrumentation against a compiled version of the code above produces the following output, showing successful instrumentation and heap tracking:

heapdiff9

Heap entry 0x007B4B58 has been modified.
Heap entry 0x007B4B6C has been modified.
Heap entry 0x007B4B70 has been modified.
Heap entry 0x007B4B68 has been modified.
Heap entry 0x007B27C8 has been modified.
Heap entry 0x007B27C8 has been modified.
Heap entry 0x007B27C8 has been modified.
Heap entry 0x007B27C8 has been modified.
Heap entry 0x007B27C8 has been modified.
...

The Pin framework provides a lot more functionality than what is covered in the example code. The code could be expanded further to disassemble the writing instruction and to capture the current heap value and the value that will be written, as in the first example.
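
As a small illustration of that direction, the hedged sketch below captures the bytes at the target address before they are overwritten. It assumes the instrumentation call additionally passes IARG_MEMORYWRITE_SIZE; capturing the value after the write would require additional after-write instrumentation.

VOID OnMemoryWriteBefore(VOID *ip, VOID *addr, UINT32 size)
{
    if (IsInHeap(addr))
    {
        //Copy the bytes that are about to be overwritten (capped for brevity)
        unsigned char oldBytes[8] = { 0 };
        size_t copySize = (size < sizeof(oldBytes)) ? size : sizeof(oldBytes);
        PIN_SafeCopy(oldBytes, addr, copySize);

        fprintf(trace, "Heap entry 0x%p (%u bytes) modified by instruction at 0x%p. "
            "First old byte: 0x%02X\n", addr, size, ip, oldBytes[0]);
    }
}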

Final Notes

This post presented a couple of techniques for finding differences in process heaps. The code shows basic examples but has some scaling issues: a 100MB heap diff takes about 15 minutes with the current implementation due to the large number of lookups. The code should serve as a good starting point to build on if the target application allocates a large amount of heap space.
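
As one concrete example of such an improvement, assuming Heap and HeapDiff are ordered maps keyed by address, the diff loop can use the map's own logarithmic lookup instead of the linear std::find_if, which removes the dominant cost in GetHeapDiffs:

const HeapDiff GetHeapDiffsFast(const Heap &firstHeap, Heap &secondHeap)
{
    HeapDiff heapDiff;

    for (const auto &heapEntry : firstHeap)
    {
        //Logarithmic lookup by address instead of a linear search
        auto secondHeapEntry = secondHeap.find(heapEntry.first);
        if (secondHeapEntry != std::end(secondHeap))
        {
            if (heapEntry.second != secondHeapEntry->second)
            {
                heapDiff.emplace_hint(std::end(heapDiff), heapEntry.first,
                    std::make_pair(heapEntry.second, secondHeapEntry->second));
            }
            secondHeap.erase(secondHeapEntry);
        }
        else
        {
            heapDiff.emplace_hint(std::end(heapDiff), heapEntry.first,
                std::make_pair(heapEntry.second, heapEntry.second));
        }
    }

    for (const auto &newEntry : secondHeap)
    {
        heapDiff.emplace_hint(std::end(heapDiff), newEntry.first,
            std::make_pair(newEntry.second, newEntry.second));
    }

    return heapDiff;
}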

Code

The Visual Studio 2015 project for this example can be found here. The source code is viewable on Github here. Thanks for reading and follow on Twitter for more updates.

Runtime DirectX Hooking

December 14th, 2015

This post covers hooking DirectX in a running application, DirectX9 specifically, though the general technique applies to any version. A previous and similar post covered virtual table hooking for DirectX10 and DirectX11 (with minor adjustments). Unlike that post, this one aims to establish a technique for hooking DirectX applications that are already running. This means the hook can be installed at any time, unlike the previous technique, which required starting the process in a suspended state and then hooking to get the device pointer.

Motivations

The motivations are similar to the previous post. By hooking the DirectX device, we can inspect or change the properties of rendered scenes (i.e. depth testing, object colors), overlay text or images, better display visual information, or do anything else with the scene. However, to achieve anything beyond the basics, it also takes a lot of effort to reverse engineer the actual application; simply having access to the rendered scene won't get you too far.

maxresdefault

An example of DirectX hooking used to give certain models a bright color and to allow seeing models through objects that would normally obstruct the view.

SC2Console

An example of outputting reverse engineered data from a client and overlaying it as text in the application. This is a pretty awesome project whose description and source code are available here.

Techniques

Typically when hooking DirectX, there are several popular options:

  • Hook IDirect3D9::CreateDevice and store the IDirect3DDevice9 pointer that is initialized when the function returns successfully. This needs to be done when the process is started in a suspended state, otherwise the device will have already been initialized.
  • Perform a byte pattern scan in memory for the signature of IDirect3DDevice9::EndScene, or any other DirectX function.
  • Create a dummy IDirect3DDevice9 instance, read its virtual table, find the address of EndScene, and hook at the target site.
  • Look for the CD3DBase::EndScene symbol in d3d9.dll and get its address.

Each one has its drawbacks, but my personal preference is the last option. It’s the one that offers the greatest reliability for the least amount of overhead code. The code for it is pretty straightforward, with the help of the Windows debugging APIs:

const DWORD_PTR GetAddressFromSymbols()
{
    BOOL success = SymInitialize(GetCurrentProcess(), nullptr, true);
    if (!success)
    {
        fprintf(stderr, "Could not load symbols for process.\n");
        return 0;
    }
 
    SYMBOL_INFO symInfo = { 0 };
    symInfo.SizeOfStruct = sizeof(SYMBOL_INFO);
 
    success = SymFromName(GetCurrentProcess(), "d3d9!CD3DBase::EndScene", &symInfo);
    if (!success)
    {
        fprintf(stderr, "Could not get symbol address.\n");
        return 0;
    }
 
    return (DWORD_PTR)symInfo.Address;
}
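
For comparison, a rough sketch of the dummy-device approach from the list above is shown below. It assumes a window handle is available and that EndScene sits at index 42 of the IDirect3DDevice9 virtual table; that index should be verified against the d3d9.h declaration order of the SDK in use.

#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

const DWORD_PTR GetEndSceneFromDummyDevice(const HWND hWnd)
{
    IDirect3D9 *pD3D = Direct3DCreate9(D3D_SDK_VERSION);
    if (pD3D == nullptr)
    {
        return 0;
    }

    D3DPRESENT_PARAMETERS params = { 0 };
    params.Windowed = TRUE;
    params.SwapEffect = D3DSWAPEFFECT_DISCARD;
    params.hDeviceWindow = hWnd;

    IDirect3DDevice9 *pDevice = nullptr;
    HRESULT result = pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
        D3DCREATE_SOFTWARE_VERTEXPROCESSING, &params, &pDevice);
    if (FAILED(result) || pDevice == nullptr)
    {
        (void)pD3D->Release();
        return 0;
    }

    //The virtual table is the first pointer-sized member of the COM object;
    //index 42 is assumed to be EndScene for IDirect3DDevice9
    DWORD_PTR *pVtable = *(DWORD_PTR **)pDevice;
    const DWORD_PTR endSceneAddress = pVtable[42];

    (void)pDevice->Release();
    (void)pD3D->Release();

    return endSceneAddress;
}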

Once the address is retrieved, it’s simply a matter of installing the hook and writing code in the new hook function. The Hekate engine was used for hook installation/removal, making the code simple:

const bool Hook(const DWORD_PTR address, const DWORD_PTR hookAddress)
{
    pHook = std::unique_ptr<Hekate::Hook::InlineHook>(new Hekate::Hook::InlineHook(address, hookAddress));
 
    if (!pHook->Install())
    {
        fprintf(stderr, "Could not hook address 0x%X -> 0x%X\n", address, hookAddress);
    }
 
    return pHook->IsHooked();
}

The EndScene function was chosen specifically due to how DirectX9 applications are developed. For those unfamiliar with DirectX, the flow of rendering a scene generally goes as follows: BeginScene -> Draw the scene -> EndScene -> Present. Other DirectX9 hook implementations hook Present instead of EndScene; it is largely a matter of preference unless the target application does something special. In the example application, some text is overlaid on top of the scene:

HRESULT WINAPI EndSceneHook(void *pDevicePtr)
{
    using pFncOriginalEndScene = HRESULT (WINAPI *)(void *pDevicePtr);
    pFncOriginalEndScene EndSceneTrampoline =
        (pFncOriginalEndScene)pHook->TrampolineAddress();
 
    IDirect3DDevice9 *pDevice = (IDirect3DDevice9 *)pDevicePtr;
    ID3DXFont *pFont = nullptr;
 
    HRESULT result = D3DXCreateFont(pDevice, 30, 0, FW_NORMAL, 1, false,
        DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, ANTIALIASED_QUALITY,
        DEFAULT_PITCH | FF_DONTCARE, L"Consolas", &pFont);
    if (FAILED(result))
    {
        fprintf(stderr, "Could not create font. Error = 0x%X\n", result);
    }
    else
    {
        RECT rect = { 0 };
        (void)SetRect(&rect, 0, 0, 300, 100);
        int height = pFont->DrawText(nullptr, L"Hello, World!", -1, &rect,
            DT_LEFT | DT_NOCLIP, -1);
        if (height == 0)
        {
            fprintf(stderr, "Could not draw text.\n");
        }
        (void)pFont->Release();
    }
 
    return EndSceneTrampoline(pDevicePtr);
}

Building as a DLL and injecting into the running application should show the text overlay (below):

sampleimgdx9

Hekate supports clean unhooking, so unloading the DLL should remove the text and let the application continue undisturbed.

Code

The Visual Studio 2015 project for this example can be found here. The source code is viewable on Github here. The Hekate static library dependency is included in a separate download here and goes into the DirectXHook/lib folder. Capstone Engine is used as a runtime dependency, so capstone_x86.dll/capstone_x64.dll in DirectXHook/thirdparty/capstone/lib should be put in the same directory that the target application is running from.

Thanks for reading and follow on Twitter for more updates

Hekate: x86/x64 Winsock Inspection/Modification (Alpha dev release)

September 9th, 2015

Introduction

This post will cover Hekate, a C++ library for interacting with Winsock traffic occurring in a remote process. The purpose of the library is to provide an easy-to-use interface that allows for inspection, filtering, and modification of any Winsock traffic entering or leaving a target process. Hekate aims to simplify targeted collection of data, aid in reverse engineering protocols, and potentially provide basic security auditing by letting developers fuzz, modify, or replay data being sent to their process.

What it is

Hekate is provided as a set of components that come together to hook and exfiltrate Winsock data. The final build of the project is a DLL that is injected into the target process. The project includes the following:

  1. A generic thread-safe x86/x64 inline hooking engine powered by Capstone Engine, usable for any function hooking (not just Winsock)
  2. IPC based on named pipes to allow sending data to a remote listening process
  3. Winsock specific hooks responsible for matching parameters against filters and taking appropriate action
  4. Several example projects showing the hook, filter, and modify functionality provided by the library
  5. RAII wrappers around Capstone Engine and Windows API objects that automatically clean up resources once they are no longer needed

These components are combined into the Hekate “app”, which is responsible for handling incoming commands that clients issue and sending captured data out to them.

Architecture

The injected Hekate.dll functions as a server that listens for a client connection to send data out to (once established). The messages exchanged between client and server use Protocol Buffers and are defined in the .proto files contained in the source code. There are eight Winsock functions that are currently being monitored: send/sendto/WSASend/WSASendTo/recv/recvfrom/WSARecv/WSARecvFrom, as provided by the Winsock API. For each of these functions, there is a corresponding protobuf message; the hook copies the parameters into it, serializes it, and sends it out to a client. These messages can be found in the HekateServerProto.proto file:

message SendMessage_
{
	required int64 socket = 1;
	required bytes buffer = 2;
	required int32 length = 3;
	required int32 flags = 4;
}
...
message WSARecvFromMessage
{
	required int64 socket = 1;
	repeated int64 buffers = 2;
	repeated int32 buffer_size = 3;
	required int32 count = 4;
	required int64 bytes_received_address = 5;
	required int64 flags_address = 6;
	required int64 from_address = 7;
	required int64 from_length_address = 8;
	required int64 overlapped_address = 9;
	required int64 overlapped_routine_address = 10;
}

A client is responsible for receiving and deserializing these messages, and for issuing commands to the server. As of this dev release, the following commands are supported:

  • Add/Remove a hook on a Winsock function
  • Add/Remove a filter for Winsock data.
  • Pause/Continue execution on filter hit
  • Replay captured data

The client is able to send these commands immediately after the connection to the server is established. Commands are processed synchronously as they are received. The client protocol also provides a debug acknowledge flag that the server will echo back upon successful receipt of the message (for testing). Additionally, there is copious logging throughout the code to notify developers of any errors that might occur at any stage of usage. The full client-side protocol definition can be found in the HekateClientProto.proto file.

Internals: The Receive & Dispatch Loop

On startup, Hekate initializes two named pipes: \\.\pipe\HekatePipeOutbound and \\.\pipe\HekatePipeInbound. Outgoing messages will be sent on HekatePipeOutbound, and incoming commands will be listened for on HekatePipeInbound. The server will wait for connections on both of these pipes and then spawn a new thread to listen for messages from the client. The incoming/outgoing message format is currently broken into two parts: a 4-byte size, followed by the serialized protobuf message of that size. Upon receipt by the server, the message is deserialized and passed to a callback provided by the app. This callback is responsible for parsing and dispatching the message. The flow of incoming messages is IPCNamedPipe::RecvLoop -> HekateMain::RecvCallback -> IMessageHandler::Parse -> HekateMessageHandler::On{Command}Message -> HekateMain::{Command}.

A client talking to the Hekate server mimics this communication behavior closely. A client must open \\.\pipe\HekatePipeOutbound with generic read access and \\.\pipe\HekatePipeInbound with generic write access. Upon the pipe connection being established, the client is free to begin sending commands and listening for responses using the {size} -> {message} scheme described above.
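
As a short illustration of that framing from the client side, a minimal sketch is shown below. It assumes the serialized protobuf message is already available as a string; the helper name is illustrative only.

#include <Windows.h>
#include <cstdint>
#include <string>

bool SendFramedMessage(const HANDLE hPipeIn, const std::string &serializedMessage)
{
    //First write the 4-byte size of the serialized message
    uint32_t messageSize = static_cast<uint32_t>(serializedMessage.size());
    DWORD bytesWritten = 0;
    if (!WriteFile(hPipeIn, &messageSize, sizeof(messageSize), &bytesWritten, nullptr))
    {
        return false;
    }

    //Then write the serialized protobuf message itself
    return WriteFile(hPipeIn, serializedMessage.data(), messageSize,
        &bytesWritten, nullptr) != FALSE;
}

//Usage: open the inbound command pipe with generic write access, e.g.
//HANDLE hPipeIn = CreateFileA("\\\\.\\pipe\\HekatePipeInbound", GENERIC_WRITE,
//    0, nullptr, OPEN_EXISTING, 0, nullptr);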

Dynamically Add and Remove Hooks

As mentioned above, Hekate comes with a generic x86/x64 inline hooking engine. On startup, Hekate will locate the target Winsock functions mentioned earlier. Once these are located, the dynamic hooking process can be carried out. When installing a hook, Hekate will disassemble the target function in order to find the appropriate amount of space needed. These instructions are then relocated to a newly allocated region of memory, followed by a jump back to the original function. A comprehensive example is shown below:

h0

The original bytes of a send function.

h1

The appropriate amount of space has been calculated for an x86 hook. The bytes have been replaced with a push <target address> -> ret style detour. Extra bytes are padded with int 3 (breakpoint) instructions.

h11

The hook function at 0x30BA140 is now being invoked instead when send is called.

h2

At the end of the hook function, the relocated bytes (located here at 0x620000) are called. These contain the original instructions that were relocated, followed by a jump back to immediately after the hook in the send function.

This same exact technique is performed for x64 code as well. When a hook is removed, these relocated bytes are written back to the address of the original function and the memory holding the relocated instructions is freed. In an attempt to ensure safe installation and removal, Hekate will suspend all threads (except its own), write the instructions to the process memory, flush the instruction cache, then resume the process threads. An important note is that no hooking takes place on startup or at any point without an explicit command from the client. If the Hekate server DLL is injected into a target, then all it will do is listen for connections; nothing in the original target process will be modified.
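
A minimal sketch of the x86 push <address> -> ret patch described above, as applied from inside the target process, is shown below. Thread suspension and instruction relocation are omitted, only the byte patching and cache flush are shown, and the function name is illustrative:

#include <Windows.h>
#include <cstring>

bool WriteDetour(void *pTargetFunction, const void *pHookFunction, const size_t patchSize)
{
    //push imm32 (0x68) followed by ret (0xC3) takes 6 bytes
    if (patchSize < 6)
    {
        return false;
    }

    unsigned char detour[6] = { 0x68, 0x00, 0x00, 0x00, 0x00, 0xC3 };
    const DWORD hookAddress = (DWORD)(DWORD_PTR)pHookFunction;
    std::memcpy(&detour[1], &hookAddress, sizeof(hookAddress));

    DWORD oldProtection = 0;
    if (!VirtualProtect(pTargetFunction, patchSize, PAGE_EXECUTE_READWRITE, &oldProtection))
    {
        return false;
    }

    //Pad the full patch area with int 3 (0xCC), then write the detour bytes
    std::memset(pTargetFunction, 0xCC, patchSize);
    std::memcpy(pTargetFunction, detour, sizeof(detour));

    (void)VirtualProtect(pTargetFunction, patchSize, oldProtection, &oldProtection);
    (void)FlushInstructionCache(GetCurrentProcess(), pTargetFunction, patchSize);

    return true;
}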

Internals: Adding a Hook

The Hekate client protocol describes an easy way to add/remove hooks. Clients simply need to send a message to the server with the name of the desired function to hook/unhook, e.g. “send“, “WSARecv“, and so on. These messages (and more) are in HekateClientProto.proto:

message AddHookMessage
{
	required string name = 1;
}

message RemoveHookMessage
{
	required string name = 1;
}

The flow follows the read/dispatch loop until it reaches HekateMain::AddHook, which is responsible for installing the hook and reporting success/failure. The full flow of the code is HekateMain::AddHook -> HookEngine::Add -> HookBase::Install -> InlineHook::InstallImpl -> InlinePlatformHook::Hook, followed by platform specific calls to InlinePlatformHook_x86::HookImpl/InlinePlatformHook_x64::HookImpl depending on the build. Removing a hook follows a similar path through the files, calling the Unhook/Remove functions instead.

Add and Remove Filters

Hekate allows for filtering of incoming and outgoing Winsock data. Currently there are three supported filter types: byte, length, and substring. Byte filters match against byte(s) found at specific locations in the packet data. Length filters match against packet length being less than, equal to, or greater than a particular size. Lastly, substring filters match against a sequential series of bytes at any location in the packet. Filters are also what allow for manipulation of the data matched against them; you can substitute parts of a message or replace it altogether. Currently filters are matched in a queue: the first filter added will be the first one matched against, the second one will be second, and so on. There are future plans to add priority to filters, but this dev release does not contain it. Filters also come with a “break-on-hit” flag that allows the thread calling the target Winsock function to halt when the filter is hit. A client is responsible for sending a continue message to resume execution.

Internals: Adding a Filter and Matching

Adding a filter is initiated entirely on the client-side. The client specifies the match/substitute/replace parameters and forwards this information to the Hekate server, where a new filter of the appropriate type will be created and stored. As an example, below is some sample code that adds a filter and is found in the test filter project provided with the source code:

    auto firstFilter = Hekate::Protobuf::Message::HekateClientMessageBuilder::CreateSubstringFilterMessage(0x123,
        false, "first", 5);
    int replacementIndices1[] = { 12, 13, 14, 15, 16 };
    Hekate::Protobuf::Message::HekateClientMessageBuilder::AddSubstituteMessage(firstFilter, "QWERT", replacementIndices1, 5);
    WriteMessage(hPipeIn, firstFilter);

Here a substring filter with id 0x123 is created that looks for the substring “first”. It will substitute “QWERT” into the packet data at indices [12, 16] if the filter is matched. With this filter, a packet with data = “This is the first buffer” will be matched and modified to read “This is the QWERT buffer“. The replacement indices do not need to match the indices of where the matched data was originally. Using replacement indices [0, 4] on the original message gives “QWERTis the first buffer“.

On the server side, a queue of filters is kept. As mentioned, this queue is processed in the order that filters were created. Filters are initially created and added in HekateMain::AddFilter. For every hook function, i.e. WinsockHooks::SendHook, WinsockHooks::SendToHook, …, WinsockHooks::WSARecvFromHook, the buffer(s) is taken and matched against filters in the queue. This happens in WinsockHooks::CheckFilters, which calls the beginning of the filter chain and reports back whether any filter has been hit. Each filter returns a FilteredInput structure, which contains information about whether the filter was hit, whether there is new/modified data to send out, and the data bytes and length. If a filter was hit and data has changed as a result, then data from this FilteredInput structure is sent out; otherwise the original data will be sent.

Replay Data

Hekate also allows for complete replaying of outgoing data. Parameters are re-sent exactly as they were: to the same socket, with the same buffer and lengths, the same flags, and any additional WSA* parameters provided exactly as received (i.e. the same overlapped completion routine address). By design, filters are bypassed when replaying data. Replayed data calls the relocated code instructions and bypasses hitting any hooks/filters.

Dependencies

Hekate relies on Plog for internal logging and Google’s Protocol Buffers for the messaging format between client and server. The protobuf compiler is not provided as part of this release. The compiler source and release binaries are available on the Protobuf Github page. The version used for Hekate was 3.0.0 Alpha 3.

Building the DLL and Examples

Hekate is best built using Visual Studio 2015. Opening up the Hekate.sln file shows six projects:

  1. Hekate
  2. HekateMITM
  3. HekateTestFilter
  4. HekateTestListener
  5. HekateTestSender
  6. libprotobuf

Hekate is the main project and contains the DLL that acts as the server. Before building Hekate, libprotobuf needs to be built. Build libprotobuf with a Debug/Release configuration for x86 and x64. These four configurations (Debug x86, Release x86, Debug x64, Release x64) should result in successful builds and there will be four .lib files in the /Hekate/thirdparty/protobuf/lib directory. Make sure that these four .lib files, libprotobufd_x86.lib, libprotobuf_x86.lib, libprotobufd_x64.lib, and libprotobuf_x64.lib are present in the directory as they are needed for the different build configurations. Once this is done, the Hekate project can be built. This project must be built with DebugDll/ReleaseDll configurations instead of Debug/Release. The latter two have been left in for the project in case developers want to mess around with an executable locally instead of building a DLL that needs to be injected. Using these two configurations should result in Hekate.dll being built in the DebugDll/ReleaseDll directories.

There are also four sample projects that serve as sample targets or clients for Hekate. HekateMITM is a sample client/server application that sends and receives data over localhost. One thread is responsible for sending data and the other for receiving. This sample project should be buildable immediately under x86/x64 and has no dependencies. It is intended as a target to test out functionality provided by other projects. HekateTestFilter and HekateTestListener are two sample Hekate clients. HekateTestFilter sets up three different filters, one corresponding to each type. It will substitute bytes in one message, replace bytes entirely in another, and pause execution for five seconds on a third message. A run of HekateMITM and HekateTestFilter is shown below. You can see the filters at work, where the first message type was modified and the second message type replaced entirely.

c1

HekateTestListener is a passive listener client that will print out the values of the parameters passed into the Winsock functions along with the buffer. HekateTestSender is just a simple target application that calls the eight Winsock functions in a loop, useful for debugging/testing.

Code

The Visual Studio 2015 project for this example can be found here. The source code is viewable on Github here.
This code was tested on x64 Windows 7, 8.1, and 10.

Issues

I’ve aimed to have very comprehensive logging contained in the code. The log file is currently written out to C:/Temp/log.txt, and is a good starting point if an error has occurred at runtime. Hekate.dll also relies on Capstone, so capstone_x86.dll/capstone_x64.dll must be present in the same directory as the target.

License

Hekate is provided as-is and is released under the GNU General Public Licence (GPL) v3 for non-commercial use only.

The code base will continue to evolve and features will continue to be added. The content covered in this post might eventually become outdated as a result. I am aiming to have each major release/update act as a changelog from this main post. Future plans for this project include eventually developing a nice UI wrapper around it that allows for easy interaction and visualization of data, filters, and other related aspects of what is happening to Winsock traffic in a target process. Thanks for reading and be sure to follow on Twitter for more updates.


Manually Enumerating Process Modules

August 20th, 2015

This post will show how to enumerate a process’s loaded modules without calling any of the documented module-enumeration APIs. It relies on partially documented native API functions and the undocumented structures they expose. The implementation discussed is actually reasonably close to how EnumProcessModules works.

Undocumented Functions and Structures

The main undocumented function that will be used here is NtQueryInformationProcess, which is a very general function that can return a large variety of information about a process depending on its input parameters. It takes a PROCESSINFOCLASS as its second parameter, which determines which type of information to return. The values of this parameter are largely undocumented, but a complete definition can be found here. The value of interest here is ProcessBasicInformation, which fills out a PROCESS_BASIC_INFORMATION structure prior to returning. In code this looks like the following:

PROCESS_BASIC_INFORMATION procBasicInfo = { 0 };
ULONG ulRetLength = 0;
NTSTATUS ntStatus = NtQueryInformationProcess(hProcess,
    PROCESS_INFORMATION_CLASS_FULL::ProcessBasicInformation, &procBasicInfo,
    sizeof(PROCESS_BASIC_INFORMATION), &ulRetLength);
if (ntStatus != STATUS_SUCCESS)
{
    fprintf(stderr, "Could not get process information. Status = %X\n",
        ntStatus);
    exit(-1);
}

This structure, too, is largely undocumented. Its full definition can be found here. The field of interest is the second one, the pointer to the process’s PEB. The PEB is a very large structure that is mapped into every process and contains an enormous amount of information about it, including the loaded module lists. The Ldr member of the PEB is a pointer to a PEB_LDR_DATA structure, which contains three such lists. These lists contain the same modules, ordered differently: by load order, by memory order, and by initialization order, as their names describe. Each list consists of LDR_DATA_TABLE_ENTRY entries that contain extended information about the loaded module.
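
For reference, abbreviated versions of these structures, as they are commonly documented, are shown below. Trailing fields are omitted and exact layouts vary across Windows versions, so the project headers should be treated as authoritative:

typedef struct _PEB_LDR_DATA
{
    ULONG Length;
    BOOLEAN Initialized;
    HANDLE SsHandle;
    LIST_ENTRY InLoadOrderModuleList;
    LIST_ENTRY InMemoryOrderModuleList;
    LIST_ENTRY InInitializationOrderModuleList;
    //...
} PEB_LDR_DATA;

typedef struct _LDR_DATA_TABLE_ENTRY
{
    LIST_ENTRY InLoadOrderLinks;
    LIST_ENTRY InMemoryOrderLinks;
    LIST_ENTRY InInitializationOrderLinks;
    PVOID DllBase;
    PVOID EntryPoint;
    ULONG SizeOfImage;
    UNICODE_STRING FullDllName;
    UNICODE_STRING BaseDllName;
    //...
} LDR_DATA_TABLE_ENTRY;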

Retrieving Module Information

The above definitions are all that is needed in order to implement manual module traversal. The general idea is the following:

  1. Open a handle to the target process and obtain the address of its PEB (via NtQueryInformationProcess).
  2. Read the PEB structure from the process (via ReadProcessMemory).
  3. Read the PEB_LDR_DATA from the PEB (via ReadProcessMemory).
  4. Store off the top node and begin traversing the doubly-linked list, reading each node (via ReadProcessMemory).

Writing it in C++ translates to the following:

void EnumerateProcessDlls(const HANDLE hProcess)
{
    PROCESS_BASIC_INFORMATION procBasicInfo = { 0 };
    ULONG ulRetLength = 0;
    NTSTATUS ntStatus = NtQueryInformationProcess(hProcess,
        PROCESS_INFORMATION_CLASS_FULL::ProcessBasicInformation, &procBasicInfo,
        sizeof(PROCESS_BASIC_INFORMATION), &ulRetLength);
    if (ntStatus != STATUS_SUCCESS)
    {
        fprintf(stderr, "Could not get process information. Status = %X\n",
            ntStatus);
        exit(-1);
    }
 
    PEB procPeb = { 0 };
    SIZE_T ulBytesRead = 0;
    bool bRet = BOOLIFY(ReadProcessMemory(hProcess, (LPCVOID)procBasicInfo.PebBaseAddress, &procPeb,
        sizeof(PEB), &ulBytesRead));
    if (!bRet)
    {
        fprintf(stderr, "Failed to read PEB from process. Error = %X\n",
            GetLastError());
        exit(-1);
    }
 
    PEB_LDR_DATA pebLdrData = { 0 };
    bRet = BOOLIFY(ReadProcessMemory(hProcess, (LPCVOID)procPeb.Ldr, &pebLdrData, sizeof(PEB_LDR_DATA),
        &ulBytesRead));
    if (!bRet)
    {
        fprintf(stderr, "Failed to read module list from process. Error = %X\n",
            GetLastError());
        exit(-1);
    }
 
    LIST_ENTRY *pLdrListHead = (LIST_ENTRY *)pebLdrData.InLoadOrderModuleList.Flink;
    LIST_ENTRY *pLdrCurrentNode = pebLdrData.InLoadOrderModuleList.Flink;
    do
    {
        LDR_DATA_TABLE_ENTRY lstEntry = { 0 };
        bRet = BOOLIFY(ReadProcessMemory(hProcess, (LPCVOID)pLdrCurrentNode, &lstEntry,
            sizeof(LDR_DATA_TABLE_ENTRY), &ulBytesRead));
        if (!bRet)
        {
            fprintf(stderr, "Could not read list entry from LDR list. Error = %X\n",
                GetLastError());
            exit(-1);
        }
 
        pLdrCurrentNode = lstEntry.InLoadOrderLinks.Flink;
 
        WCHAR strFullDllName[MAX_PATH] = { 0 };
        WCHAR strBaseDllName[MAX_PATH] = { 0 };
        if (lstEntry.FullDllName.Length > 0)
        {
            bRet = BOOLIFY(ReadProcessMemory(hProcess, (LPCVOID)lstEntry.FullDllName.Buffer, &strFullDllName,
                lstEntry.FullDllName.Length, &ulBytesRead));
            if (bRet)
            {
                wprintf(L"Full Dll Name: %s\n", strFullDllName);
            }
        }
 
        if (lstEntry.BaseDllName.Length > 0)
        {
            bRet = BOOLIFY(ReadProcessMemory(hProcess, (LPCVOID)lstEntry.BaseDllName.Buffer, &strBaseDllName,
                lstEntry.BaseDllName.Length, &ulBytesRead));
            if (bRet)
            {
                wprintf(L"Base Dll Name: %s\n", strBaseDllName);
            }
        }
 
        if (lstEntry.DllBase != nullptr && lstEntry.SizeOfImage != 0)
        {
            wprintf(
                L"  Dll Base: %p\n"
                L"  Entry point: %p\n"
                L"  Size of Image: %X\n",
                lstEntry.DllBase, lstEntry.EntryPoint, lstEntry.SizeOfImage);
        }
 
    } while (pLdrListHead != pLdrCurrentNode);
}

Code

The Visual Studio 2015 project for this example can be found here. The source code is viewable on Github here.
This code was tested on x64 Windows 7, 8.1, and 10.

Follow on Twitter for more updates


Common Types of Disassemblers

July 23rd, 2015

The point of a disassembler is to take an input series of bytes and output an architecture-specific interpretation of those bytes. For example, a typical disassembler targeting the x86 architecture will take the following bytes: 55 8B EC B8 FF 00 00 00 33 DB 93, and produce a readable representation of those bytes similar to below:

55                   push        ebp  
8B EC                mov         ebp, esp  
B8 FF 00 00 00       mov         eax, 0FFh  
33 DB                xor         ebx,ebx  
93                   xchg        eax,ebx  

The process involves looking at the opcode(s), getting the instruction length, and parsing out extra information in the instruction such as displacements, relative/absolute destinations, affected registers/memory, etc. — basically a large amount of lookups and parsing. Fortunately, there are libraries for this. The disassembly engine used in this example will be BeaEngine due to its simplicity. Capstone Engine is also a great engine that supports many architectures, a clean and thread-safe API, and a permissive license, among other things. After all of this is implemented, the actual challenge of parsing executable files comes into play; that challenge is the topic of this post.
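
As an illustration of what such an engine does, the sketch below disassembles the example byte sequence with Capstone; BeaEngine, which the sample project actually uses, follows the same general pattern of decoding one instruction at a time.

#include <cstdio>
#include <capstone/capstone.h>

int main()
{
    const unsigned char code[] = { 0x55, 0x8B, 0xEC, 0xB8, 0xFF, 0x00, 0x00, 0x00,
        0x33, 0xDB, 0x93 };

    csh handle = 0;
    if (cs_open(CS_ARCH_X86, CS_MODE_32, &handle) != CS_ERR_OK)
    {
        return -1;
    }

    cs_insn *instructions = nullptr;
    const size_t count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &instructions);
    for (size_t i = 0; i < count; ++i)
    {
        fprintf(stdout, "0x%llX -- %s %s\n",
            (unsigned long long)instructions[i].address,
            instructions[i].mnemonic, instructions[i].op_str);
    }

    cs_free(instructions, count);
    cs_close(&handle);

    return 0;
}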

There are two common ways of disassembling a file: linearly and recursively. In the case of linear disassembly, the disassembler begins reading instructions at an address in the binary and continues reading until some termination condition: a set number of instructions decoded, the end of a block, or an error condition such as an unknown opcode. The code for linear disassembly is straightforward and is shown below. The termination condition in the example code will stop printing when a RET instruction is hit.

DISASM disasm = { 0 };
disasm.EIP = (UIntPtr)pStartingAddress;
 
int iLength = UNKNOWN_OPCODE;
do
{
    iLength = DisasmFnc(&disasm);
    fprintf(stdout, "0x%X -- %s\n",
        disasm.EIP, disasm.CompleteInstr);
 
    disasm.EIP += iLength;
 
} while (!IsRet(disasm.Instruction) && iLength != UNKNOWN_OPCODE);

The “algorithm” is (very) easy to write, and with knowledge of the format of the file being disassembled it proves to be pretty reliable. For example, the Portable Executable (PE) format on Windows provides information on all executable sections and their sizes on disk and in memory, with alignment. The ELF format on Linux provides the same relevant information. Using this information, a disassembler knows the exact range to disassemble to produce reliable output. The major drawback of this technique is that there is no reliable way to separate useless code and data from executing code. Any unused code/data inserted intentionally (or not) into the target area will be listed. This usually sticks out in an assembly dump because the instructions will be nonsensical relative to surrounding code. Also, any use of instruction interleaving, i.e. a jump into the middle of an instruction — usually for obfuscation purposes — will be missed by the disassembler.

The second way to disassemble a file is recursively, that is to say the disassembler will (try to) follow the control flow of the actual program. This involves analyzing the destinations of any control flow instructions: calls, jumps, and returns. For every CALL instruction encountered, the address of the next instruction is pushed onto a stack and disassembly continues at the CALL target. This continues, recursively if need be for multiple CALLs, until a RET instruction is hit. Once a RET instruction is hit, the top of the call stack is popped off and disassembly continues from that point. This is pretty much exactly how execution happens in a program. Likewise, for every unconditional jump instruction, disassembly merely continues at the target destination. The sample code is a bit more complex, but not by much:

DISASM disasm = { 0 };
disasm.EIP = (UIntPtr)pStartingAddress;
 
int iLength = UNKNOWN_OPCODE;
 
do
{
    iLength = DisasmFnc(&disasm);
    fprintf(stdout, "0x%X -- %s\n",
        disasm.EIP, disasm.CompleteInstr);
    if (IsCall(disasm.Instruction))
    {
        m_retStack.push(disasm.EIP + iLength);
        disasm.EIP = ResolveAddress(disasm);
    }
    else if (IsJump(disasm.Instruction))
    {
        disasm.EIP = ResolveAddress(disasm);
    }
    else if (IsRet(disasm.Instruction))
    {
        if (!m_retStack.empty())
        {
            disasm.EIP = m_retStack.top();
            m_retStack.pop();
        }
        else
        {
            break;
        }
    }
    else
    {
        disasm.EIP += iLength;
    }
 
} while (iLength != UNKNOWN_OPCODE);

This technique has its own benefits and drawbacks. The major benefit is that (theoretically) only executable code will be disassembled. This means that only relevant and executing code will be shown to the user. Also, the approximate or exact number of instructions to disassemble does not need to be known like in the linear technique. With recursive disassembly, you provide starting point(s) and then begin tracing control flow from them. Obfuscation techniques such as instruction interleaving will also be discovered. This technique does have a major drawback, however: CALLs or JMPs made indirectly cannot be deciphered. For example, the destinations of instructions such as JMP [ESI+0x4], CALL EBX, or CALL [0xAABBCCDD], where 0xAABBCCDD contains an import fixed up at runtime, cannot be followed by the disassembler. This means that there are a lot of edge cases to consider when encountering instructions such as these, in terms of knowing where to go next and making sure that the call stack stays consistent.

The sample code provides a trivial implementation of both of these techniques. To see how each performs, two test functions are also provided. TestFunction1 demonstrates how a recursive disassembler follows control flow. Compare the two outputs:

Linear

0x1146670 -- call dword ptr [0114B008h]
0x1146676 -- ret

Recursive

0x1146670 -- call dword ptr [0114B008h]
0x754218E0 -- mov eax, dword ptr fs:[00000018h]
0x754218E6 -- mov eax, dword ptr [eax+24h]
0x754218E9 -- ret
0x1146676 -- ret

The second example, TestFunction2, shows how the recursive disassembler skips over instructions that are not executed.

0x66680 -- push ebp
0x66681 -- mov ebp, esp
0x66683 -- mov eax, 000000FFh
0x66688 -- call 000666AAh
0x6668D -- xor ebx, ebx
0x6668F -- xchg eax, ebx
0x66690 -- jmp 000666B1h
0x66692 -- cmp ecx, AABBCCDDh
0x66698 -- push 00000000h
0x6669A -- push 00000000h
0x6669C -- push 00000000h
0x6669E -- push 00000000h
0x666A0 -- call dword ptr [0006B0A0h]
0x666A6 -- pop ebp
0x666A7 -- mov esp, ebp
0x666A9 -- ret

Overall, each approach has its benefits and drawbacks. With good knowledge of an executable file’s format, a linear disassembler works perfectly fine for showing a disassembly listing. Typically, disassemblers with a focus on code analysis, i.e. IDA Pro, will use a recursive approach and have a sophisticated analysis engine to complement it.

The Visual Studio 2015 RC project for this example can be found here. The source code is viewable on Github here.

Follow on Twitter for more updates
