DiceCTF 2026 - garden

pwn/garden was the most sophisticated binary exploitation challenge I wrote for DiceCTF Quals 2026. During the 24-hour online CTF, players were tasked with exploiting a custom garbage-collected heap to achieve remote code execution on our Linux servers. Players were provided with the C source code of the virtual machine, as well as the compiled binary and the libc used on the server. In this author writeup, I will explain the inspiration behind the challenge and the mechanisms at its core.

The VM

This challenge was inspired by the b-jvm, an open-source Java Virtual Machine implementation that I worked on with my friends Timothy and Alec. The b-jvm uses a custom tracing, moving garbage collection algorithm: first, the GC identifies all reachable references, and then compacts the heap, updating all object pointers to their new locations. My challenge provided a simple garbage-collected bytecode VM based on the b-jvm heap implementation.

As a mitigation, the VM uses a memory cage: VM references are stored as offsets into a memory-mapped 32-bit address space. That way, memory accesses are confined to a reserved address range, isolating the VM heap from the rest of the program’s memory. All objects follow the same header format, with fields describing their size and payload type (either numbers or references to other objects):

typedef uint32_t ref_t;
typedef struct {
    uint16_t length;
    uint16_t mark_word; // object type indicated by LSB
    ref_t entries[];
} arr_header_t;

To interact with external memory, the VM also supports a custom “off-heap” object type, whose payload stores a raw pointer together with its size for bounds checking. This allows raw memory to be accessed safely through special VM bytecodes, which reject out-of-bounds accesses.

typedef struct {
    arr_header_t header;
    uint32_t *data;
    size_t obj_size;
} off_heap_obj_t;

The heap cage mitigation was meant to contain memory corruption, preventing attackers from escalating a memory-management bug into full control over the program’s memory. However, the off-heap buffer feature still lets users interact with raw memory pointers. If attackers could forge malicious off-heap object headers on the heap, they would obtain arbitrary read/write on the host. To exploit the heap, we first need to understand how its garbage collector works.

The Garbage Collector

At a high level, the VM’s moving garbage collector works in two phases: it first collects a list of all reachable references, then compacts the heap and updates all pointers to their new memory locations. In the first phase, the GC recursively marks all objects reachable from a list of root nodes and appends them to an array list:

typedef struct gc_ctx {
    sized_object *objs;
    size_t obj_count;
    ref_t *new_locations;
} gc_ctx_t;

static void mark_reachable(ref_t ref, gc_ctx_t *visited) {
    arr_header_t *arr = AS_ARR_HEADER(ref);
    if (arr->mark_word & 0x2) { // already marked
        return;
    }
    arr->mark_word |= 0x2; // mark it
    // add to visited list
    visited->objs = realloc(visited->objs, sizeof(sized_object) * (visited->obj_count + 1));
    visited->objs[visited->obj_count++] = (sized_object) { .object = ref, .size = arr->length };
    if (arr->mark_word & 1) { // contains pointers
        for (uint16_t i = 0; i < arr->length; i++) {
            ref_t entry = arr->entries[i];
            if (entry != 0) // skip null pointers
                mark_reachable(entry, visited);
        }
    }
}

After collection, objects are sorted by pointer value so that they can be compacted downwards monotonically. By relocating objects in the same order they appear on the heap, we ensure that objects never move upwards or overwrite data that has not yet been relocated. As a last step, we update all references in the VM to point to their new memory locations. Here is the (messy) code for the garbage collection algorithm:

void garbage_collect(ref_t *roots, size_t num_roots) {
    // Simple mark-compact garbage collector
    // mark phase
    gc_ctx_t ctx = {0};
    for (size_t i = 0; i < num_roots; i++) {
        mark_reachable(roots[i], &ctx);
    }
    // sort the pointers so we can compact the heap
    qsort(ctx.objs, ctx.obj_count, sizeof(sized_object),
          (int (*)(const void *, const void *))compare_sized_objects);
    
    // create lookup table for new locations
    ctx.new_locations = malloc(sizeof(ref_t) * ctx.obj_count);
    // copy objects to new locations
    size_t new_heap_used = 128; // reserve first 128 bytes
    for (size_t i = 0; i < ctx.obj_count; i++) {
        arr_header_t *arr = AS_ARR_HEADER(ctx.objs[i].object);
        arr->mark_word &= ~0x2; // unmark for next GC
        size_t total_size = sizeof(arr_header_t) + ctx.objs[i].size * sizeof(ref_t);
        memmove(heap_base + new_heap_used, arr, total_size);
        ctx.new_locations[i] = (ref_t)new_heap_used;
        new_heap_used += total_size;
    }

    // update pointers on heap
    for (size_t i = 0; i < ctx.obj_count; i++) {
        arr_header_t *arr = AS_ARR_HEADER(ctx.new_locations[i]);
        if (arr->mark_word & 1) { // contains pointers
            for (uint16_t j = 0; j < arr->length; j++) {
                // binary search for new location
                ref_t old_ref = arr->entries[j];
                ref_t new_ref;
                if (old_ref == 0) {
                    new_ref = 0; // null pointer
                } else {
                    sized_object search_key = { .object = old_ref };
                    sized_object *found = bsearch(&search_key, ctx.objs, ctx.obj_count,
                                        sizeof(sized_object), compare_sized_objects);
                    new_ref = ctx.new_locations[found - ctx.objs];
                }
                arr->entries[j] = new_ref;
            }
        }
    }
    // update pointers in roots
    for (size_t i = 0; i < num_roots; i++) {
        ref_t old_ref = roots[i];
        ref_t new_ref;
        if (old_ref == 0) {
            new_ref = 0; // null pointer
        } else {
            sized_object search_key = { .object = old_ref };
            sized_object *found = bsearch(&search_key, ctx.objs, ctx.obj_count,
                                sizeof(sized_object), compare_sized_objects);
            new_ref = ctx.new_locations[found - ctx.objs];
        }
        roots[i] = new_ref;
    }

    heap_used = new_heap_used;
    free(ctx.objs);
    free(ctx.new_locations);
}

The Bug

The correct relocation of objects during garbage collection relies on the assumption that the heap is only ever compacted downwards: objects should never move upwards and corrupt other parts of the heap. Intuitively, this assumption seems reasonable: the set of reachable objects is always a subset of all objects allocated on the heap, and we relocate them in increasing pointer order. However, this invariant does not actually hold in our VM.

Null references are special-cased in our VM and don’t actually correspond to any allocation. mark_reachable skips null entries inside objects, but the root-marking loop in garbage_collect passes roots to it unchecked, so a null reference can get marked and relocated during garbage collection, occupying space that was never allocated in the first place. If the heap is already tightly packed, introducing this fake object during garbage collection forces subsequent objects to shift upwards. Because we only move one object at a time, each memmove operation overwrites the first four bytes of the next object.

Original heap:
[ length ][mrk_word][        a         ][ length ][mrk_word][        b         ]
\-------------- Object 1 --------------/\-------------- Object 2 --------------/

Objects to relocate: [null, Object 1, Object 2]

Place null object:
/------ null ------\
[    0   ][    0   ][        a         ][ length ][mrk_word][        b         ]
\-------------- Object 1 --------------/\-------------- Object 2 --------------/

Move Object 1 upwards:
                    /-------------- Object 1 --------------\
[    0   ][    0   ][    0   ][    0   ][        a         ][        b         ]
\------ null ------/                    \-------------- Object 2 --------------/

Move Object 2 upwards:
                                                            /-------------- Object 2 --------------\
[    0   ][    0   ][    0   ][    0   ][        a         ][        a         ][        b         ]
\------ null ------/\-------------- Object 1 --------------/

Resultant heap:
                    [ length ][mrk_word][        a         ][ length ][mrk_word][        b         ]
[    0   ][    0   ][    0   ][    0   ][        a         ][        a         ][        b         ]
\------ null ------/\-------------- Object 1 --------------/\-------------- Object 2 --------------/

After garbage collection, Object 2’s header has been replaced by a user-controlled payload value! This means we can forge custom headers for objects by placing a crafted payload just before our target object on the heap, then triggering a garbage collection with a null reference among the GC roots to cause the memory corruption.

An exploit can use this technique in two ways: it can convert an existing off-heap object into a number array in order to leak off-heap pointers, and it can convert a user-controlled number array into an off-heap object, yielding arbitrary read/write. I won’t walk through the full exploit path here, but these primitives are enough to bypass ASLR and build a ROP chain that achieves RCE and reads the flag.

The Mistakes

My challenge had several quality-control problems. The code wasn’t well structured, so it contained a number of unnecessary complications (such as the sized_object struct). Additionally, the alloc_off_heap function had a silly logic error that miscalculated the on-heap size of an off-heap object, letting users corrupt the heap directly via integer overflow. In the future, I’ll make sure my challenges are proofread and free of unintended solutions.

The Conclusion

Overall, 85 of the 400 participating teams solved the challenge within the 24 hours. Despite the technical problems, I think pwn/garden was a very fun challenge: it offers a look into how custom heaps, garbage collection, and mitigations interact in a realistic environment, similar to that of a real-world JVM implementation. I hope to create more unique and advanced challenges in the future. Also, the flag contains a mandatory sponsored message by FizzBuzz101:

dice{m1tig4ti0ns_nev3r_w0rk_y0u_escap3d_th3_w4ll3d_g4Rd3n}