I recently made public a project I've been working on for a few months now, called MoonMon. In this post—which I hope will become a series—I'm going to detail why I started this project, my plans for it, and what I've learned so far.
I've had a strong interest in exploit development and vulnerability research for a long time. Last year, I was able to take on the Windows User Mode Exploit Development (WUMED) course and subsequently pass the OSED exam.
After the exhausting ordeal that was the OSED exam, I wanted a change of scenery, so to speak. I looked into heap exploitation, exploit development on non-Windows platforms, and kernel-mode exploit development.
Around this time, I developed a strong interest in EDR evasion and bypasses. I've done some real-world red teaming, which has helped me keep up the skills I learned from the OSCP training I took a few years ago.
As part of that, I had to evade security tools. A handful of times, I've run into scenarios that required evading an EDR or an AV, and often I was able to get past the obstacle after much lab-testing and PoC'ing my payloads. But a few times, I relied on the research and experience other Red Teamers shared with me.
Sometimes it was a simple trick; other times, hand-crafted and tailored payloads—dark magic, at least to my novice eyes!
Earlier this year, I bought a copy of Matt Hand's excellent book, "Evading EDR: The Definitive Guide to Defeating Endpoint Detection Systems," and gave it a read. It was a true eye-opener. I highly recommend this book for anyone practicing red teaming in a Windows environment.
Callbacks, mini-filters, ETW telemetry, AMSI scanning, and more. Matt expands on how sensors collect telemetry, how the EDR architecture functions, and what weaknesses to look for. I wanted a deeper understanding of kernel programming at this point, so I took on Pavel Yosifovich's "Windows Kernel Programming" book, which, in my opinion, is a decent complement to "Evading EDR".
Pavel's book focuses more on how to write a software-only driver (like Sysmon or EDR drivers), offering plenty of in-depth code examples, tips, and tricks.
Great as these books are, I feared I would end up forgetting most of what I'd learned. This was also around the time I started reading blog posts and papers on kernel-mode exploitation.
A steep learning curve without a doubt. But fortunately, the folks at HackSys put together HEVD, a driver with intentionally crafted vulnerabilities that students of exploit development can use to learn and practice kernel-mode exploitation.
I felt a bit intimidated tackling HEVD without any kernel-mode programming experience. I highly valued all the time I'd spent in WinDbg and IDA as part of the WUMED course. Writing a driver would also help me retain the debugger muscle memory and instincts that can only be built through lots and lots of debugging sessions, bug checks (BSODs, for the uninitiated), and figuring out undocumented side effects.
I also wanted to hone my C development skills. On top of that, I'd gotten a bit weary of certification exams and courses, and I wanted to spend some time exploring different attack techniques, working on interesting projects, and publishing my work someday.
All this to say that MoonMon was born at the intersection of several needs I had:
On that last item: over the years as an incident responder and detection engineer, I've often complained about why some of the endpoint security tools I used couldn't do things a specific way. Why aren't more details and settings exposed to me as the person who reviews the logs or configures the tool? Why can't I apply AppArmor- or SELinux-style restrictions on Windows without paying for a commercial EDR?
Why can't I tune the logging, performance, caching behavior, network utilization, and more to fit my needs? I knew there were good reasons behind some of these questions, but without actually seeing how these tools work under the hood, it was hard to tell whether the mind-itches I needed to scratch were legitimate missing features or the result of some underlying complexity or limitation I didn't know about.
Needless to say, MoonMon was almost a necessity for where I was in my hacker journey.
The vision behind MoonMon is to expose monitoring and policy-enforcement capabilities as free and open-source software.
The need I had was just that: free software I could use to monitor the Windows systems I use, both for security threats and for general application-behavior monitoring.
I also wanted a lot more control over my Windows systems. Linux had spoiled me quite a bit with the amount of control it permits. But also, using commercial endpoint security tools has left a lasting impression on me.
I'm also of the opinion that while it makes sense for threat-intelligence products and detection/prevention content to be commercial and proprietary, the capabilities needed to operationalize that content should be freely available to users. If MoonMon matures enough, and enough people find it useful, my hope is that it can someday use public and crowdsourced threat intelligence to detect and/or prevent real-world threats. Not as a replacement for existing anti-virus or EDR tools, but as a complement to them, or as an option for users who can't afford them.
If you haven't already, check out "The big and long list of features and ideas!" in the project's README.
Before talking about the logging and prevention capabilities, it's important to describe the configuration capabilities MoonMon offers.
MoonMon takes a rule-centric approach. The user-space agent, aptly named Luna, is responsible for installing and configuring MoonMon, and processing configuration content defined in YAML configuration files.
Global settings allow defining various install-time parameters as well as run-time settings, such as enabling or disabling the various callbacks and filters the driver uses.
Various "lists" contain event-source-specific rules where matches have a predefined effect, such as blocking, logging, or excluding (from logging) matching events.
For example, it can be configured to block, but not log, only specific process-creation events for a specific process, such as a browser starting an unusual process, and nothing more. Global settings can disable all callbacks and filters except for the process creation callback, minimizing the impact on performance.
Or, it can be configured to log every supported event type, rotating logs as they fill up, with exceptions for the really noisy log-generating patterns.
It can be used to lock down a system by permitting specific allowlisted patterns of process creation, file activity (creation, tampering, opening), registry activity, process access attempts, and more.
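As a purely illustrative sketch of what such a rule-centric YAML configuration might look like (the field names and structure here are hypothetical, not MoonMon's actual schema — see the samples directory in the repository for real examples):

```yaml
# Hypothetical illustration only — not MoonMon's real configuration schema.
global:
  process_creation_callback: enabled
  file_filter: disabled        # keep other sensors off to minimize overhead
  registry_callback: disabled

process_creation_rules:
  - name: block-browser-spawning-shell
    effect: block              # block the matching event
    log: false                 # ...but do not log it
    match:
      parent_image: 'C:\Program Files\**\msedge.exe'
      image: 'C:\Windows\System32\cmd.exe'
```

This mirrors the browser example above: only the process-creation callback is enabled, and a single rule blocks (without logging) one specific parent/child process pattern.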
The repository includes a samples directory with log samples as well as example configurations, which I plan to expand over time. The config directory also ships with a 'tests.yaml' configuration file by default, which I'm using to test and validate some basic assumptions about the functionality of various features; it can also come in handy as an example configuration.
When I started working on this project, I knew it was going to be a significant undertaking—and it was—but a lot of the heavy-lifting and architectural decision-making now seems to be more or less complete.
My focus so far has been to write a somewhat stable driver and a stable user-space agent.
On the driver side of things, I've avoided complex tricks or logic that would make the code harder to read or refactor.
Most of the core driver features are implemented; my focus now is on hardening them while carefully making changes that improve performance or simply reduce the amount of code. I want to do all this while expanding the suite of tests that help reduce the risk of inadvertently introducing bugs.
At some point, I want to feature-freeze the driver and only touch it for security fixes, hardening, or supporting newer versions of Windows.
In the near term, I plan on working towards implementing some kernel-mode features such as KAPC injection rules, dynamic configuration reloading, and a more complete tamper-resistance capability. Aside from those, there are still some significant kernel-mode features, such as ETW telemetry, which I won't get to for a while, as well as things like Early Launch Anti-Malware (ELAM) certification and code-signing MoonMon. Much of that depends on whether other people find the project useful, whether MoonMon becomes production-ready, and whether I gain strong confidence in its security posture, as well as on finances and Microsoft's good graces.
There are many other features I haven't built yet that are best implemented in user-space code. The user-space agent (Luna) is where I'm expecting the most changes and feature additions to happen.
Some interesting user-space features I've planned include YARA scanning (including using YARA rules as part of AMSI scanning), enrichments such as PE headers, file magic, Mark-of-the-Web, and more.
I also plan on experimenting with user-space-driven prevention capabilities, such as a YARA match resulting in process termination, or PE header details resulting in adding file-access restrictions.
There are a few ideas in particular I'm looking forward to researching and implementing the most.
Needless to say, I have many ideas and features I want to experiment with, but my plan is to work on MoonMon incrementally, at a slow and steady pace; after all, I'm currently working on it solo and my free time is limited.
I have been having lots of fun working on MoonMon, and now that the scaffolding is more or less there, I'm hoping incremental and atomic changes over time will result in something useful, not just to myself but to others as well.
Initially, I didn't really plan on making MoonMon public. My main goal was to write something similar to the endpoint tools I've grown to like, and then learn how to best evade, bypass, or find security vulnerabilities in it.
I have an unpublished project I've been working on for stress-test-driven benchmarking. But more importantly, I plan on reusing the stress-testing code to generate fuzzing harnesses for MoonMon. I've been learning how to do all of that properly, and I'm hoping to translate that work into vulnerability research skills.
With all of that—a decent backlog of exploit-development labs, books, and papers to read up on, and real life getting in the way—I could use a lot of help with this project.
So if you find it useful or interesting at all, star the project, send me a message on Bluesky, start an issue or a discussion on GitHub, or just share the word about MoonMon with others.
I could use lots of help with fixing bugs or adding features (PRs are most welcome!). But the help I need the most right now is with code review. This is my first time publishing a public project like this, and while I've written C for a long time (and Go for much less), it is very easy to make mistakes or end up with a bad architecture.
On top of that, writing driver code is precarious work. All of which is to say: MoonMon needs lots of code review and improvement so that someday there will be enough confidence in it to justify paying for code-signing certificates and publishing a driver that loads without test-signing mode enabled.
I hope you'll find MoonMon interesting and potentially useful. Constructive criticism, feedback, bug reports, feature requests, and pull requests are all welcome.
I plan on creating similar blog posts as part of this series to expand more on MoonMon's internals, architecture, and all the quirks and tricks I've picked up while writing MoonMon and debugging it. Stay tuned!