Brute Ratel v0.5.0 (Syndicate) is now available for download and delivers major updates to several features and to the user interface of Brute Ratel. Commander comes with a new user interface that provides much more granular information on the metadata of the C4 features, as seen in the figure below.
Several parts of Badger, like hunting for exported functions from loaded DLLs, the x64 shellcode loader, and the reflective and object file loaders, were rewritten from scratch to provide backwards compatibility with older systems like Windows 7, Server 2003 and XP Service Pack 3, as well as to keep memory artefacts hidden from EDRs and AVs. This allowed us to write the loader in a manner that opens up the possibility of building badgers for x86 and ARM architectures, which you will see in future releases of Brute Ratel. I have listed the technical details of the release below; a detailed list of the features and bug fixes can be found in the release notes.
Badger now comes with a new loader for shellcode and reflective DLLs, fully written from scratch. The previous version had a bug where the loader crashed on some Windows 7 and Server 2008 versions. The original badger loader, which shipped until version 0.4.2, never used GetProcAddress to find exported functions. It used custom assembly code that loaded a DLL and extracted the exported functions by parsing the PE headers manually. However, Microsoft does tend to use proxy DLLs for exported functions (forwarded exports). This means that a function exported from one DLL on Windows 7 might live in a different DLL on Windows 10, Server 2008 and other Windows versions. This led to a problem where the badger's shellcode and reflective loader worked flawlessly on Windows 10 and later hosts but crashed on some versions of Windows 7 and Server 2008.
This is no longer the case. The new loader has several dynamic capabilities to find proxy DLLs (forwarded exports) and load them dynamically, without having to worry about changes in the DLLs across different versions of Windows. This means no more usage of GetProcAddress. The new loader is written in inline assembly to reduce the size of the shellcode and the reflective loader.
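To illustrate the forwarded-export problem, here is a minimal sketch (in Python, for readability — not the actual inline-assembly loader): when an export's address points back inside the export directory, the "address" is really an ASCII string of the form "TARGETDLL.FunctionName" (or "TARGETDLL.#Ordinal"), and a loader must follow that chain instead of treating it as code. The table and addresses below are illustrative assumptions.

```python
# Sketch (not Brute Ratel's actual code): resolving a forwarded export.
# In a real PE, the forwarder string lives inside the export directory;
# here `exports` stands in for each module's parsed export table.

def parse_forwarder(forwarder: str):
    """Split a PE export forwarder string into (dll, name, ordinal)."""
    dll, _, target = forwarder.partition(".")
    if target.startswith("#"):                # forwarded by ordinal
        return dll + ".dll", None, int(target[1:])
    return dll + ".dll", target, None         # forwarded by name

def resolve_export(exports: dict, dll: str, name: str, depth: int = 0):
    """Follow forwarder chains until a concrete address is found."""
    if depth > 8:                             # guard against forwarder cycles
        raise RuntimeError("forwarder chain too deep")
    entry = exports[(dll.lower(), name)]
    if isinstance(entry, int):
        return entry                          # concrete export address
    fwd_dll, fwd_name, _ = parse_forwarder(entry)
    return resolve_export(exports, fwd_dll, fwd_name, depth + 1)

# Example: on modern Windows, kernel32!HeapAlloc is forwarded to
# ntdll!RtlAllocateHeap. The address below is a placeholder.
table = {
    ("kernel32.dll", "HeapAlloc"): "NTDLL.RtlAllocateHeap",
    ("ntdll.dll", "RtlAllocateHeap"): 0x7FF800001000,
}
```

A loader that resolves `HeapAlloc` this way lands on the real `RtlAllocateHeap` address regardless of which DLL hosts the implementation on a given Windows version, which is exactly the behaviour the old loader was missing.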
The reflective loader was also rewritten from scratch to avoid leaving any artefacts in memory, like the initial shellcode stage created from process injections or RWX regions left behind by the DLL's internal loader. I explained in one of my earlier posts how to clear that portion of memory. Now, when you load any reflective DLL into a remote process, the loader splits each section of the PE into a different memory region and classifies them separately with RX, RW, R or WC permissions, depending upon the section's characteristic bit flags.
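The permission decision can be sketched from the section's IMAGE_SCN_* characteristic bits in the PE header. This is a simplified illustration of the idea, not the loader's actual logic; in particular, the mapping of write-only sections to WC (write-copy) is my assumption.

```python
# IMAGE_SCN_* characteristic bits, as defined in the PE specification.
IMAGE_SCN_MEM_EXECUTE = 0x20000000
IMAGE_SCN_MEM_READ    = 0x40000000
IMAGE_SCN_MEM_WRITE   = 0x80000000

def section_protection(characteristics: int) -> str:
    """Pick a protection for a mapped PE section from its flag bits.

    Illustrative mapping only; the WC branch is an assumption.
    """
    x = characteristics & IMAGE_SCN_MEM_EXECUTE
    r = characteristics & IMAGE_SCN_MEM_READ
    w = characteristics & IMAGE_SCN_MEM_WRITE
    if x and r:
        return "RX"   # .text: code stays non-writable
    if r and w:
        return "RW"   # .data: writable, never executable
    if w:
        return "WC"   # write-only sections mapped copy-on-write (assumed)
    return "R"        # .rdata and other read-only data

# Typical section characteristics:
text  = IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE   # .text
rdata = IMAGE_SCN_MEM_READ                           # .rdata
data  = IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE     # .data
```

The point of per-section permissions is that no region ever needs to be RWX, which removes one of the most common memory indicators EDRs scan for.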
This release added one of the most requested features: a PowerShell payload. This payload is a PowerShell script containing the badger's shellcode, which is loaded and executed in memory using unsafe native methods. A PowerShell payload script can be created from the Payload Profiler drop-down menu or by right-clicking the listener and creating a PS1 payload.
Windows Remote Management (WinRM) is the Microsoft implementation of the WS-Management Protocol, which allows communication with different operating systems over the network. WinRM supports scriptable objects which system admins can use to automate several administrative tasks. In order for this feature to work, you require a privileged badger, since WinRM does not support the concept of impersonated user tokens. This feature also requires that the target/remote system has the WinRM service enabled and that the firewall allows communication over port 5985 (HTTP) or 5986 (HTTPS). This can be validated using the new portscan command of the badger. After validating the WinRM port, the pivot_winrm command can be used to create a remote session, load the badger's shellcode into the remote system and return an HTTP, SMB or TCP shell. This command fully supports loading a custom payload configuration, turning the configuration into shellcode and loading it into memory for execution.
The above figure shows the initial scan of port 5985 on the Domain Controller (BRDC01 - 172.16.203.131). After validating the WinRM port, we can use an administrative badger (b-0) to pivot to the remote host and load our shellcode. The pivot_winrm command can launch a payload from a given payload configuration stored in the Payload Profiler and run it on a remote host. In the above figure, an SMB payload was executed, which started a named pipe on the remote host; a connection was then made to the named pipe using the pivot_smb command. As soon as we connect to the named pipe, we should see a badger from wsmprovhost.exe, which is usually responsible for launching WinRM connections on every host. In future versions of badger, we are planning to add an option for a custom parent and child process on the remote host in order to avoid detections based on wsmprovhost.exe.
Windows Management Instrumentation (WMI) is a Web-Based Enterprise Management (WBEM) solution, often used by administrators to manage servers and computers across an Active Directory environment. It is based on the Common Information Model (CIM) industry standard, which uses a structured query language known as WQL to manage different components across a network over RPC. This release introduced four additional commands to use WMI, locally or remotely, in memory. Usually, WMI is executed via PowerShell or wmic.exe, but Microsoft provides COM DLLs which can be used to interact with COM objects. Badger provides set_wmiconfig, get_wmiconfig, reset_wmiconfig and wmispawn to configure the WMI namespace, domain, username and password used to interact with a remote system. The below figure shows an unprivileged badger which does not have any privileges on the DC (BRDC01). The default WMI configuration in badger uses the “ROOT\CIMV2” namespace with no username or password, which means it will use the current process token to run WMI queries on the local system.
The namespace can be configured to something like “\\<hostname>\root\cimv2”, along with credentials, using the set_wmiconfig command. Once this has been configured, all queries performed using wmispawn will use this configuration. The below figure shows the badger (b-0) running as a low-privileged user (vendetta) on the host BRVM01. This user does not have administrative privileges on the DC (BRDC01). However, once the credentials and namespace are configured, the ‘wmispawn’ command can run queries on the remote host (BRDC01).
Several parts of LDAP Sentinel were modified in this release. The code, which was initially written in C++ and compiled with Visual Studio's clang compiler, is now rewritten in C with the MinGW compiler. This allowed us to change the entrypoint of the reflective DLL that gets loaded in memory and to lower its size from 250KB to 33KB. LDAP Sentinel also comes with a new update which provides an option to run raw LDAP queries on any domain/forest of your choice. All queries run in memory using the ActiveDS WinAPIs. The below figure shows an LDAP query running on the bruteratel.corp domain to find all users whose passwords are set to ‘Never Expire’. Future versions of LDAP Sentinel will include several built-in LDAP queries so that users won't have to write the most common LDAP queries manually.
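For reference, a ‘Never Expire’ query of this kind typically comes down to testing the DONT_EXPIRE_PASSWD bit (0x10000) of the userAccountControl attribute, using the standard LDAP bitwise-AND matching rule. The sketch below shows the filter shape and the local bit test; it is illustrative and not taken from LDAP Sentinel's code, and the sample flag values are assumptions.

```python
# UF_DONT_EXPIRE_PASSWD bit of userAccountControl (Active Directory).
UF_DONT_EXPIRE_PASSWD = 0x10000  # 65536

def never_expire_filter() -> str:
    """Build an LDAP filter matching users whose password never expires.

    1.2.840.113556.1.4.803 is the LDAP_MATCHING_RULE_BIT_AND OID.
    """
    return ("(&(objectCategory=person)(objectClass=user)"
            f"(userAccountControl:1.2.840.113556.1.4.803:={UF_DONT_EXPIRE_PASSWD}))")

def password_never_expires(user_account_control: int) -> bool:
    """Check the same bit on a returned userAccountControl value."""
    return bool(user_account_control & UF_DONT_EXPIRE_PASSWD)

# 0x200 is NORMAL_ACCOUNT; OR-ing in 0x10000 marks the password
# as never expiring (example values, not from the post's figure).
flagged_account = 0x200 | UF_DONT_EXPIRE_PASSWD
normal_account = 0x200
```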
One of the biggest additions to Brute Ratel in this release was Mimikatz. Mimikatz requires a privileged (high integrity) process to run its commands. Badgers can load mimikatz's reflective DLL to perform any and all of the mimikatz commands in memory. The below figure shows that a WMI namespace (Root\Microsoft\Windows\Defender) was configured to find the Windows Defender status. Running mimikatz with “privilege::debug sekurlsa::logonpasswords” shows that our reflective DLL was launched in the process werfault.exe and returned the NTLM hashes. The mimikatz command is not limited to dumping hashes; it can also run other commands like PTT, PTH, Kerberoast and more.
One important thing to remember is that the Mimikatz module accepts two types of arguments. The first type is where we specify the commands on the command line without quotes, like ‘mimikatz privilege::debug sekurlsa::logonpasswords’. Here, all the commands are individual commands and not subcommands of an existing command. But there could be cases where you want to run subcommands of a mimikatz module. For example, the lsadump command of mimikatz accepts dcsync's configuration as additional arguments. In such cases, we have to double-quote the subcommands: ‘mimikatz “lsadump::dcsync /domain:bruteratel.corp /user:vendetta”’. To mix normal commands with subcommands, we can do it like this: ‘mimikatz privilege::debug “lsadump::dcsync /domain:bruteratel.corp /user:vendetta”’. Notice how privilege::debug is not double-quoted, whereas the other command is, because arguments like /domain and /user are subcommands of the lsadump module. So, keep this in mind when you plan to run additional command-line arguments with the mimikatz module.
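The quoting rule above behaves like shell-style tokenization: unquoted tokens split into separate mimikatz commands, while a double-quoted group is kept together as one command with its subcommand arguments intact. The sketch below demonstrates that splitting with Python's shlex; it mirrors the described behaviour and is not Brute Ratel's actual parser.

```python
# Shell-style split: quoted groups survive as single tokens.
import shlex

def split_mimikatz_args(cmdline: str) -> list:
    """Split a mimikatz command line, keeping quoted groups together."""
    return shlex.split(cmdline)

cmds = split_mimikatz_args(
    'privilege::debug "lsadump::dcsync /domain:bruteratel.corp /user:vendetta"'
)
# cmds[0] is the standalone command 'privilege::debug';
# cmds[1] is the whole lsadump::dcsync command with its
# /domain and /user subcommand arguments kept together.
```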
In this release, Badger ships three different types of DCSync. The first is the one embedded in Mimikatz's DLL itself. The other two are standalone DCSync commands: dcsync and dcsync_inject.
The mimikatz command can be used to inject a reflective module to run DCSync. This command requires a privileged badger because the injected process uses the parent process’s token. This means that if the parent process (badger) is unprivileged, then the spawned child process will also be unprivileged and cannot use any impersonated tokens. The dcsync command from mimikatz can be run as shown in the figure below.
The dcsync command can be used with an impersonated token created with the make_token command. This command takes a single username argument as input. If no argument is provided, it will request NTLM hashes for all the users in the domain. This command does not inject anything; all the DC replication requests are performed from the badger's process itself.
The dcsync_inject command injects a reflective DLL into a remote process, similar to the mimikatz command. The core difference is that, unlike the mimikatz reflective DLL, which ships with tonnes of other features, this command only loads a small reflective DLL consisting of the DCSync requests into the remote process. This also means that impersonated tokens cannot be used with this command, and a high integrity (privileged) badger is required. This command takes a single username argument as input, similar to the dcsync command. If no argument is provided, it will request NTLM hashes for all the users in the domain.
The newly introduced portscan command can scan a given IP address for open ports using a full-connect TCP request. One or more port numbers, separated by spaces, can be provided as arguments to scan multiple ports.
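A full-connect scan of this kind completes the whole TCP three-way handshake for each port rather than sending a half-open SYN, so no raw-socket privileges are needed. The sketch below shows the idea in Python; it is an illustration of the technique, not the badger's implementation.

```python
# Full-connect TCP scan: a port counts as open only if the
# three-way handshake completes.
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a full TCP connection to host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def portscan(host: str, ports: list) -> dict:
    """Scan multiple ports, mirroring 'portscan <host> 5985 5986'."""
    return {p: port_is_open(host, p) for p in ports}
```

Against a WinRM target, `portscan(host, [5985, 5986])` would report which of the HTTP and HTTPS listeners are reachable before attempting a pivot.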
The netshares command can be run with or without parameters. It takes two optional arguments. The first argument is the host you want to scan; if you don't specify a host, it will scan localhost. The second optional argument is ‘/privs’. If you specify this argument, badger will check whether it has administrative privileges on the host. But unlike most share enumeration tools, which try to check privileges on the admin share, this command performs some magic without touching the admin share C$. This helps to avoid the usual detection techniques while checking for privileges on the remote host at the same time. The below figure shows an unprivileged query to the domain controller (BRDC01) with the ‘/privs’ argument, which returns Error 5 (GetLastError: access denied).
The psreflect and sharpreflect commands now come with built-in loaders which patch ETW and AMSI before loading any CLR DLLs. This allows users to run open-source tools like Seatbelt, Rubeus and SharpHound in memory, which would otherwise have been flagged as malicious. There was also a small bug in the psreflect and sharpreflect commands where, upon loading v2.0 CLR DLLs, the user was prompted to install .NET 2.0. This no longer happens, since the bug has now been fixed. The below figure shows whether the AMSI and ETW patching were successful in the presence of Windows Defender ATP.
Click Scripting is a feature which allows users to automate the execution of bulk commands. Unlike the ‘Autoruns’ feature, which lets a user auto-execute several commands on the first connection of a badger, Click Scripts are a list of multiple commands which can be chained together to execute one after the other at any point in time. This helps with the automated execution of commands belonging to different Tactics and Techniques of MITRE ATT&CK, which can be pretty useful during Purple Team engagements. Below is an example of some discovery-based commands which are grouped into a single click script called ‘Discovery’.
To add a new click script, select ‘C4 Profiler->Clickscripts’. This will open a new dialog box where we can add a new script using the ‘+’ icon. Once a script has been added, new commands can be added to it by selecting the script and then clicking on the button highlighted in the below figure.
After adding the scripts, the Click Script Runner can be loaded by right-clicking a badger and selecting ‘Load ClickScript’. This will open a new tab where different scripts can be run with a single click, as shown in the earlier figure. Click Scripts can also be added directly into the C4 profile in a simple key-value format as below.
{
"click_script": {
"Credential Dumping": [
"samdump",
"shadowclone",
"dcsync"
],
"Discovery": [
"id",
"pwd",
"ipstats",
"psreflect echo $psversiontable",
"net users",
"scquery"
]
}
}
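Since the click_script entry is plain JSON, consuming it is just a matter of reading the key/value mapping and queuing each command in order. The sketch below uses a trimmed copy of the profile above to show the shape of that data; it is illustrative, not Commander's internal code.

```python
# Reading click scripts from a C4-profile fragment: each key is a
# script name, each value an ordered list of badger commands.
import json

profile = """
{
  "click_script": {
    "Discovery": ["id", "pwd", "ipstats"]
  }
}
"""

def load_click_scripts(profile_json: str) -> dict:
    """Return {script name: ordered list of badger commands}."""
    return json.loads(profile_json)["click_script"]

scripts = load_click_scripts(profile)
for name, commands in scripts.items():
    # A runner would queue these on the badger one after the other.
    print(name, "->", commands)
```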
The first difference you will notice in v0.5 when starting the Commander is the user interface. The Operators table, Chat box and Scratchpad have moved next to the event logs, since I realized that the Event Logs don't need as much space. The notification box below the event logs contains the last web activity seen on the listener. The bottom part of the Scratchpad contains the Badger Command Queue, which can be accessed from the buttons at the top of the Scratchpad. Downloads and Server Logs have moved to the ‘Server’ drop-down menu. Upon accessing them, you can select whether to download or view logs/screenshots directly from there. The logs get loaded in the Scratchpad, whereas the screenshots open in their respective screenshot viewer. Downloads now support a broadcast feature: as soon as new files are downloaded, the downloads tab will pop up automatically to notify the user about the download completion. This whole portion is now called the ‘Watchlist’.
The panel at the top of the Scratchpad contains a few buttons which can be used to view, search or save server and badger logs, view the psexec configuration, or view the badger's queued commands.
The Commander also comes with a new ‘Autosave’ button next to the ‘Server’ drop-down menu, which can be enabled or disabled to automatically save changes to your C4 profile as you modify your C4 settings during your Red Team engagement.
The Commander and the Badger come with several major changes to how error prompts are returned to the user. Earlier, errors were shown in message boxes which closed any existing input pop-up box. This no longer happens; errors are now returned directly to the user within the input box itself. Similarly, one-word commands like samdump and shadowclone have been removed from the right-click context menu of the badger, since they can be accessed directly from the badger's terminal. The statistics have also moved to the bottom-right part of the Commander. You can find detailed information in the release notes here. To update your Brute Ratel package using your activation key, use the -update argument in the ratel console.