Prototyping
Defining the moving parts during a transaction in our C2
C2 Infrastructure
Generally, a command and control (C2) framework has at minimum three components: the teamserver, the client and the implant. In production we would want more moving parts (e.g. redirectors, stagers), but we'll stick to the bare minimum for now.
Teamserver
The teamserver asynchronously manages connections between the client and the implant. During a transaction, the operator tasks the implant through the teamserver, which forwards these tasks to the relevant implant(s) through listeners that can be spawned and bound to implants.
Middle-man between client and implant
Spawns listeners and implants connect to these listeners
The "brain" of the C2
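To make the "middle-man" role concrete, here is a minimal sketch in Python of the teamserver's queueing behaviour. The class and method names are hypothetical, and in-memory dicts stand in for real listeners and network transport:

```python
from collections import defaultdict

class Teamserver:
    """Middle-man: holds tasks queued by the client until the implant polls,
    and holds results reported by the implant until the client reads them."""

    def __init__(self):
        self.pending = defaultdict(list)   # implant name -> queued tasks
        self.results = defaultdict(list)   # implant name -> reported results

    def queue_task(self, implant, opcode):
        """Client-facing: queue a task for a specific implant."""
        self.pending[implant].append(opcode)

    def fetch_tasks(self, implant):
        """Implant-facing: hand over (and clear) everything queued for it."""
        tasks, self.pending[implant] = self.pending[implant], []
        return tasks

    def post_result(self, implant, output):
        """Implant-facing: report a completed task's output."""
        self.results[implant].append(output)
```

A real teamserver would expose these operations over listeners (HTTP, DNS, etc.), but the brokering logic is the same.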
Client
This is where the operator interacts with implants via the teamserver; the client is usually a GUI, web or CLI application.
Sends tasks to the teamserver with the recipient implant tagged
Receives result from teamserver after task is complete
Easily replaceable
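The "easily replaceable" point can be illustrated with a hypothetical client that only knows how to tag tasks and hand them to a pluggable transport; swapping GUI for CLI (or HTTP for anything else) means swapping the transport, not the client logic:

```python
class Client:
    """Operator-facing client. The transport is injected, so a GUI, web or
    CLI front-end can reuse the same tagging logic."""

    def __init__(self, send):
        self.send = send  # callable that delivers a message to the teamserver

    def task(self, implant, opcode):
        """Tag a task with its recipient implant and hand it to the transport."""
        return self.send({"implant": implant, "opcode": opcode})

# Example transport: collect messages in a list (a real one would POST them).
sent = []
client = Client(sent.append)
client.task("implant-a", "whoami")
```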
Implant
This is the payload delivered to the victim. Normally you will dump all of your evasive TTPs here, as it will be dropped and run from disk.
Contains a database of opcodes, each corresponding to a task (e.g. whoami, ls)
Periodically queries the teamserver for new tasks
If there are new tasks, execute them; otherwise, go back to sleep
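The opcode database can be sketched as a simple dispatch table. The opcode values and handlers below are hypothetical stand-ins; a real implant would implement its tasks natively:

```python
import os

# Hypothetical opcode table mapping numeric opcodes to task handlers.
OPCODES = {
    0x01: os.getcwd,                 # e.g. a "pwd"-style task
    0x02: lambda: os.listdir("."),   # e.g. an "ls"-style task
}

def handle_tasks(tasks):
    """Execute recognised opcodes and collect their output; silently
    ignore opcodes the implant does not know about."""
    results = []
    for opcode in tasks:
        handler = OPCODES.get(opcode)
        if handler:
            results.append(handler())
    return results
```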
High-level Diagram
This diagram shows what a regular transaction would look like. Note that the specifics of how each task is run (e.g. task queuing, asynchronous execution, task IDs) have been omitted for simplicity.
Send Task (Client -> Teamserver)
The client tells the teamserver that the operator wants Implant A to run "whoami"
Teamserver acknowledges and forwards the task
Send Task (Teamserver -> Implant A)
Teamserver forwards task to the implant
Send Result (Implant A -> Teamserver)
After receiving the task, Implant A executes it based on its database of opcodes and sends the result back to the teamserver.
Read Result (Client -> Teamserver)
The client periodically queries the teamserver for the result; if there is no response yet, the operator can do something else while waiting.
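The client's "query, then move on" behaviour amounts to a non-blocking poll. A minimal sketch, where `read_result` is a hypothetical lookup against the teamserver and the interval/attempt counts are arbitrary:

```python
import time

def poll_result(read_result, task_id, interval=0.01, attempts=50):
    """Poll the teamserver for a task's result without blocking forever:
    return the output once it arrives, or None after `attempts` tries,
    letting the operator get on with other work."""
    for _ in range(attempts):
        result = read_result(task_id)
        if result is not None:
            return result
        time.sleep(interval)
    return None
```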
Task / Implant Collision
As there are many moving parts in a transaction, nearly every request is tagged with a unique identifier. To facilitate multiplayer mode (multiple clients and multiple implants), a unique identifier is issued for every implant as well as for each task transaction.
This prevents collisions where an implant incorrectly executes a task that was not assigned to it, or where one client retrieves the result of a task issued by another client.
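One simple way to issue these identifiers is a random UUIDv4 per implant and per task, which makes accidental collisions vanishingly unlikely. A sketch (the field names are hypothetical):

```python
import uuid

def new_uid():
    """Issue a collision-resistant identifier for an implant or a task."""
    return uuid.uuid4().hex

# Every task carries both its own UID and the UID of its target implant.
task = {"task_uid": new_uid(), "beacon_uid": new_uid(), "opcode": "whoami"}
```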
Updated Diagrams
In the updated diagrams, we'll take a look at how unique identifiers (UIDs) can be used to help with tagging objects during the transaction:
Send Task (Client -> Teamserver)
In this updated model, each implant is tagged with a Beacon UID.
Send Task (Teamserver -> Implant)
Instead of sending a task directly to an implant, we'll simply update a designated endpoint for that Beacon UID.
Example: POST https://teamserver.com/tasks/<Beacon_UID>
We'll also return a Task UID to the operator for them to check the results periodically.
Example: GET https://teamserver.com/results/<Task_UID>
Get Task (Implant -> Teamserver)
The implant will periodically query the endpoint created for it for new tasks, executing any that it finds.
Send Result (Implant -> Teamserver)
After executing the task and retrieving its output, the implant sends the result back to the teamserver together with the Task UID.
Read Results (Client -> Teamserver)
Similar to above, the client will periodically query the teamserver for the result using the Task UID.
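The updated, UID-tagged flow can be sketched end to end. The function names are hypothetical, and plain dicts stand in for the `POST /tasks/<Beacon_UID>` and `GET /results/<Task_UID>` endpoints shown above:

```python
import uuid
from collections import defaultdict

tasks_by_beacon = defaultdict(list)   # stand-in for POST /tasks/<beacon_uid>
results_by_task = {}                  # stand-in for GET /results/<task_uid>

def send_task(beacon_uid, opcode):
    """Client -> teamserver: queue a task and hand the operator a Task UID."""
    task_uid = uuid.uuid4().hex
    tasks_by_beacon[beacon_uid].append((task_uid, opcode))
    return task_uid

def get_tasks(beacon_uid):
    """Implant -> teamserver: drain the queue designated for this beacon."""
    tasks, tasks_by_beacon[beacon_uid] = tasks_by_beacon[beacon_uid], []
    return tasks

def send_result(task_uid, output):
    """Implant -> teamserver: file the output under its Task UID."""
    results_by_task[task_uid] = output

def read_result(task_uid):
    """Client -> teamserver: fetch the output once it exists, else None."""
    return results_by_task.get(task_uid)
```

Because every task carries its own UID, two clients tasking the same beacon can read back their own results without clashing.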
Summary
The prototype for a transaction in our C2 framework should look something like this, not counting any changes we make along the way.
Reasons for Design Choice
The use of task tagging and periodic polling for results, rather than waiting on a blocking callback (besides the reasons stated above), is to allow for asynchronous task execution.
For example, if the operator tasks the implant to perform an action that takes a long time (e.g. downloading a large file), then without task tagging and asynchronous execution the operator would have to wait for that task to complete before continuing with other tasks.
In this case, the implant starts a thread for each task and returns results as they come in.
As a result, the output of a quick second task can come back before that of a slow first task; results are returned in order of completion rather than one by one.
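A small sketch of that threading model, with hypothetical stand-ins for a slow task (the large download) and a fast one, shows results arriving in completion order rather than submission order:

```python
import queue, threading, time

results = queue.Queue()  # thread-safe: results arrive as tasks finish

def run_task(task_id, work):
    """Implant side: run one task in its own thread and report when done."""
    results.put((task_id, work()))

def slow():   # stand-in for a long task, e.g. downloading a large file
    time.sleep(0.2)
    return "big-file-contents"

def fast():   # stand-in for a quick task, e.g. whoami
    return "user"

threads = [
    threading.Thread(target=run_task, args=("task-1", slow)),
    threading.Thread(target=run_task, args=("task-2", fast)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# task-2 finishes first even though it was queued second.
order = [results.get()[0], results.get()[0]]
```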
Considerations
This is not the best architecture or design for a C2 framework, but it's the approach I felt most comfortable with :)