An introduction to Control Flow Graphs and Control Flow Integrity


It has been quite a while since I last tackled exploit mitigations on Linux. Nevertheless, I felt I should at least cover control flow graphs and control flow integrity before moving on to new topics.
This research article will be a bit on the theoretical side, to explain what CFG and CFI are all about.


  • A bunch of spare minutes
  • A basic understanding of what causes memory corruptions/information leaks
  • The will to ask or look up unknown terms yourself
  • Knowledge from my previous three installments

Control flow integrity

The previously discussed exploit mitigations tried to deal with memory corruption in general. They aim at minimizing the accumulated threat of arc injection attacks.
Due to the limitations of each proposal, that goal has not been fully reached so far. This showed the need for yet another exploit mitigation technique to counter the ever newly developed attacks. To ultimately deny return-to-known-code attacks, control flow graphs combined with the enforcement of a valid execution policy arose as a potential solution.


Enforcing control flow integrity (CFI) builds on creating a control flow graph (CFG) of an application prior to execution. Such a CFG consists of basic blocks of code [shown above] in which no jump, aka control flow transfer, takes place. These basic blocks are represented as nodes within the CFG. The graph can be created in a multitude of ways, such as source code analysis, binary analysis or execution profiling. Each methodology has its own advantages and disadvantages, and it needs to be considered which technique is used in the end. Potentially, a hybrid variant of all three can be chosen if the performance decline caused by combining them is negligible.

The higher goal of CFI is to make sure that no execution path taken within an application is invalid or has been constructed at runtime by tampering with pointers, since the ultimate achievement for an adversary is to gain control over a system or program. Runtime monitoring prevents control flow transfers that are not present in the CFG of an application. That way, only the intended and predefined valid transfers are enforced.
Branching instructions within a basic block always indicate such control flow transfers. They are either direct or indirect.

To accomplish the points mentioned above, CFI modifies each basic block. Simply put, blocks can be distinguished into 'the source', from which a transfer of control flow happens, and 'the destination', to which the control flow is about to be redirected. Changes are made for every block in the control flow graph. This is usually done via machine code rewriting, which places a unique identifier or bit pattern at the start of each destination block that functions as its name.

At every source, a computation and check routine is inserted before the jump happens. It calculates the unique identifier or bit pattern of the next code block (the destination) and checks the calculated value against a predefined value. Only if these two values match is the real address of the destination code block loaded and the jump to that address performed. That ensures a safe control flow transfer. So whenever a control flow transfer happens, dynamic checking and testing is enforced to make sure the current execution path is still valid. The choice of the inserted machine code instructions is not trivial, since they should be as efficient as possible and hence taken from the instruction set of the architecture the binary/application is built for. An illustration of this technique on an x86 architecture instruction set is shown below:


Using a combination of control flow graphs and policy enforcement requires the fulfillment of three essential assumptions:

  • First, the inserted IDs must hold the unique identifier property. That means these labels are not allowed to be present anywhere else in code memory at any point in time. This is realized by making the label patterns appropriately large (>= 32 bit) to avoid a coincidental duplicate of any identifier
  • Secondly, it must be guaranteed that under no circumstances an adversary can modify code memory at runtime. Otherwise, overwriting label checks would easily disable this technique
  • Lastly, it must not be possible to execute present data as code. If it were, an adversary could use this to execute data prepared with a valid label

When these assumptions are fulfilled, CFI ensures that a valid path from the given CFG is taken during runtime, resulting in an increased reliability of any CFG-based technique. Moreover, it can be used to prevent a variety of attacks, including code injection, code reuse and information leakage exploits. The drawback is that it heavily relies on the effectiveness and completeness of the control flow graph, which defines the upper bound of precision and therefore the precision of the individual runtime checks.

As of right now, no real CFI implementation can be found by default on a Linux system, but implementations are available in software frameworks like clang, an alternative compiler frontend for C-based languages*

*: As of my current knowledge

Furthermore, browsers like Chrome(ium) are compiled with CFI by default, since they are popular targets for exploitation.

Basic limitations

Control flow integrity brings several weaknesses with it, some of them caused by the control flow graph structure itself. This includes branch coverage, robustness and the ability to deploy the graphs easily. The biggest disadvantage of the original proposal stems from the property that a CFG is created in advance of running an application. The CFG therefore cannot adapt to the dynamic nature of the stack, which leads to the inability to enforce that functions return to their most recent call site. Hence CFI uses another technique to compensate for this: the shadow stack, which places return addresses in safe memory areas upon a call. On return, it compares the stored return address to the address the program is about to return to.

The rise of just-in-time (JIT) compilers, used in conjunction with scripting languages like JavaScript in many modern browsers, worsens the problem of generating a fitting control flow graph, since not all valid paths are known before execution. Moreover, the code heap used by JIT compilers is usually simultaneously writable and executable to allow important optimizations such as inline caching. This makes it a valid target for heap feng shui and heap spraying attacks, which increase the chance of hitting a valid area and gaining the ability to alter control flow. If an adversary succeeds in finding a single unprotected indirect branch, he is able to compromise an entire process.

Ineffectiveness of coarse grained control flow integrity

Coarse grained control flow integrity, as used in recent proposals like kBouncer, ROPecker or ROPGuard, comes at a price for relaxing the original CFI idea. The approach reduces the number of tags in a CFG to improve overall performance. It also defines new heuristics for what instruction sequences have to look like to be regarded as valid. This results in more execution paths being marked as legal. On top of that, coarse grained CFI solutions verify return instructions for one purpose only: they check whether the return points to a call-preceded instruction, to make sure control flow is returned to a valid code area. With this in mind, it turns out that ROP is still a valid approach to bypass even an advanced coarse grained control flow policy with truly strict rules. The rule set and heuristics this was proven against can be viewed below. It was shown that with these properties applied, resulting in a strict rule set and a minimized instruction set, it is possible to compute a Turing complete gadget set. Moreover, a clever usage of NOP sleds bypasses the predefined heuristics as well.

weakcfi [referenced paper at the end of the article]

This makes coarse grained control flow integrity vulnerable to a type of attack that is already in common use. The drawbacks of CFGs and CFI in general pointed out before, together with the major weakness presented above, shed a bad light on this not yet widely implemented mechanism.

Weaknesses of fine grained Control Flow Integrity

As a counterpart to coarse grained CFI, fine grained control flow integrity enforcement with an unlimited amount of allowed tags tries to ensure that the return address when leaving a function points to the original caller of that function. It is recovered from a shadow stack during the check process, similar to the procedure in coarse grained CFI. This technique suffers from the same weaknesses induced by CFG generation and deployment as described above. On top of that, it was proven that the tighter restrictions in fine grained CFI can be bypassed by a ROP attack variant. It leverages the fact that current CFG implementations, which rely on pointer analysis to build the graph, are sound but incomplete due to algorithmic trade-offs. This enables finding a new ROP gadget class labeled Argument Corruptible Indirect Call Sites (ACICS), which are pairs of an Indirect Call Site (ICS) and a target function within the program that enables remote code execution. They have two main properties:

  • An ACICS allows corrupting the arguments of an ICS.
  • An ACICS enables the execution of a targeted function when the first property is used in conjunction with a likewise corrupted forward pointer.

Using these ACICS leads to a successful hijack of the control flow while staying within the bounds of fine grained CFI, which lets the attack go undetected.


Control flow graphs and control flow integrity bring yet another security parameter to the table. At the time of this research, both were not widely adopted and were only used in edge cases or in very big/popular applications like Chrome(ium) to ensure additional hardening. It was shown that both fine and coarse grained CFI bring a lot of weaknesses with them and that exploiting or bypassing them is already possible. Nevertheless, we need to look at the whole picture again. If a piece of software has CFI enforced, it should have the other mitigations enabled as well. The combination of all of them already makes breaking a targeted software very tough. Not impossible, but definitely time consuming. Additionally, keep in mind that a lot of the PoCs shown at conferences and in papers were only possible in a specific lab environment.

As always, if there are any questions or just casual feedback of any kind, feel free to hit me up on any of the public channels displayed on this blog.