Symbolic execution with KLEE: From installation and introduction to bug-finding in open source software.

21st August 2020
David Korczynski,
Security Research & Security Engineering

In this post we present four videos covering KLEE, a symbolic execution engine that can be used to automate test-case generation as well as to find bugs. KLEE is open source, originates from academia, and has been actively maintained and developed for more than a decade. You can find the source code repository here. The maintainers have also built an ecosystem around the tool that makes it easy to get started, for example ready-made Docker images with KLEE pre-installed and a web-based interface for experimenting with it. The main website for KLEE is found here.

The four videos are meant to be viewed in order and take an incremental approach to understanding KLEE. The first video covers installation and running KLEE on small hello-world applications; the second explores how KLEE reacts to further example applications; the third shows how KLEE handles vulnerable applications; and the final video demonstrates how to analyse four different open source projects with KLEE, including the process of finding bugs in those projects. This post is a wrapper around the videos: rather than go into detail here, we summarise each video and use the post as glue between them.

Video 1: Introduction to symbolic execution with KLEE

The first video shows how to install the Docker image that ships with the KLEE source code and how to run KLEE from inside it. The video then goes through the steps needed to run a simple C application in the KLEE environment: using clang to extract the LLVM bitcode of the given file and then using KLEE to analyse the resulting bitcode. The video shows how to use the klee_make_symbolic function to make memory symbolic, how KLEE generates multiple test cases that explore different execution paths in the program, and how to use the ktest-tool utility to inspect the concrete values of the symbolic data. Finally, the video walks through several of the examples from the KLEE website, showing what happens when KLEE detects a bug in the target program and how to do root-cause analysis based on KLEE's output.

Video 2: Running six examples of diverse small C apps

The main purpose of the second video is to show how KLEE reacts to different types of C applications. The idea is to explore specific code constructs in a controlled manner, which is useful for understanding the scope of a program analysis tool. The specific aspect of KLEE that we investigate is its ability to explore different execution paths; we deliberately do not explore its bug-finding capabilities here. The video also shows how to use the klee-replay tool to run a generated test case against the native executable of the target program, in contrast to the LLVM bitcode that KLEE itself analyses.

Video 3: Capturing memory corruptions with symbolic execution

In the third video we explore the bug-finding side of KLEE. We test KLEE against four example programs that contain, respectively, vulnerabilities of the following types:

KLEE contains heuristics for detecting all of these vulnerability classes, and we go through the specific error reports that KLEE generates to show the process of root-cause analysis.

Video 4: Finding memory corruption bugs in OSS with symbolic execution

In the fourth and final video we go through the process of analysing four different open source projects with KLEE. The projects that we analyse are:

In the video we find a total of three bugs in the Tiny Regex and json.h libraries, all of them memory-out-of-bounds bugs. We also find several apparent bugs in the utf8.h library, but these turn out to be false positives: the way we set up the symbolic execution did not satisfy the preconditions under which the utf8.h code is expected to be used. We show this process in full detail, getting it wrong at first and then correcting it after reporting the "bugs" to the maintainers.

If you are interested in the bug reports to the maintainers and the process of resolving them, they are available at the following links: