Software almost always contains vulnerabilities, many of which cause serious problems such as crashes or leaks of sensitive user information. Software engineers have been fighting an endless battle against such bugs. To date, the best-known mechanism for testing against security vulnerabilities is grey-box fuzz testing (fuzzing), a proven and practical bug-detection methodology. At its core, fuzz testing uses a biased random search to uncover inputs that are likely to cause the program to misbehave (e.g., crash), thereby allowing bugs to be discovered and fixed before potential exploitation. In recent years, most software vulnerabilities have been found using fuzz testing. Our research program seeks to build next-generation fuzz-testing technologies, specifically in the light of recently witnessed supply-chain attacks.
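To make the "biased random search" concrete, here is a minimal sketch of a coverage-guided grey-box fuzzing loop in the style of tools like AFL. Everything in it is an illustrative assumption, not any real fuzzer's implementation: `target` is a toy program that crashes on a specific three-byte prefix, and its returned coverage set stands in for the instrumentation feedback a real fuzzer collects.

```python
import random

def target(data: bytes):
    """Toy program under test: returns the set of covered branch ids,
    and 'crashes' (raises) on inputs starting with b'FUZ'."""
    cov = {0}
    if len(data) > 0 and data[0] == ord("F"):
        cov.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            cov.add(2)
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("crash")
    return cov

def mutate(seed: bytes) -> bytes:
    """Biased random search step: randomly flip, insert, or delete one byte."""
    data = bytearray(seed)
    op = random.choice(["flip", "insert", "delete"])
    if op == "flip" and data:
        data[random.randrange(len(data))] = random.randrange(256)
    elif op == "insert":
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif op == "delete" and data:
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(iterations=200000, seed=b"AAA"):
    """Grey-box loop: keep any mutant that exercises new coverage,
    so the search is biased toward deeper program behavior."""
    corpus, global_cov, crashes = [seed], set(), []
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        try:
            cov = target(candidate)
        except RuntimeError:
            crashes.append(candidate)  # crashing input found
            continue
        if not cov <= global_cov:      # new coverage -> retain in corpus
            global_cov |= cov
            corpus.append(candidate)
    return corpus, crashes
```

The key design point is the feedback loop: even though each mutation is random, retaining inputs that reach new coverage lets the search solve the three-byte condition one byte at a time, rather than guessing all three at once.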
From a technical standpoint, this program will develop new testing and analysis techniques for detecting security vulnerabilities, specifically in concurrent, stateful, and reactive software systems. Traditionally, such systems have been checked via verification methods that store the state space in some form. Since verification methods are, in practice, often used for bug finding, we propose to develop smart fuzzing methods to validate stateful systems instead. Technically, this will involve innovations in (a) identifying state variables in programs, (b) inferring stateful behavior and state machines even when state variables cannot be accurately identified, (c) fuzzing concurrent systems so as to cover the space of interleavings, and (d) designing test oracles and automated testing techniques to find various kinds of bugs in data-centric software. Being able to validate concurrent, distributed, and stateful systems allows us to deeply test the impact of a vendor-provided component on (stateful) software. The proposed research is important in the context of recent well-known supply-chain attacks, such as SolarWinds. Fuzz testing and binary analysis techniques can be employed to prevent such attacks, or at least to mitigate their impact.
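As an illustration of points (a) and (b), the sketch below fuzzes command sequences against a hypothetical stateful server whose `state` field plays the role of an identified state variable. Observed state transitions serve as feedback, and the accumulated transition set doubles as an inferred state machine. The server, its command set, and all names are illustrative assumptions, not any real protocol implementation.

```python
import random

class ToyServer:
    """Toy stateful system: `state` is the (identified) state variable."""
    def __init__(self):
        self.state = "INIT"
    def handle(self, cmd):
        if self.state == "INIT" and cmd == "USER":
            self.state = "USER_OK"
        elif self.state == "USER_OK" and cmd == "PASS":
            self.state = "LOGGED_IN"
        elif self.state == "LOGGED_IN" and cmd == "RETR":
            self.state = "TRANSFER"
        elif cmd == "QUIT":
            self.state = "CLOSED"
        return self.state

COMMANDS = ["USER", "PASS", "RETR", "QUIT", "NOOP"]

def run(seq):
    """Replay a command sequence; return observed (state, cmd, state') transitions."""
    server, transitions = ToyServer(), set()
    for cmd in seq:
        before = server.state
        after = server.handle(cmd)
        transitions.add((before, cmd, after))
    return transitions

def state_fuzz(iterations=20000):
    """Stateful fuzzing loop: mutate command sequences and keep any sequence
    that exercises a new state transition. The returned transition set is,
    in effect, an inferred state machine of the system under test."""
    corpus, seen = [["NOOP"]], set()
    for _ in range(iterations):
        seq = list(random.choice(corpus))
        op = random.choice(["append", "replace", "truncate"])
        if op == "append":
            seq.append(random.choice(COMMANDS))
        elif op == "replace" and seq:
            seq[random.randrange(len(seq))] = random.choice(COMMANDS)
        elif op == "truncate" and len(seq) > 1:
            seq = seq[:random.randrange(1, len(seq))]
        trans = run(seq)
        if not trans <= seen:   # new transition discovered -> retain sequence
            seen |= trans
            corpus.append(seq)
    return seen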
The research program also investigates a sound statistical basis for comparing fuzzer tools, so that practitioners can choose fuzzers that find vulnerabilities effectively in their specific application set-up. This can also influence the way fuzzer evaluations are currently conducted on well-known computational infrastructures, including Google's FuzzBench.
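One minimal illustration of a statistically sound comparison: run each fuzzer for several independent trials and apply a nonparametric test to the per-trial measurements rather than comparing single runs. The sketch below uses a simple permutation test on made-up branch-coverage numbers (the fuzzer names and figures are purely illustrative); published evaluation guidelines typically recommend tests such as Mann-Whitney U, but the principle is the same.

```python
import random
import statistics

def permutation_test(a, b, trials=10000, seed=0):
    """Two-sided permutation test on the difference of means between two
    groups of per-trial fuzzing measurements (e.g., branches covered).
    Returns an estimated p-value."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(pa) - statistics.mean(pb)) >= observed:
            count += 1
    return count / trials

# Illustrative (made-up) branch-coverage counts from 10 independent
# one-hour trials of two hypothetical fuzzers on the same target.
fuzzer_a = [1510, 1498, 1523, 1505, 1517, 1500, 1495, 1521, 1509, 1512]
fuzzer_b = [1450, 1462, 1441, 1470, 1455, 1448, 1466, 1453, 1459, 1444]
p_value = permutation_test(fuzzer_a, fuzzer_b)
```

A small p-value indicates the coverage difference is unlikely to be due to run-to-run randomness alone; this is the kind of evidence single-run comparisons cannot provide.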
Check out our IEEE Software article reflecting on fuzzing as a field. In it, we summarize the open challenges and opportunities for fuzzing and symbolic execution as they emerged in discussions among researchers and practitioners at a Shonan Meeting, and as validated in a subsequent survey.
This research is supported by the National Research Foundation, Singapore, and the Cyber Security Agency of Singapore under its National Cybersecurity R&D Programme (Fuzz Testing NRF-NCR25-Fuzz-0001). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, or the Cyber Security Agency of Singapore.
- July 2023: New research project funded!
- April 2022: Abhik Roychoudhury honored with IEEE New Directions Award