What are the relative advantages and disadvantages of each form of testing?
I.e. What is the difference between static code analysis and runtime/dynamic penetration testing?
What are the pros and cons of each? Are there situations where one is preferable over the other?
- To be clear, whitebox != static analysis and blackbox != runtime/pentesting, but let's assume for the sake of this question that that is what is meant. – AviD Nov 14 '10 at 22:57
3 Answers
I believe this question is best addressed in chapter 4 of The Art of Software Security Assessment, a book by Mark Dowd, Justin Schuh, and John McDonald.
Without it as a reference, I can safely tell you that the best method is to use runtime data along with the source code while determining "hits" (or traces, aka coverage) via black-box testing -- but only after a threat model and the general architecture of the system are well understood.
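To make "runtime data along with the source code" concrete, here is a minimal sketch in Python (the `process_request` handler and the payloads are hypothetical placeholders, not taken from the book): it records which source lines a set of black-box inputs actually exercise, so dynamic findings can be read alongside the code.

```python
# A minimal sketch: record per-line hit counts while replaying black-box
# inputs, using the standard-library `trace` module.
# `process_request` and the payloads are hypothetical placeholders.
import trace

def process_request(data):
    """Stand-in for the application entry point under test."""
    if data.startswith("admin:"):
        return "privileged path"
    return "normal path"

tracer = trace.Trace(count=True, trace=False)
for payload in ["guest:hello", "admin:' OR 1=1 --"]:  # black-box test inputs
    tracer.runfunc(process_request, payload)

# Writes *.cover files with per-line hit counts -- the "traces, aka coverage"
# that can then be correlated with the source during review.
tracer.results().write_results(show_missing=True, coverdir=".")
```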
The authors also appear to like secure static code analysis when combined with candidate-point strategies, although in my opinion these can vary wildly in value unless all of the following hold:
- The language, and its base class libraries, must be supported by the secure static analysis tool
- The entire system must usually be available, i.e. buildable source code including all third-party/contrib components and external libraries -- possibly even the system compiler, VM, or other artifacts of the original build environment
- All external components/libraries that are not part of the base class libraries must have sources and sinks defined in the secure static analysis tool's source-sink-passthru database. The intricacies of some passthrus (i.e. filters) can vary by implementation or implementer, and thus almost always require custom configuration (see the sketch after this list)
- Use of certain patterns or architectural code elements can introduce further variation, requiring customization that isn't possible with most modern secure static analysis tools
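As a hedged illustration of the passthru point above (every name here is invented; this is not the configuration format of any particular tool), this is the kind of source-to-sink flow a taint-tracking analyzer has to model. If the filter's behavior is not described in its source-sink-passthru database, the tool cannot tell whether tainted data reaches the sink:

```python
# Hypothetical source -> passthru -> sink flow. The body of `thirdparty_clean`
# stands in for an external library whose source is unavailable; the analyzer
# needs a custom passthru rule stating whether it actually sanitizes input.
import sqlite3

def thirdparty_clean(value):
    return value  # unknown to the tool: filter, encode, or plain pass-through?

def handler(user_id):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    filtered = thirdparty_clean(user_id)                      # passthru (filter)
    query = f"SELECT name FROM users WHERE id = {filtered}"   # sink: dynamic SQL
    return conn.execute(query).fetchall()

print(handler("1 OR 1=1"))  # source: attacker-controlled input reaches the sink
```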
For the reasons above, as well as those put forth in the NIST SATE studies (run by NIST SAMATE), I find it difficult to recommend many secure static analysis tools for white-box analysis. It is almost always better to use code comprehension strategies, which usually means reading the source code from top to bottom -- something that is especially important if you are looking for managed code rootkits and the like.
Instead of testing and auditing/assessing applications, I would take a different approach that is largely technology-agnostic. My suggestion would be to implement an application security risk management portal that includes an app inventory along with each app's currently implemented application security controls. After an initial baseline, the application security controls should be evaluated against industry standards such as MITRE CWE, SAFEcode, and OWASP ASVS. A gap analysis (note that this is a standard risk management term and works best when implemented in an information security management program based on ISO 27001 or similar) can then be used to determine the optimal application security controls, as well as a path from the currently implemented controls to the required ones.
You should implement this risk management portal before performing risk assessment activities such as white-box or black-box testing, both to get better results and to measure the success of your program.
- +1 (virtually, since I'm out of votes for today...) Code review is still whitebox, whether you use an automatic tool or not (manual vs automatic was asked elsewhere). I also appreciate your points on risk management and control gaps, but this question does refer specifically to the testing - white or black. – AviD Nov 14 '10 at 22:21
- Also btw, reading code cannot discover managed code rootkits, that's the whole point - they're in the framework and completely undetectable from the application code. Au contraire, you *might* be better off blackboxing if you suspect a rootkit, though all bets are off if the rootkit author is trying to hide... – AviD Nov 14 '10 at 22:22
- @AviD: Every piece of a system should be reverse engineered and statically analyzed with manual code review, including the frameworks. This is real-world secure code review. – atdre Nov 15 '10 at 01:42
- Wow, wait, what? 1. "Real world" - it's hard enough to get even the biggest, most "secure" orgs to do ANY code review; tripling the scope would make it even worse. 2. When is WB done on production servers? Otherwise, RE is pointless. 3. Do you really mean that EVERY time you review ANY application, you expect to fully review the entire framework? Which, btw, is usually much larger than most applications... 4. That does nothing to ensure the security of the *production* environment, since if we're talking rootkitting it's a deployment issue anyway. – AviD Nov 15 '10 at 05:54
- In integration or dev, which is cloned to prod. No testing/inspection in prod. As for scope, well, adversaries don't limit their scope, so secure orgs shouldn't either. – atdre Nov 15 '10 at 10:06
- Ah, if "shouldn't" were real world we'd be in a completely different place by now :). Unfortunately, orgs (even "secure" ones) do a lot of things they "shouldn't" do. Moreover, I'm not so sure they should do this - those security dollars would be better spent elsewhere. It comes back around to risk management... – AviD Nov 16 '10 at 06:32
- As for checking for [managed code rootkits](http://www.amazon.com/Managed-Code-Rootkits-Hooking-Environments/dp/1597495743), you'd be better off inspecting your production servers to verify the core framework files, rather than attempting to code review the entire framework. Needle/haystack and such... – AviD Nov 16 '10 at 06:34
Some really good answers here, but a few additional points I think are important to add:
- As @atdre mentioned, it shouldn't be either/or: these are two different creatures, and they measure different things. If at all possible, you should do both.
- Also as @atdre said, testing - even whitebox + blackbox - is not enough. There are other things you need to do to be secure, including all of a holistic SDL, with proper risk management, analysis, etc.
- To the point... Blackbox is usually quicker, often by an order of magnitude. Whitebox (including code review) usually requires a lot more work.
- Blackbox (i.e. pentesting) is usually cheaper than whitebox, not just in total but also per hour.
- There are more quality 3rd party blackbox providers than whitebox - not in total, but counting only those that really know what they're doing. (Or is that just my perception?)
- WB often finds much deeper vulnerabilities than BB.
- WB can often find faulty filters that BB was not able to bypass (until you know how the filter is constructed - another point in favor of gray-box testing); see the sketch after this list.
- There are many types of flaws that BB cannot even test - e.g. audit logging, crypto flaws, backend hardening, etc.
- BB can test the entire system, with all the non-code protections in place (e.g. WAF, IPS, OS hardening, etc.), whereas WB works only at the application level (code, design, etc.). Note that this can go both ways - sometimes you are prevented from completing a BB scan, even though once you know the vector you would be able to bypass the protections.
- Similarly, BB can discover faulty interactions between sub-systems, which WB will usually miss. Think of it as unit testing vs. system testing.
- WB can often be performed long before the system is complete; for BB the system needs to be built, compiled, up and running (and preferably with most functional bugs shaken out). This can make an SDL more efficient, since reviews can be done early in the lifecycle.
- On the other hand, if a system is already up, it's simple to start a BB, but if you want to do WB (and do it right) you have to start hunting down all the source code, libraries, tools, etc. Often you don't even have the source code because it's 3rd party, COTS, etc.
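To illustrate the "faulty filter" point above, here is a hedged sketch (the filter is invented for illustration, not taken from any real application) of a sanitizer that may hold up against casual black-box probing but is an immediate finding once you can read it:

```python
# Hypothetical, deliberately naive filter: it removes "<script>" only once and
# only in lower case. A black-box tester who never sees the code may give up
# after a few payloads; a code reviewer spots the bypass immediately.
def naive_xss_filter(value):
    return value.replace("<script>", "", 1)

payload = "<scr<script>ipt>alert(1)</script>"
print(naive_xss_filter(payload))  # -> "<script>alert(1)</script>" (filter bypassed)
```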
Black box
- pros: easy, quick and simple testing
- cons: sometimes it is not possible to test some parts of the application (e.g. hashing algorithms, session ID entropy, ...); you cannot be sure whether the whole application was tested (see the sketch below)
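As a hedged example of the session ID entropy point (the generator below is invented for illustration), this is a flaw that looks fine from the outside but is obvious in the source:

```python
# Hypothetical session ID generator. The output looks random in black-box
# testing, but `random` is not a cryptographic PRNG, so its output becomes
# predictable once enough IDs are observed. A code review catches this at a glance.
import random
import secrets

def weak_session_id():
    return "%032x" % random.getrandbits(128)  # predictable PRNG -- the flaw

def strong_session_id():
    return secrets.token_hex(16)              # cryptographically secure fix

print(weak_session_id())
print(strong_session_id())
```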
White box
- pros: ability to check the source code (saves time - no need to test for SQL injection if you can see that parameters are used safely everywhere, as sketched below); you can test parts of the application which are not accessible/testable through the GUI
- cons: tests can become really complex
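A brief sketch of that code-review shortcut (the queries below are invented for illustration): the bound-parameter version needs no injection testing, while the concatenated version is an immediate finding without sending a single request.

```python
# Hypothetical data-access code. The "?" placeholder in the first query tells a
# reviewer injection testing is unnecessary there; the second query is visibly
# injectable just from reading the source.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def get_user_safe(user_id):
    # Bound parameter: the driver keeps the value out of the SQL text.
    return conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchall()

def get_user_unsafe(user_id):
    # String formatting: attacker-controlled input becomes part of the SQL.
    return conn.execute(f"SELECT name FROM users WHERE id = {user_id}").fetchall()

print(get_user_safe(1))             # [('alice',)]
print(get_user_unsafe("1 OR 1=1"))  # returns every row -- injection succeeds
```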
In general, white box testing allows you to dive into the source code and perform a complete penetration test, but it can be very time consuming, while black box is easy, fast and simple. I prefer gray box testing - using black box methods and interviewing developers / checking the source code only for specific parts of the application (authentication, session management, configuration management, ...).