The Joys of Waiting for Tools Part II
Sometimes the joy of waiting for tools is that you simply can't wait. No matter how happily the tools will take care of the grunt work without getting tired or bored, sometimes you can't let them.
Time is rarely on your side when it comes to work, and sometimes the schedule just won't allow waiting for the tools. With testing like dynamic analysis of a website, the schedule often restricts your time further so that you do not impact other development or testing work. You probably do not have exclusive access to the testing environment, after all, and rapid, automated testing of a website can have a significant impact on it. You might be restricted to after-hours scanning. You will probably have to finish the testing by a certain date, and maybe the tool just can't meet that deadline given the other constraints.
As good as the software security tools can be, they still lack a lot of smarts and judgment. We humans know when we are approaching a deadline and adjust our approach accordingly. We can change priorities. We can say we've seen enough of this and decide to concentrate on that. We can decide that if a certain type of testing is producing a lot of false positives, we should focus on testing that produces true positives. We can call an issue systemic if we see the same result over and over and over again, because there is little reason to keep testing for it when the time we have is better spent elsewhere.
The tools can't do that. They just chug merrily along doing the drudge work. The same thing over and over and over no matter how long it takes. The very thing that makes them great at doing that tedious stuff can make them very bad at meeting a deadline.
So we have to do it for them. We have to provide that judgment for the tools.
We know the schedule and need to judge how the tool is progressing against it. If it isn't going to make it, we need to make the changes the tool can't make on its own. Lots of false positives on one test? Turn it off. Lots of true positives on a particular test? Call it a systemic problem and stop testing for it so there is time for other tests. Have a web page with a whole lot of fields on it? Exclude it from the main test and test it separately so that one page doesn't consume too much of the testing time. Maybe the website is on an under-powered server or a slow connection, and reducing the number of testing threads may actually speed things up. If the tool seems to think its session has expired and keeps logging back on, maybe its in-session detection relies on a part of the GUI that the testing bypasses, and you need to switch to out-of-session detection so it stops wasting time logging in when it doesn't have to.
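If you want something more than eyeballing the scan dashboard, that clock-watching can even be scripted. Here is a minimal sketch of the idea in Python; the ScanStatus and CheckStats structures, the thresholds, and the plan_adjustments function are hypothetical stand-ins, not any real scanner's API. In practice you would pull these numbers from your tool's dashboard or reporting interface and apply the changes through its own settings.

```python
# Minimal sketch: decide what to change when the scan won't make the deadline.
# All names and thresholds here are hypothetical, not any real scanner's API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CheckStats:
    name: str
    hits: int               # findings reported so far by this check
    false_positives: int    # how many of those you have triaged as noise

@dataclass
class ScanStatus:
    estimated_finish: datetime
    checks: list[CheckStats]
    relogin_count: int      # times the tool thought its session expired and logged in again

def plan_adjustments(status: ScanStatus, deadline: datetime) -> list[str]:
    """Return the judgment calls the tool can't make on its own."""
    if status.estimated_finish <= deadline:
        return []           # on track: let the tool chug merrily along

    actions = []
    for check in status.checks:
        if check.hits and check.false_positives / check.hits >= 0.8:
            actions.append(f"disable {check.name}: mostly false positives")
        elif check.hits - check.false_positives >= 25:
            actions.append(f"disable {check.name}: systemic, report it once and move on")
    if status.relogin_count > 50:
        actions.append("switch to out-of-session detection: stop the needless logins")
    return actions
```

The real work is still the judgment behind the thresholds, of course; the numbers above are placeholders for whatever your schedule and your application will tolerate.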
Sometimes we do not have the luxury of waiting for the tools to do their jobs. The tools don't get bored, so they don't watch the clock. We can and we must. We know how to take shortcuts and can make the judgment call when one is necessary. We poor, easily bored humans need to provide the judgment the tools can't.