Fuzz testing, or fuzzing, is a black-box software testing technique which consists of finding implementation bugs by injecting malformed/semi-malformed data in an automated fashion.
A trivial example:
Let's consider an integer in a program which stores the result of a user's choice among three questions. When the user picks one, the choice will be 0, 1 or 2, which makes three practical cases. But what if we transmit 3, or 255? We can, because integers are stored in a fixed-size variable. If the default switch case hasn't been implemented securely, the program may crash and lead to "classical" security issues: (un)exploitable buffer overflows, DoS, ...
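The scenario above can be sketched in Python (the handler names are hypothetical; in Python the out-of-range access raises an exception rather than overflowing a buffer as it might in C, but the missing-default-case pattern is the same):

```python
# Hypothetical handler that only anticipates choices 0, 1 and 2.
ANSWERS = ["yes", "no", "maybe"]

def handle_choice_unsafe(choice: int) -> str:
    # No validation: a transmitted value of 3 or 255 indexes past the
    # fixed-size table and raises IndexError (the Python analogue of
    # reading past a fixed-size buffer in C).
    return ANSWERS[choice]

def handle_choice_safe(choice: int) -> str:
    # Securely implemented default case: reject unexpected values.
    if choice not in (0, 1, 2):
        return "invalid choice"
    return ANSWERS[choice]
```

A fuzzer would find the first variant simply by throwing unexpected integers at it.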
Fuzzing is the art of automatic bug finding, and its role is to find software implementation faults and identify them if possible.
Fuzz testing was developed at the University of Wisconsin-Madison in 1989 by Professor Barton Miller and his students. Their work can be found at http://www.cs.wisc.edu/~bart/fuzz/
A fuzzer is a program which automatically injects semi-random data into a program/stack and detects bugs. The data-generation part is made of generators, and vulnerability identification relies on debugging tools.
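A minimal sketch of those two parts, assuming a toy in-process target (`parse` and its planted bug are invented for illustration; a real fuzzer would monitor a separate process with a debugger):

```python
import random
import string

def generate_input(max_len: int = 64) -> str:
    # Generator part: semi-random printable strings of random length.
    length = random.randrange(max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, runs: int = 1000):
    # Detection part: record every input that makes the target blow up.
    crashes = []
    for _ in range(runs):
        data = generate_input()
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

def parse(data: str) -> int:
    # Toy target with a planted bug: fails on any input containing '%'.
    if "%" in data:
        return int("not a number")  # raises ValueError
    return len(data)
```

Running `fuzz(parse, runs=2000)` surfaces the planted bug without ever looking at the target's source.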
A protocol fuzzer sends forged packets to the tested system, a file-format fuzzer generates files and opens them one after another, and an application fuzzer feeds forged data to the application's inputs.
Comparison with cryptanalysis
The number of possible solutions that can be tried is the explorable solution space. The aim of cryptanalysis is to reduce this space: to find a way of having fewer keys to try than pure brute force when decrypting something.
Most of the fuzzers are:
- protocol/file-format dependent
- data-type dependent
This is because a program only understands data that is structured enough. If you connect to a web server in a raw way, it will only respond to listed commands such as GET (or possibly crash). It takes less time to start the string with "GET " and fuzz the rest, but the drawback is that you skip all tests on the first verb.
In this regard, fuzzers try to reduce the number of useless tests, i.e. values we already know have little chance of working: you reduce unpredictability in favor of speed.
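The "fix the verb, fuzz the rest" trade-off can be sketched as a payload generator (the path vectors below are illustrative assumptions; nothing is actually sent to a server):

```python
import random

def http_request_payloads(count: int, seed: int = 0):
    # Keep the known verb "GET " fixed and fuzz only the rest of the
    # request line: faster, but the verb itself is never tested.
    rng = random.Random(seed)
    path_vectors = [
        "/",
        "/" + "A" * 4096,        # oversized path
        "/%00",                   # embedded null
        "/../../etc/passwd",      # path traversal attempt
        "/?q=" + "%n" * 32,       # format-string characters
    ]
    return [f"GET {rng.choice(path_vectors)} HTTP/1.0\r\n\r\n"
            for _ in range(count)]
```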
A fuzzer would try combinations of attacks on:
- numbers (signed/unsigned integers/float...)
- chars (urls, command-line inputs)
- metadata: user-input text (e.g. ID3 tags)
- pure binary sequences
A common approach to fuzzing is to define lists of "known-to-be-dangerous values" (fuzz vectors) for each type, and to inject them or recombinations of them:
- for integers: zero, possibly negative or very big numbers
- for chars: escaped or interpretable characters/instructions (e.g. for SQL requests: quotes, commands, ...)
- for binary: random sequences
Please refer to OWASP's Fuzz Vectors resource for real-life examples.
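A sketch of this vector-list approach (the vectors below are a small, assumed sample per type, not the OWASP lists themselves):

```python
import os
import random

# Assumed fuzz vectors per type, loosely modelled on the lists above.
INT_VECTORS = [0, -1, 1, 255, 2**31 - 1, -2**31, 2**32]
STRING_VECTORS = ["'", "\"", "' OR '1'='1", "; ls", "%s%s%s%n", "A" * 1024]
BINARY_VECTORS = [os.urandom(16) for _ in range(4)]  # random sequences

def recombine(count: int, seed: int = 0):
    # Inject known-dangerous values and recombinations of them into a
    # hypothetical record with an integer, a string and a binary field.
    rng = random.Random(seed)
    return [{
        "id": rng.choice(INT_VECTORS),
        "name": rng.choice(STRING_VECTORS),
        "blob": rng.choice(BINARY_VECTORS),
    } for _ in range(count)]
```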
Protocols and file formats imply standards, which are sometimes blurry, very complicated or badly implemented: that's why developers sometimes mess up the implementation process (because of time/cost constraints). It can therefore be interesting to take the opposite approach: take a standard, look at all mandatory features and constraints, and try all of them, including forbidden/reserved values, linked parameters and field sizes. That would be hand-fuzzing.
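For instance, suppose a (made-up) specification says a field must hold 1..100, with 0 and 255 reserved. The spec-driven approach enumerates exactly the values the standard forbids, instead of hoping random data hits them:

```python
# Hypothetical field constraints taken from an imagined specification.
FIELD_MIN, FIELD_MAX = 1, 100
RESERVED = {0, 255}

def forbidden_values(lo: int = -5, hi: int = 260):
    # Everything outside the allowed range, plus the reserved values.
    out_of_range = {v for v in range(lo, hi)
                    if not FIELD_MIN <= v <= FIELD_MAX}
    return sorted(out_of_range | RESERVED)
```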
One of the important aspects of a fuzzer is that one doesn't need to look at the internals of the tested system: the systematic/random approach allows this method to find bugs that would often have been missed by human eyes.
Fuzzers usually tend to try single-parameter attacks, which means changing only one parameter at a time. Therefore, fuzzing tools can detect trivial errors, but are less effective against deep ones.
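A sketch of this one-parameter-at-a-time strategy (the baseline record and mutation values are assumptions for illustration):

```python
# Start from a valid baseline and vary a single field per test case:
# this catches shallow bugs but misses faults that require several
# malformed fields at once.
BASELINE = {"id": 1, "name": "alice", "size": 10}
MUTATIONS = {
    "id": [0, -1, 2**31 - 1],
    "name": ["", "A" * 1024, "'"],
    "size": [-1, 2**32],
}

def single_field_cases():
    cases = []
    for field, values in MUTATIONS.items():
        for value in values:
            case = dict(BASELINE)  # every other parameter stays valid
            case[field] = value
            cases.append(case)
    return cases
```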
Another problem is that when you do black-box testing, you usually attack a closed system, which makes it harder to evaluate the severity/impact of a found vulnerability.
The purpose of fuzzing relies on the assumption that there are bugs within every program, waiting to be discovered. A systematic approach should therefore find them sooner or later.
Fuzzing can add another point of view to classical software testing techniques (manual code review, debugging) because of its non-human approach. It doesn't replace them, but is a reasonable complement, thanks to the limited work needed to put the procedure in place.
Fuzzers from OWASP
Fuzzing with WebScarab: a framework for analysing applications that communicate using the HTTP and HTTPS protocols
JBroFuzz: a stateless network protocol fuzzer
WSFuzzer: a real-world manual SOAP pen-testing tool
Technical resources on OWASP
Fuzzing vectors 
Wikipedia article 
Fuzzing-related papers 
The ultimate fuzzers list @ infosec 
Another list @ hacksafe 
The fuzzing mailing list 
Codenomicon's product suite: