
FFuf

FFuf (Fuzz Faster U Fool) is a fast web fuzzer written in Go. Tools such as ffuf give us a handy, automated way to fuzz a web application’s individual components or pages. For example, we can take a wordlist of common page names and send a request to the web server for each entry to check whether a page with that name exists. If we get a response code 200, we know that the page exists on the web server, and we can look at it manually.

Understanding how ffuf works is critical for effective web enumeration and penetration testing. The following topics will be discussed:

  • Fuzzing for directories
  • Fuzzing for files and extensions
  • Identifying hidden vhosts
  • Fuzzing for PHP parameters
  • Fuzzing for parameter values

Fuzzing

The term fuzzing refers to a testing technique that sends various types of user input to a certain interface to study how it would react. If we were fuzzing for SQL injection vulnerabilities, we would be sending random special characters and seeing how the server would react. If we were fuzzing for a buffer overflow, we would be sending long strings and incrementing their length to see if and when the binary would break.

For web fuzzing, we usually utilize pre-defined wordlists of commonly used terms for each type of test to see whether the web server accepts them. This is done because web servers do not usually provide a directory of all available links and domains (unless terribly configured), so we have to check for various links and see which ones return pages.

For example, if we visit a page that doesn’t exist, we would get an HTTP code 404 Page Not Found. However, if we visit a page that exists, like /login, we would get the login page and get an HTTP code 200 OK.

This is the basic idea behind web fuzzing for pages and directories. Still, we cannot do this manually, as it will take forever. This is why we have tools that do this automatically, efficiently, and very quickly. Such tools send hundreds of requests every second, study the response HTTP code, and determine whether the page exists or not. Thus, we can quickly determine what pages exist and then manually examine them to see their content.
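
This status-code check can be sketched without ffuf at all. The snippet below is a toy demonstration, not the real workflow: it serves a scratch directory with Python's built-in web server on a made-up local port (8099) with made-up page names, then reads the status codes the way a fuzzer would for every word in its list.

```shell
# Serve a scratch directory containing one real page, then compare the
# status codes for an existing and a missing path. The directory, port,
# and page names are all invented for this demo.
mkdir -p /tmp/fuzzdemo && echo 'hello' > /tmp/fuzzdemo/login
python3 -m http.server 8099 --directory /tmp/fuzzdemo >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8099/login   # existing page: 200
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8099/secret  # missing page: 404
kill "$SERVER_PID"
```

ffuf performs the same request-and-classify loop, just concurrently and hundreds of times per second.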


Wordlists

To determine which pages exist, we should have a wordlist containing commonly used words for web directories and pages, very similar to a password dictionary attack. While this will not reveal all pages on a specific website, since some pages are randomly or uniquely named, it generally returns the majority of pages, reaching up to a 90% success rate on some websites.

We will not have to reinvent the wheel by manually creating these wordlists, as great efforts have been made to search the web and determine the most commonly used words for each type of fuzzing. Some of the most commonly used wordlists can be found under the GitHub SecLists repository, which categorizes wordlists under various types of fuzzing, even including commonly used passwords.

Within our PwnBox, we can find the entire SecLists repo available under /opt/useful/SecLists. The specific wordlist we will be utilizing for pages and directory fuzzing is another commonly used wordlist called directory-list-2.3, and it is available in various forms and sizes.

Tip: Taking a look at this wordlist, we will notice that it contains copyright comments at the beginning, which can be considered part of the wordlist and clutter the results. We can get rid of these lines in ffuf with the -ic flag.
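
To see what -ic actually skips, here is a tiny sketch with an invented wordlist that mimics the comment header; the -ic flag has the same effect as this grep filter, applied on the fly:

```shell
# Fake wordlist with the kind of '#' comment header that directory-list-2.3 carries.
printf '%s\n' '# Copyright 2007 James Fisher' '# Priority-ordered list' 'index' 'admin' > /tmp/demo-words.txt
# ffuf -ic ignores the comment lines; equivalent to:
grep -v '^#' /tmp/demo-words.txt   # prints: index, admin
```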

Common Wordlists

Type | Path | Purpose
Directory/Page | /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt | General directory and file discovery
Extensions | /opt/useful/seclists/Discovery/Web-Content/web-extensions.txt | File extension variations
Domain | /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt | Subdomain enumeration
Parameters | /opt/useful/seclists/Discovery/Web-Content/burp-parameter-names.txt | Common parameter names

Directory Fuzzing

We start by learning the basics of using ffuf to fuzz websites for directories. The two main options are -w for the wordlist and -u for the URL. We can assign a wordlist to a keyword and then refer to that keyword wherever we want to fuzz. For example, we can pick our wordlist and assign the keyword FUZZ to it by adding :FUZZ after it.

Next, as we want to be fuzzing for web directories, we can place the FUZZ keyword where the directory would be within our URL.

Basic Directory Fuzzing

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ
  • Replaces FUZZ with each word from the wordlist
  • Tests almost 90k URLs in less than 10 seconds
  • Useful for finding admin panels, backup files, configuration files, and hidden endpoints
  • Results show HTTP status codes (200, 301, 302, etc.) indicating which directories exist

Note: We can make the scan go even faster if we are in a hurry by increasing the number of threads, for example to 200 with -t 200. However, this is not recommended, especially against a remote site, as it may disrupt the site and cause a denial of service, or bring down your own internet connection in severe cases.


Extension Fuzzing

In the previous section, we found that we had access to /blog, but the directory returned an empty page, and we cannot manually locate any links or pages. So, we will once again utilize web fuzzing to see if the directory contains any hidden pages. However, before we start, we must find out what types of pages the website uses, like .html, .aspx, .php, or something else.

One common way to identify the extension is to find the server type through the HTTP response headers and guess from there. For example, if the server is Apache, the pages may be .php; if it is IIS, they could be .asp or .aspx, and so on. This method is not very practical, though.

So, we will again utilize ffuf to fuzz the extension, similar to how we fuzzed for directories. Instead of placing the FUZZ keyword where the directory name would be, we place it where the extension would be (.FUZZ) and use a wordlist of common extensions.

Note: The wordlist we chose already contains a dot (.), so we will not have to add the dot after “index” in our fuzzing.

Before we start fuzzing, we must decide which filename to append the extensions to. We could always use two wordlists with a unique keyword for each and fuzz both as FUZZ_1.FUZZ_2. However, there is one file we can find on most websites, index.*, so we will use it as our filename and fuzz extensions on it.
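
For completeness, the two-wordlist form would look roughly like this. This is a sketch only: SERVER_IP:PORT are the same placeholders used throughout this section, the keyword names are arbitrary, and since the extensions wordlist already includes the dot, the keywords are joined without one:

```shell
# Sketch: one keyword per wordlist, combined in the URL as FUZZ_1FUZZ_2
ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ_1 \
     -w /opt/useful/seclists/Discovery/Web-Content/web-extensions.txt:FUZZ_2 \
     -u http://SERVER_IP:PORT/blog/FUZZ_1FUZZ_2
```

This multiplies the two wordlists together, so the request count grows very quickly; fixing the filename to index and fuzzing only the extension is far cheaper.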

Extension Fuzzing Example

ffuf -w /opt/useful/seclists/Discovery/Web-Content/web-extensions.txt:FUZZ -u http://SERVER_IP:PORT/blog/indexFUZZ
  • Tests extensions like .php, .bak, .old, .txt, etc.
  • Common for finding backup files or alternative file formats
  • Helps identify what technology stack the website uses

Page Fuzzing

We will now use the same concept of keywords we’ve been using with ffuf, use .php as the extension, place our FUZZ keyword where the filename should be, and use the same wordlist we used for fuzzing directories.

Page Fuzzing Example

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/blog/FUZZ.php
  • Useful when you know the directory structure but want to find specific pages
  • Can be combined with extension fuzzing for comprehensive discovery
  • Results show which pages exist within the directory

Recursive Fuzzing

So far, we have been fuzzing for directories, then going under these directories, and then fuzzing for files. However, if we had dozens of directories, each with their own subdirectories and files, this would take a very long time to complete. To be able to automate this, we will utilize what is known as recursive fuzzing.

Recursive Flags

When we scan recursively, ffuf automatically starts another scan under each newly identified directory, and keeps going until it has fuzzed the main website and all of its subdirectories.

Some websites may have a big tree of sub-directories, like /login/user/content/uploads/...etc, which expands the scanning tree and may take a very long time to scan in full. This is why it is always advised to specify a depth for our recursive scan, so that it will not scan directories deeper than that depth. Once we fuzz the first directories, we can pick the most interesting ones and run another scan to better direct our efforts.

In ffuf, we can enable recursive scanning with the -recursion flag and specify the depth with the -recursion-depth flag. If we specify -recursion-depth 1, it will only fuzz the main directories and their direct sub-directories; if any sub-sub-directories are identified (like /login/user), it will not fuzz them for pages. When using recursion in ffuf, we can specify our extension with -e .php.

Note: We can still use .php as our page extension, as these extensions are usually site-wide.

Finally, we will also add the flag -v to output the full URLs. Otherwise, it may be difficult to tell which .php file lies under which directory.

Recursive Fuzzing Example

ffuf -w /opt/useful/seclists/Discovery/Web-Content/directory-list-2.3-small.txt:FUZZ -u http://SERVER_IP:PORT/FUZZ -recursion -recursion-depth 1 -e .php -v
  • -recursion: Enables recursive directory fuzzing
  • -recursion-depth: Limits how deep to recurse (prevents infinite loops)
  • -e: Adds extensions to discovered directories
  • -v: Verbose output for better visibility
  • The scan takes much longer and sends almost six times the number of requests, as the wordlist effectively doubles in size (each word is tried once with .php and once without)

DNS Records

Once we access the page under /blog, we get a message saying Admin panel moved to academy.htb. If we visit that address in our browser, we get “can’t connect to the server at www.academy.htb”.

This is because the exercises we do are not public websites that can be accessed by anyone, but local websites within HTB. Browsers only understand how to go to IPs; if we provide them with a URL, they try to map the URL to an IP by looking in the local /etc/hosts file and then in public DNS (Domain Name System). If the URL is in neither, the browser does not know how to connect to it.

If we visit the IP directly, the browser connects to it without any lookup. But in this case we tell it to go to academy.htb, so it looks in the local /etc/hosts file and finds no mention of it, then asks public DNS about it (such as Google’s DNS at 8.8.8.8), finds no mention of it either (since it is not a public website), and eventually fails to connect.

So, to connect to academy.htb, we would have to add it to our /etc/hosts file:

sudo sh -c 'echo "SERVER_IP  academy.htb" >> /etc/hosts'
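
A note on why the echo is wrapped in sh -c: the >> redirection is performed by the calling shell, not by the command under sudo, so a plain sudo echo "..." >> /etc/hosts would fail with a permission error. The append pattern itself can be tried safely on a throwaway file (the IP and hostname below are made up):

```shell
# Same append pattern against a scratch file instead of /etc/hosts.
hosts_file=/tmp/demo_hosts
: > "$hosts_file"                                        # start with an empty file
sh -c "echo '10.129.42.195  academy.htb' >> $hosts_file" # redirection runs inside sh -c
cat "$hosts_file"                                        # prints the appended entry
```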

Sub-domain Fuzzing

In this section, we will learn how to use ffuf to identify sub-domains (i.e., *.website.com) for any website.

Sub-domains

A sub-domain is any website under another domain. For example, https://photos.google.com is the photos sub-domain of google.com.

In this case, we are simply checking different hostnames to see if they exist by checking whether they have a public DNS record that resolves to a working server IP. So, let’s run a scan and see if we get any hits. Before we can start our scan, we need two things:

  1. A wordlist
  2. A target

Luckily for us, in the SecLists repo, there is a specific section for sub-domain wordlists, consisting of common words usually used for sub-domains. We can find it in /opt/useful/seclists/Discovery/DNS/. In our case, we would be using a shorter wordlist, which is subdomains-top1million-5000.txt. If we want to extend our scan, we can pick a larger list.

Sub-domain Fuzzing Example

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u https://FUZZ.inlanefreight.com/
  • Places FUZZ in the subdomain position
  • Useful for finding hidden or forgotten subdomains
  • Often reveals development, staging, or administrative interfaces
  • Works for public domains with DNS records

Note: This method only works for public sub-domains with DNS records. For non-public sub-domains or VHosts, we need to use VHost fuzzing instead.


VHost Fuzzing

As we saw in the previous section, we were able to fuzz public sub-domains using public DNS records. However, when it came to fuzzing sub-domains that do not have a public DNS record or sub-domains under websites that are not public, we could not use the same method. In this section, we will learn how to do that with VHost Fuzzing.

VHosts vs. Sub-domains

The key difference between VHosts and sub-domains is that a VHost is basically a ‘sub-domain’ served on the same server and has the same IP, such that a single IP could be serving two or more different websites.

VHosts may or may not have public DNS records.

Many websites have sub-domains that are not public and are not published in public DNS records; if we visit them in a browser, we fail to connect, as public DNS does not know their IP. With sub-domain fuzzing, we can only identify public sub-domains and will miss any that are not public.

This is where VHost fuzzing comes in: on an IP we already have, we run a scan that tests many candidate hostnames against that same IP, which lets us identify both public and non-public sub-domains and VHosts.

VHost Fuzzing Example

To scan for VHosts, without manually adding the entire wordlist to our /etc/hosts, we will be fuzzing HTTP headers, specifically the Host: header. To do that, we can use the -H flag to specify a header and will use the FUZZ keyword within it:

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u http://academy.htb:PORT/ -H 'Host: FUZZ.academy.htb'

We see that all words in the wordlist return 200 OK! This is expected, as we are simply changing the header while visiting http://academy.htb:PORT/, so we will always get 200 OK. However, if a VHost does exist and we send its correct name in the header, we should get a different response size, as in that case we would be getting the page from that VHost, which is likely to be a different page.


Filtering Results

So far, we have not applied any filtering to our ffuf scans, and the results are filtered by default by their HTTP code, which drops 404 Not Found and keeps the rest. However, as we saw in our previous run of ffuf, we can get many responses with code 200. In this case, we will have to filter the results on another factor, which we will learn in this section.

Filtering

Ffuf provides the option to match or filter out a specific HTTP code, response size, or amount of words. We can see that with ffuf -h:

MATCHER OPTIONS:

  • -mc: Match HTTP status codes, or “all” for everything. (default: 200,204,301,302,307,401,403)
  • -ml: Match amount of lines in response
  • -mr: Match regexp
  • -ms: Match HTTP response size
  • -mw: Match amount of words in response

FILTER OPTIONS:

  • -fc: Filter HTTP status codes from response. Comma separated list of codes and ranges
  • -fl: Filter by amount of lines in response. Comma separated list of line counts and ranges
  • -fr: Filter regexp
  • -fs: Filter HTTP response size. Comma separated list of sizes and ranges
  • -fw: Filter by amount of words in response. Comma separated list of word counts and ranges

In this case, we cannot use matching, as we don’t know what the response size from other VHosts would be. We know the response size of the incorrect results, which, as seen from the test above, is 900, and we can filter it out with -fs 900.

VHost Fuzzing with Filtering

ffuf -w /opt/useful/seclists/Discovery/DNS/subdomains-top1million-5000.txt:FUZZ -u http://academy.htb:PORT/ -H 'Host: FUZZ.academy.htb' -fs 900
  • Uses the Host header to specify different virtual hosts
  • -fs filters out responses matching the default/known host size
  • Critical when IP-based enumeration doesn’t reveal all content
  • Often discovers internal or development sites

Note: Don’t forget to add discovered VHosts to /etc/hosts if they don’t have public DNS records.


Parameter Fuzzing - GET

If we run a recursive ffuf scan on admin.academy.htb, we should find http://admin.academy.htb:PORT/admin/admin.php. If we try accessing this page, we see a message indicating that there must be something that identifies users to verify whether they have access to read the flag. We did not log in, nor do we have any cookie that can be verified at the backend. So perhaps there is a key we can pass to the page to read the flag. Such keys are usually passed as parameters, using either a GET or a POST HTTP request.

Tip: Fuzzing parameters may expose unpublished parameters that are publicly accessible. Such parameters tend to be less tested and less secured, so it is important to test such parameters for the web vulnerabilities we discuss in other modules.

GET Request Fuzzing

Similarly to how we have been fuzzing various parts of a website, we will use ffuf to enumerate parameters. Let us first start with fuzzing for GET requests, which are usually passed right after the URL, with a ? symbol, like:

http://admin.academy.htb:PORT/admin/admin.php?param1=key

So, all we have to do is replace param1 in the example above with FUZZ and rerun our scan. Before we can start, however, we must pick an appropriate wordlist. Once again, SecLists has just that in /opt/useful/seclists/Discovery/Web-Content/burp-parameter-names.txt. With that, we can run our scan.

Once again, we will get many results back, so we will filter out the default response size we are getting.

GET Parameter Fuzzing Example

ffuf -w /opt/useful/seclists/Discovery/Web-Content/burp-parameter-names.txt:FUZZ -u http://admin.academy.htb:PORT/admin/admin.php?FUZZ=key -fs xxx
  • Fuzzes parameter names in GET requests
  • Useful for finding hidden functionality, API endpoints, or vulnerable parameters
  • Filter by response size to exclude default error pages

Parameter Fuzzing - POST

The main difference between POST requests and GET requests is that POST requests are not passed with the URL and cannot simply be appended after a ? symbol. POST requests are passed in the data field within the HTTP request.

To fuzz the data field with ffuf, we can use the -d flag, as we saw previously in the output of ffuf -h. We also have to add -X POST to send POST requests.

Tip: In PHP, “POST” data “content-type” can only accept “application/x-www-form-urlencoded”. So, we can set that in “ffuf” with -H 'Content-Type: application/x-www-form-urlencoded'.

So, let us repeat what we did earlier, but place our FUZZ keyword after the -d flag:

POST Parameter Fuzzing Example

ffuf -w /opt/useful/seclists/Discovery/Web-Content/burp-parameter-names.txt:FUZZ -u http://admin.academy.htb:PORT/admin/admin.php -X POST -d 'FUZZ=key' -H 'Content-Type: application/x-www-form-urlencoded' -fs xxx
  • -X POST: Specifies POST method
  • -d: Sets POST data with FUZZ placeholder
  • -H: Sets required headers (Content-Type is often needed)
  • POST parameters often handle authentication, file uploads, or sensitive operations

Testing POST Requests Manually

We can test POST requests with curl to verify they work before fuzzing:

curl http://admin.academy.htb:PORT/admin/admin.php -X POST -d 'id=key' -H 'Content-Type: application/x-www-form-urlencoded'

Value Fuzzing

Having found a working parameter, we now have to fuzz for the value that returns the flag content we need. This section discusses fuzzing for parameter values, which is fairly similar to fuzzing for parameters once we develop our wordlist.

Custom Wordlist

When it comes to fuzzing parameter values, we may not always find a pre-made wordlist that would work for us, as each parameter would expect a certain type of value.

For some parameters, like usernames, we can find a pre-made wordlist for potential usernames, or we may create our own based on users that may potentially be using the website. For such cases, we can look for various wordlists under the seclists directory and try to find one that may contain values matching the parameter we are targeting. In other cases, like custom parameters, we may have to develop our own wordlist.

In this case, we can guess that the id parameter can accept a number input of some sort. These ids can be in a custom format, or can be sequential, like from 1-1000 or 1-1000000, and so on. We’ll start with a wordlist containing all numbers from 1-1000.

There are many ways to create this wordlist, from manually typing the IDs into a file to scripting it with Bash or Python. The simplest way is the following Bash command, which writes all numbers from 1 to 1000 to a file:

Creating Sequential Wordlists

for i in $(seq 1 1000); do echo $i >> ids.txt; done
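
Since seq already prints one number per line, the loop can equally be replaced by a single redirect; a quick check with wc, head, and tail confirms the file contents:

```shell
# Same result without the loop: seq prints one number per line.
seq 1 1000 > ids.txt
wc -l < ids.txt    # 1000 lines
head -n 1 ids.txt  # first id: 1
tail -n 1 ids.txt  # last id: 1000
```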

Value Fuzzing Example

Our command should be fairly similar to the POST command we used to fuzz for parameters, but our FUZZ keyword should be put where the parameter value would be, and we will use the ids.txt wordlist we just created:

ffuf -w ids.txt:FUZZ -u http://admin.academy.htb:PORT/admin/admin.php -X POST -d 'id=FUZZ' -H 'Content-Type: application/x-www-form-urlencoded' -fs xxx
  • Fuzzes parameter values instead of names
  • Useful for finding valid IDs, usernames, or other identifiers
  • Can reveal authorization flaws or information disclosure
  • Often requires creating custom wordlists based on the parameter type

Key Options Summary

Option | Description
-w | Wordlist file path and (optional) keyword separated by colon, e.g. ‘/path/to/wordlist:KEYWORD’
-u | Target URL
-H | Header "Name: Value", separated by colon. Multiple -H flags are accepted
-X | HTTP method to use (default: GET)
-b | Cookie data "NAME1=VALUE1; NAME2=VALUE2" for copy as curl functionality
-d | POST data
-recursion | Scan recursively. Only FUZZ keyword is supported, and URL (-u) has to end in it. (default: false)
-recursion-depth | Maximum recursion depth. (default: 0)
-e | File extensions to append
-v | Verbose output
-t | Number of concurrent threads (default: 40)
-mc | Match HTTP status codes, or “all” for everything. (default: 200,204,301,302,307,401,403)
-ms | Match HTTP response size
-fc | Filter HTTP status codes from response. Comma separated list of codes and ranges
-fs | Filter HTTP response size. Comma separated list of sizes and ranges
-fl | Filter by amount of lines in response
-fw | Filter by amount of words in response
-ic | Ignore comments in wordlist

Core Takeaways

  • FFuf is fast and efficient for web content discovery, testing almost 90k URLs in less than 10 seconds
  • The FUZZ keyword is the core mechanism for replacing values from wordlists
  • Filtering responses is essential for reducing noise, especially when many results return 200 OK
  • Different fuzzing types target different attack surfaces (directories, parameters, values)
  • VHost fuzzing is critical when multiple sites share an IP and don’t have public DNS records
  • POST parameter fuzzing often reveals more sensitive functionality than GET
  • Recursive fuzzing automates discovery but should be limited by depth to prevent excessive scanning
  • Custom wordlists are often needed for value fuzzing based on the parameter type
  • Always verify discovered endpoints manually before proceeding with further enumeration