
Bug hunting, for fun and profit. My slightly, but not too, technical how-to guide for anyone.

This article reflects not only how I like to do bug bounty programs, but also how I approach most of my normal penetration tests, red teams and web security assessments. It works well for me, and many of the clients I've served have been helped by it. It might well not be your exact style. I try to show here the way I like to think, giving you a brief peek into my hacker mind.

It won't be one of those many articles about how each button works in a certain tool that I like to use. If you're after that, google "how to use (insert name of tool)" instead. This article is mainly about the methodology to get to your goal and how to show your client that you're worth it!

I will try to cover all the elements that I believe are important for bug bounty or normal testing. We'll get started with the initial reconnaissance; after that we will start weeding the interesting data out from the crap. Third, we will go into depth, getting to know the program and interacting with it. Then we go and find those damn bugs. After that, we go for the kill by confirming the issue and making a sweet, undeniable proof of concept. What follows is the report to the client, and finally we make sure the bug isn't ignored by doing careful follow-up.

It's a lot to cover, so if you are ready for the journey through ACME CORP, let's go, we've spent enough time already!

Your setup, what you need for bounty hunting. 

Honestly, all you really need to start is a computer with a browser and a proxy like Fiddler or Burp Suite. My personal favorite is the pro version of Burp; for €320 a year you get so many amazing extra features on top of the free edition that it's well worth it. I understand if money is an issue, so the free version also works fine.

This is all you need to start. We will talk about some added-value tools later on, but those are all for comfort and ease of use rather than really needed.

It may come in handy to have a copy of the OWASP ASVS standard and testing guide around. This guide explains what each web issue is and how you should test for it.

Learning the concepts of DNS and subdomains will come in handy for the many programs that have a wildcard subdomain listed. You will be allowed to search for all kinds of additional domains, so knowing how these work is a real benefit.

FAST: First Action Steps to Take 

  1. Install Burp or Fiddler and play with it; get to know the tool and google for help if you need it. There are amazing videos on YouTube that provide excellent help 
  2. Get the ASVS and spend some time reading through it, you don't have to memorize it literally… 
  3. Study the concepts of DNS and subdomains for use later   

Initial reconnaissance, knowing what’s out there! 

Once you've signed up with a program like Bugcrowd or HackerOne, you are overwhelmed with options and programs to choose from. There is waaaaaaaay too much choice, so you'll start on everything at the same time… That's probably not the best decision you could make; I know, because I did that too…   

Too much information, too many sites and too many things to dig into. Low-hanging fruit is usually long gone, so if you want to find bugs you'll have to dig deep and map everything carefully. This can be a long and potentially dreadful process, but it's needed to know what you're up against later on. Money doesn't come cheap. 

Scoping, watch what you’re doing!

Once you've chosen which program you are going to spend this week on, read the description and scope carefully. It is really important to do this; if you don't, I guarantee you will be super excited to submit your first bug only to be disappointed by a decline with an "out of scope" message. I've been there and done that…

As the CISO of Lyft, Mike Johnson, pointed out to me, this message wasn't too clear in the process and should have a better explanation. There is a big difference between a project like red teaming, where I explicitly want no boundaries and carte blanche for the red teamers, and bug bounty hunting.

In the scenario of bug bounty hunting, you're going to have to comply with the boundaries set by the company offering the bounty. There are many reasons why they have set this scope. Sometimes the out-of-scope items are simply not within their capabilities to fix; it can also be something else.

Comply with the scope, or you'll be punished and demotivated immensely by the out-of-scope messages after you've invested a lot of time and effort.

That said, it can happen that you stumble upon a high-risk issue in a system which you're not sure about. The best thing would be to write an email to the company asking about it, rather than submitting it directly into the program. Worst case, they will reply with a referral on where to submit it; best case, you'll be getting a bounty after all.

OK, so we know the scope and which types of issues are allowed to be reported. Let's start our recon steps. We have learned that ACME has some IP addresses listed as well as some websites with a wildcard subdomain, awesome! 

Infrastructure analysis  

This means we start by booting up our machines and doing a full nmap scan of the IPs that are listed. I prefer to do TCP and UDP separately, as UDP has a tendency to be extremely slow and inaccurate.  

Nmap has a ton of options you can work with. My own preference is: nmap -vv -Pn -sSV -pT:1-65535 -T4 -A --script-args="timelimit=5" --host-timeout=180m --script=default,banner,discovery 

This will run a series of tests on the host, and the verbose output allows you to analyze it easily. Don't forget to save the data with -oA (file name). I'd like to hear your preferences too! Comment them below please! 
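If you save the scan with -oA, the grepable .gnmap file is easy to post-process. As a minimal sketch (the sample line below is invented, but follows nmap's grepable output format), this Python snippet pulls out the open ports per host:

```python
def open_ports(gnmap_text):
    """Parse nmap grepable (-oG/.gnmap) output into {host: [(port, service), ...]}
    for ports reported as open."""
    results = {}
    for line in gnmap_text.splitlines():
        if "Ports:" not in line:
            continue
        host = line.split()[1]                    # "Host: 10.0.0.5 () ..." -> IP
        for entry in line.split("Ports:")[1].split(","):
            fields = entry.strip().split("/")     # "22/open/tcp//ssh///"
            if len(fields) >= 5 and fields[1] == "open":
                results.setdefault(host, []).append((int(fields[0]), fields[4]))
    return results

sample = "Host: 10.0.0.5 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///"
print(open_ports(sample))  # {'10.0.0.5': [(22, 'ssh'), (80, 'http')]}
```

A parsed dictionary like this makes it trivial to feed the open ports into your later validation steps instead of eyeballing raw scan logs.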

Web application enumeration 

If there are websites, you will usually get an indication of the server type in use. I still love Nikto; although it's very noisy and notorious for crashing things, once in a while you get very nice results to work with further. What do you like to use?  

The alternative for looking for known files, folders and default issues is free vulnerability scanners. These often include a set of common files and folders, but might be intrusive when configured wrongly. DirBuster is a nice one you can customize easily, and it can give you very interesting results.  

These tools don't look at the core web application, however, and will usually find you the low-hanging fruit that should have been reported already. Still, they are very useful for getting yourself a decent blueprint of the company you're hacking and trying to help improve. 

Spidering 

The next step would be to spider the web application and get to know more about its inner workings. I really like the way Burp handles its spider, but I also often get lost in its results, or it crashes because it consumes too much memory (and I'm on a 64 GB RAM machine…). 

There are many other spiders that can help you too; it's a personal preference which one you use to get to know your enemy. You can also do it by hand and walk your way around the website while you have the proxy in between. This is especially convenient when you do authenticated research, meaning you have an account that you work with. 

Burp holds history nicely and creates a nice tree that you can work with in Repeater or Intruder later on. 

Subdomain scanning 

Subdomains are interesting. Burp has the option to create scopes with wildcards, so that you can create a tree with the connected subdomains. This is often nice, but it doesn't mean you get them all.  

A tool like dnscan or dnswalker would help you further. What you'll need for this, though, is a wordlist that contains possible subdomains. As people are creative, it's near impossible to have a full set, but with dedication and Google you can get a long way. Many people have attempted to make lists of these subdomains. 
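If you'd rather script the wordlist approach yourself, a minimal sketch using only Python's standard library could look like this (the target domain and tiny wordlist are hypothetical placeholders; real lists contain thousands of entries):

```python
import socket

def resolve(hostname):
    """Return the IPv4 address for hostname, or None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def scan_subdomains(domain, wordlist):
    """Try each word as a subdomain of `domain`; return the ones that resolve."""
    found = {}
    for word in wordlist:
        candidate = f"{word}.{domain}"
        ip = resolve(candidate)
        if ip:
            found[candidate] = ip
    return found

# Hypothetical example target and wordlist:
# print(scan_subdomains("acme-corp.example", ["www", "mail", "dev", "staging"]))
```

Dedicated tools are faster (they parallelize queries and handle wildcard DNS), but a sketch like this shows the core idea: it's just resolution over a wordlist.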

The Python library for CertStream is a nice one to look into if you know how to code; it watches new SSL/TLS certificates as they are published, and those certificates contain the subdomains they were issued for. 

Once you have performed all these steps, you will have many gigabytes of data. It's time to start separating the good from the bad stuff… Obviously there are many, many other tools that will help you with these steps; I'm not able to cover all of them, but you get the idea, I hope 🙂

FAST: First Action Steps to Take 

  1. Choose your battle, pick one and stick to that for a while. Start with a small company first 
  2. Read the terms and conditions carefully, look what’s allowed and what isn’t.  
  3. Use the right tools to create your personal blueprint. Store everything and make notes along the way.  

Weeding your precious data 

You will have a ton of data by now, most of it junk. Especially spidering with Burp gets you a lot of useless stuff, which we have to remove.  

From the infrastructure side of things, look at the ports that are open. Are these common ports, strange ports, or a shitload of ports that don't make sense?  

The best way to identify whether a port is really open is with nc. This is a tool commonly used to get and create shells, so it's more on the offensive side. When you run "nc -nv 10.0.0.5 445" it tries to create a raw connection to that IP on port 445. When the port is open it will show you this, thanks to the verbose output flag (-v).  
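The same raw-connection check that nc performs can be scripted when you have a long list of ports to verify; here is a minimal Python sketch using the standard socket module:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Mimic `nc -nv host port`: attempt a raw TCP connection, report success."""
    try:
        # create_connection completes the TCP handshake or raises on failure
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: verify a port from your earlier nmap results (address is illustrative)
# print(port_open("10.0.0.5", 445))
```

A True here only tells you the TCP handshake succeeded; what service is actually listening still needs the banner and version checks described above.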

When you have detailed version information, it's worth googling for exploits against that version, or if you're in Kali you might want to use something like searchsploit to see if Exploit-DB has something against it. A lot of results means something interesting and worth checking up on later. 

The stuff you usually want to ignore is outdated versions of Apache, PHP etc., unless you have a really trustworthy exploit and you can actually demonstrate the issue is present in the server you are researching. Theoretical issues will almost always be declined as "banner style issues". 

Information from scanners and tools should never be trusted and sent straight to clients, and thus bug bounty programs, as foolproof data. You will have to validate the issues manually, in a tool like Burp and in the browser when you have cross-site scripting, for example. So hold off on reporting the things you see appear in the data you have on hand now. 

You can safely remove images, style sheets and hard-coded, non-interactive pages from your spider results. This will clean things up a bit. You can do this with filters inside Burp. 

The last thing to do is make a long list of possible, interesting targets and locations based on the blueprint data that you have. Use a scoring model that you understand, and score the information based on how promising it looks.  

It doesn't matter if you are wrong later on; the list is merely used for the next steps, the follow-up. If you don't make such a list, forgetting about these interesting leads is inevitable. 

The items on the long list have to be small chunks of information: one function or a small part of a web application. This can for example be: store something, upload something, search something, or select from a list with an API call to the backend that populates the list. 
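As a sketch of such a scoring model (the properties, weights and target names below are entirely my own invention; tune them to whatever you find promising), something as simple as this is enough:

```python
# Weights for properties that make a target promising -- an arbitrary convention.
WEIGHTS = {"handles_user_input": 3, "authenticated_only": 1, "uses_api": 2, "legacy_tech": 2}

def score(target):
    """Sum the weights of every property a target exhibits."""
    return sum(WEIGHTS[prop] for prop in target["properties"])

targets = [
    {"name": "file upload on /profile", "properties": ["handles_user_input", "authenticated_only"]},
    {"name": "search box on /help",     "properties": ["handles_user_input"]},
    {"name": "user list API call",      "properties": ["handles_user_input", "uses_api"]},
]

# Work the list from most to least promising.
for t in sorted(targets, key=score, reverse=True):
    print(score(t), t["name"])
```

The exact numbers don't matter; the point is that every item gets written down and ranked, so nothing interesting gets forgotten.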

FAST: First Action Steps to Take 

  1. Do not trust your current data, weed out and validate before moving on 
  2. Use exploit databases and vulnerability sites to check for issues in discovered services to estimate the likelihood of problems  
  3. Make a long list of interesting targets and locations based on your reconnaissance. Ensure these are small functional features and not large websites as a whole.  

Getting down in the rabbit hole 

This is where the real fun starts. It will be a very, very irritating, annoying and iterative process. The long list that you have created will be the base for it.  

We will take one example to demonstrate how to dig deeper and isolate an issue, but you'll be doing this for each and every bullet point on the list. It will be a long and tedious process, but eventually it will be well worth your time, I promise! 

Let's assume that we have found a website that uses an API to get and send data to the server in the background. The website uses some jQuery so that you never leave the page. We have identified that getting the list of user data options available to us is based on an API call that sends our userid in a field, as well as a typeid.  

This is a very interesting API call; these fields might be vulnerable to a variety of problems, ranging from SQL injection to an authorization bypass. We will have to validate that this is indeed the case. 

The API call is recorded in our proxy, so we can send the full request to the Repeater and start playing and tampering. First we ensure the call works normally by hitting the send button. The result is indeed a nice JSON array with our user information, nicely displayed on our screen. 

Once we have this, we can start to fuzz the parameters inside the API call. When there are numerical values, try negative ones, change the numbers, and figure out what the maximum and minimum values are that the system allows. Can you overflow something, causing an error? That error might disclose information. Also try alphanumerical values; text in an integer parameter is always fun. Try SQL injection options, etc. You get the idea. 
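A hedged sketch of what such a manual fuzz list could look like; the probe values below are just illustrative suggestions, and each one would replace the userid or typeid field in the repeated request while you watch the response for errors, stack traces or other people's data:

```python
def fuzz_values(original):
    """Generate probe values for a numeric parameter: boundaries, negatives,
    type confusion and a couple of classic SQL injection probes."""
    n = int(original)
    return [
        str(n), str(-n), "0", "-1",
        str(2**31 - 1), str(2**31), str(2**63),   # integer overflow candidates
        "abc", "", "null",                        # type confusion
        f"{n} OR 1=1", f"{n}'--",                 # SQL injection probes
    ]

print(fuzz_values("1337"))
```

Burp's Intruder can iterate such a list for you, but as noted above, a scanner can't interpret the responses; you still have to read them yourself.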

You can do many of these things automatically with scanners, but those have a tendency to crash and not find everything. Watch carefully when you change numbers, for example. We have a typeid and a userid. When we change the userid, we get different information. A scanner doesn't care, as it can't interpret the data correctly. It looks like we can enumerate all users in the system by changing the userid inside the API call, awesome! This is a typical authorization and business logic bypass, a really great finding on its own and well worth a report! 

It doesn't end there, though. While we could rush off and report this issue right away, we can always try to increase the severity. We have to be logged in right now, so the overall risk is lower compared to an anonymous attempt.  

We have to try several different things to fully explore the possible impact of the problem. If we remove the session cookie from the request, we can try to get this information as an anonymous user. Secondly, as it's an API call that returns JSON data, we can try to craft a CSRF issue with an XHR request. If we are allowed to do this, we can leverage the issue through an attack from a third-party website.  

If we can trick a user into visiting our page while they are logged in, we can pull down all user information leveraging their account. This makes the problem more severe, as you can exploit the authorization bypass through a cross-site request forgery. The business risk has increased massively now, as you've been able to combine two issues into one! 
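The anonymous-replay check described above can be sketched in a couple of helper functions; this is illustrative only, and the recorded request headers are invented:

```python
def strip_session(headers):
    """Return a copy of the recorded request headers without any Cookie header,
    so the API call can be replayed as an anonymous user."""
    return {k: v for k, v in headers.items() if k.lower() != "cookie"}

def still_leaks(authenticated_body, anonymous_body):
    """Crude check: does the anonymous replay return the same user data?"""
    return anonymous_body == authenticated_body

# Hypothetical request captured in the proxy:
recorded = {
    "Host": "api.acme-corp.example",
    "Cookie": "session=abc123",
    "Accept": "application/json",
}
print(strip_session(recorded))
```

In practice you would send both variants through Repeater and compare the responses; if the cookieless request returns the same JSON, the risk jumps from authenticated to anonymous.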

FAST: First Action Steps to Take 

  1. Work organized and meticulously through your crafted list, going over the issues one by one 
  2. Work manually on authorized environments, selecting your possible attack vectors based on the type of requests you are analyzing. For example, cross-site scripting is not possible when you get JSON data as a response without a page this data is processed in.  
  3. Analyze the entire risk chain, identifying likelihood and impact in meticulous detail  

Houston, we’ve got a problem, please gather your proof! 

Once you have found an issue that is in scope and looks to be real, congrats! It's vital now to collect proof to convince the client that the issue is real and should be fixed (and rewarded). Make sure you know what you've done, and confirm the issue a few times from different browsers and different computers if you can, to ensure the problem exists broadly. This is important for browser-dependent issues like cross-site scripting. Not all browsers behave the same.   

In order to do this, you will need to capture screenshots of the issue, in code and in a way that shows the exploit. My personal favorite is video. With a video you can narrate over it, explaining what you did. This gives the person who has to fix it a clear view of what you are doing and where the problem possibly exists.  

I like the TinyTake screen recorder, as you can easily create screen captures of a window size you prefer. I usually edit with an Adobe product like Premiere or After Effects, but any program will do. 

If you don't like video, pictures are just as fine. You will need several images that show, from start to finish, what you are doing. Naming the images is important; the reader has to be able to follow your time-lapse, so it's good practice to number your images, starting with 01 if you need more than ten.  

You will be referencing the images during the reporting phase in the bug bounty program. 

FAST: First Action Steps to Take 

  1. Validate and verify your issue exists in multiple browsers and systems. 
  2. Make a detailed video where you explain the issue clearly while demonstrating the bug you've found 
  3. Include images that you've numbered in chronological order so that you can reference them properly in your report.   

Report for duty sir! 

Reporting is actually the most important part of the entire process. After all, if you can’t explain what you’ve found, how can you expect the client to understand what you’ve done and award you accordingly? 

Most bug bounty programs have some sort of template available that they want you to follow. I strongly recommend using it and not deviating from that style. 

You will usually have to provide a summary, a full description, how to reproduce the issue, and the overall impact. 

A summary is short and to the point. Don't create a lengthy, complex story here; within one or two sentences the reader has to get an idea of what the issue is about. Don't use complicated words or technical lingo; it usually won't make the message easier to understand. 

The full description is where you can go berserk if you'd like, but I tend to keep it light as well. I usually just include a little background on what I was doing and how I stumbled upon the problem. Unless you've found a totally new problem that nobody has ever heard about, chances are that the reader knows what the issue is. 

The most important part of the report is the reproduction steps. These have to be crystal clear. Not nailing this will sink your report and your chance of a higher-than-normal bounty. If the reader has to do extensive work themselves to figure out what on earth you've been doing, they won't be happy with your report and will reward it low. 

I always make bullet-point style summaries, where I go over the reproduction step by step. Start from the top, include login steps if needed, and follow all the steps needed to reach the point where your issue lies. Usually API calls are not directly available and you'll need to do a few steps before you are at that point. Make this clear in this part. 

The impact or risk is the last part. This is important, as many researchers like to see their issues as critical, highly critical, superlative major critical… They usually aren't. Be realistic and use a CVSS calculator to accurately assess the risk. Explain clearly what you think can happen, and include why, with clear arguments. 

I generally like to use a risk = likelihood x impact type of approach during the report phase. How likely is it that someone without an account can do this? How much effort did it take you to find it? What can an attacker do with this bug? These are the types of questions you'll have to ask yourself. I will be writing an article on real-world risk assessment later, with an easy-to-follow guide. 
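As a sketch of that risk = likelihood x impact idea (the 1-5 scale and the cut-offs below are my own convention, not an official standard; use a CVSS calculator for the actual report):

```python
def risk(likelihood, impact):
    """Multiply likelihood and impact (each on a 1-5 scale) and map the product
    to a coarse label. The thresholds are an arbitrary convention."""
    value = likelihood * impact
    if value >= 15:
        return value, "high"
    if value >= 8:
        return value, "medium"
    return value, "low"

# Authorization bypass reachable anonymously via CSRF: likely and impactful.
print(risk(4, 5))  # (20, 'high')
```

Even a crude model like this forces you to argue likelihood and impact separately, which is exactly the reasoning the report needs.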

FAST: First Action Steps to Take 

  1. Keep it simple, stupid! Don't write overly complex stories 
  2. Make a really clear step by step reproduction scenario for your reader 
  3. Don’t overestimate your issue in terms of risk, keep it real! 

Fix that shit. Please… 

Now that you’ve submitted your awesome report, the waiting game starts. The first step is to have your issue triaged by someone. This means that the client, or the bounty program, has to validate what you submitted. 

This step is why it’s essential to have crystal clear reproduction steps in your report, with preferably a video. Without the possibility to reproduce, your report will never be accepted. 

After the validation, it will depend on the client whether they award you before or after they've fixed it. As we speak, I'm still waiting on bounties from United for issues that I reported 7-8 months ago… There is a big difference between a bug in a third-party application and one in an application that the company made itself. 

A gentle reminder isn’t bad in itself, but don’t overdo it. If you haven’t heard anything after a week, a request for an update is ok. If they told you they are fixing it, you can request a status update after a month or so.  

Once you've had your bounty and they've given you credit, you can request that the issue be made public. If they agree, you can share your experience, but don't humblebrag. Don't claim awesomeness just for the sake of it; nobody likes that, really. 

Last, and very essential: do not (ever) disclose the issue you've found to the public without written approval from the client. If the issue is still present, allow the client to fix it. You will not be invited to private programs if the client cannot trust you… 

FAST: First Action Steps to Take 

  1. Be gentle in your requests for updates, don’t push or threaten anyone, ever 
  2. Create a write up for complex issues so that others can learn 
  3. Never, I mean never, disclose issues to the public without approval.  

Pfff, we’ve made it, awesome! 

This was a very long article, I'm sorry! I tried not to go into too much technical depth on purpose here. I know there have been requests for how to test APIs; I will write a technical article on that as well. It's not the testing that is harder, but the exploitation part, which is slightly more tricky.  

I hope the contents and tips in here are helpful for all of you. Please let me know what you think by leaving a comment! Please share the article so that the many others I can't reach directly can also benefit from it! 

The tl;dr for this article: 

  1. Use a good proxy like Burp or Fiddler and understand the types of bugs you might encounter within the project you're testing 
  2. Recon is key. Make a list of interesting endpoints and prioritize. It's a numbers game; invest time and dedication in a program, and don't switch too fast (like I did) 
  3. Weed out your data, work your way through them in an organized manner so that you know what you did later on! 
  4. When you suspect an issue, confirm and explore all possible ways of exploitation. This will help you get a deep understanding of what’s wrong 
  5. Make a video of your working exploit, showcasing what you’ve done and how you’ve done it 
  6. Prepare a detailed report with crystal clear reproduction steps so that your client understands what they have to do to replicate the problem  
  7. Don't overestimate your risk. Keep it real! 
  8. Don’t disclose your findings without clear approval from the client. 
