The Anatomy of a Pentest Finding

RedWedgeX
9 min read · Jul 15, 2021


This article is (hopefully) part of a series on penetration test reporting. The goal of this entire series is to present information to penetration testers on how to better enrich your reports and provide a better value to the customers. Specifically, this article discusses the detailed findings section of a report.

These guides are designed to be a reference manual, thought kickstarter, and informational tool, NOT a prescriptive, process-driven checklist. They're also written to be tool- and process-agnostic. The overarching goal of this series is to make communication between a pentester and the customer they represent:

  • Clear (Easy to understand)
  • Standardized (The same product from multiple different testers)
  • Industry-Based (Not “the same as everyone else”, but taking standards and ideas from “everyone else” helps set our baseline!)
  • Value-Driven (Ensure testers provide a usable, valuable product to the customer)
  • Professional (Not make us look like amateurs)

The Anatomy of a Pentest Finding

For the sake of clarity, findings should be presented in the simplest way possible while still containing all the required information. Regardless of the reporting mechanism (Jira, Faraday, detailed PDF report, etc.), each finding should be digestible as a standalone product. Often, the report itself will be broken down, and individual findings will be assigned to the developer, system administrator, or technical wizard who will remediate the issue. This is often a tester's only chance to communicate DIRECTLY with those people.

You might notice that I lifted some of this, especially some of the wording in the examples, directly from sources like OWASP, Vulners, etc. THAT'S TOTALLY OK! You should always consider that somewhere, an expert who knows more about a topic than you do has already written it up, and in the context of a pentest report, it's ok to use their expert work! (See: SWIPE!)

You’ll also note that I’m intentionally not using the word “vulnerability” here. Why? Because not all findings are vulnerabilities, but all vulnerabilities are findings.

When is a Finding Not a Finding?

A finding is the identification of a specific security issue that you, as the tester, discovered. At a high level, a finding should always:

  • Be in-scope (Don’t report findings for endpoints, hosts, ranges, etc. that weren’t identified in your original scope)
  • Be actionable (There’s always a solution — The product should be fixed/solved/remediated/isolated/worked-around/risk-accepted/etc.)
  • Be relevant to the goals & objectives of the test (If your stated goal is to test cross-communication between in-scope hosts, reporting on web app vulnerabilities fails to meet the original request from the customer)
  • Be Accurate (This should go without saying — your finding should actually be true and provable, and your recommendations grounded in fact not conjecture)
  • Focus on making the product more secure (A pentest finding isn’t the place to point out PEP8 noncompliance, poor documentation, slow network traffic, etc)
  • Be unique (If your organization already has a method of reporting automated scan findings, there’s no need for you to report things it found. 20 reports full of “TLS 1.0 Enabled” when Nexpose or Nessus are already alerting the customer to those things are unnecessary.)

If your finding doesn’t line up with ALL of these, then it’s not a finding! Feel free to tell the customer, mention it elsewhere in the report, bring it up in a staff meeting, send an email, etc. But at the most basic level, if it’s NOT those things, don’t report it as a security finding.

Pitfalls

Document your findings as you go!

The best time to capture the information required for a finding report is right when you find it! Some testers prefer to wait until post-engagement to fully document and flesh out their findings, but failing to document when it’s “fresh” can lead to the loss of information (because you forgot!) or loss of screenshots, recreation steps, or proof (because you don’t have access to the environment anymore or they changed something!). When it comes to documenting your findings, THERE’S NO TIME LIKE THE PRESENT.

Shoehorning

Hey, I’ll let you in on a little secret — IT’S OK TO SUBMIT A REPORT WITH ZERO FINDINGS. While you may think you “failed” or are providing an inferior product, you’re actually demonstrating the product is secure! So in that vein, don’t report a finding just because you think you don’t have enough. Informational findings, for example, are a tricky thing. It’s totally ok to tell the customer something you want them to know in other places in the report or outside of the reporting process itself, but if you can’t directly relate it to the security of the product, it’s NOT A FINDING. Don’t try to shoehorn a quasi-related thought into a finding just for the sake of having a finding.

Proofreading

Check your work before submitting it to the customer or for peer review! Review grammar, punctuation, and clarity. Doing it right the first time means less time in the review phase, and fewer reviewers wasting time on errors that never should have reached them.

Templatizing & Reuse

While it's totally ok (and recommended) to reuse write-ups from previous findings (or from findings databases like https://vulners.com), you should always make sure the information is directly applicable to this particular finding. Don't just cut and paste word for word. Take the time to analyze each specific finding and provide your input. You have this job because you're a trusted expert in the field. Additionally, you should avoid templatizing each part of your finding (though it's ok to templatize the overall structure of a finding by section). For example, it's not ok for all of your findings to read:

On <SITE>, a <TYPE> vulnerability was discovered. Recommend remediating this vulnerability in accordance with <REFERENCE>.

If your finding looks like it could be auto-generated by a simple script, then what's the point of having real humans do the work? The reason we're here is to think. Analyze. Convince.

Parts of a Finding

The below components should be in EVERY finding:

Name/Title

This is the heading for the finding. A strong title combines where the vulnerability occurs (the domain or endpoint) with the type of vulnerability. It should be descriptive enough that the product owner can immediately get an idea of what you're reporting and possibly deduce criticality. Imagine an aggregate of these titles being passed around a table on an Excel spreadsheet for assignment, and you'll quickly understand the need for short yet succinct titles that accurately convey the vulnerability or issue found.

Examples of Poor, Better, and Best Finding Titles

Severity Ranking

You should always include severity metrics with every finding. This should NOT be an arbitrary guess at severity, but a standards-based metric that takes into account the actual vulnerability, business risk, and mitigation (for example, CVSS v3). The method of determining severity should be a standard set within the pentest team, and should not vary from tester to tester or engagement to engagement. The severity ranking section should consist of:

  • “Friendly” ranking (CRITICAL, HIGH, MEDIUM, LOW, INFORMATIONAL, etc.)
  • Numeric Score (5.4, 9.6, etc) or specific risk metrics.
  • Calculations — How did you get this number? Do you have a risk assessment rubric you used (show it here)? Are you using CVSS? (Vector string here!)

Examples of good and bad severity rankings
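
To make the "how did you get this number?" point concrete, here is a minimal sketch of the CVSS v3.1 base-score calculation, following the published specification. It covers the Scope: Unchanged case only; Scope: Changed, temporal, and environmental metrics are omitted, so treat it as an illustration rather than a full calculator:

```python
import math

# CVSS v3.1 published metric weights (Scope: Unchanged subset)
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                         # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},              # Privileges Required (Unchanged)
    "UI": {"N": 0.85, "R": 0.62},                         # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},              # C/I/A impact
}

def roundup(x):
    # Spec-defined "Roundup": smallest number to one decimal place >= x
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * (WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                             * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (CRITICAL)
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```

The point isn't that you must compute this by hand (calculators exist); it's that the vector string plus the formula make your number reproducible by anyone, which is exactly what "standards-based, not arbitrary" means.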

Summary

The summary should be one to two paragraphs explaining the issue, where you found it, and why it's bad. It should be concise and relay the information accurately and clearly. Instead of long, drawn-out soliloquies about the vulnerability, how it works, the root cause, the technical details, etc., use links to reference material where the reader can go to get more information. Links to a CVE, patch advisory, OWASP, CWE, or other technical resources are perfectly appropriate here.

A summary should include:

  • The endpoint/host/affected component
  • The vulnerability exploited/detected
  • A description of the vulnerability (with emphasis on answering the question “why is this bad FOR THE CUSTOMER?” What’s the business risk if this gets exploited?)
  • A screenshot of the “proof”. Ensure this is clear, easy to read, and accurately shows ALL the information needed (possibly including URL, request/response, timestamp, execution, etc.)
  • References — These should provide the reader a better understanding of the vulnerability, how it works, and how to fix it. NOT a walkthrough on how to perform the exploit or a link to the proof-of-concept or exploit script!

Examples of Summaries

Steps to Reproduce

This should be a step-by-step walkthrough of how you were able to attack the system. Basically, another pentester (or even the customer themselves) should be able to use this like an instruction guide to exploit the system on their own without much additional prior knowledge. If it later comes down to an argument over whether something is or isn't vulnerable, any tester can use this section to re-check and verify the finding. If the customer remediates the finding, you or another tester can use these steps to verify the fix. A record of these steps can also be used in the future to ensure the finding isn't accidentally reintroduced in a later release. You can also provide an actual script here that tests the issue automatically, or a crafted URL to click to perform the exploit, but those things should not replace detailed step-by-step instructions (WITH pictures!).

(Note: For the sake of brevity, the best option isn’t shown here, but imagine a multi-page, step-by-step, ELI5 walkthrough with detailed instructions and a screenshot of every step, and you’re on the right track!)

Examples of Steps to Reproduce
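
As a sketch of the "script that tests it automatically" idea, here's a hypothetical verification helper for a reflected-input finding. Everything here (the function, the payload, the simulated response bodies) is illustrative and not from any real engagement; a real script would fetch the responses from the in-scope endpoint and pair this check with the manual steps, not replace them:

```python
import html

def is_reflected_unencoded(response_body: str, payload: str) -> bool:
    """Return True if the payload appears verbatim (unencoded) in the body.

    If only the HTML-escaped form appears, output encoding is working and
    the reflection is likely not exploitable as-is.
    """
    return payload in response_body

# Simulated responses; a real regression check would fetch these over HTTP
payload = '<script>alert("poc")</script>'
vulnerable_body = f"<p>Search results for {payload}</p>"
safe_body = f"<p>Search results for {html.escape(payload)}</p>"

print(is_reflected_unencoded(vulnerable_body, payload))  # True  -> still vulnerable
print(is_reflected_unencoded(safe_body, payload))        # False -> likely remediated
```

A check like this is what lets the customer (or a future tester) re-run your finding after each release without re-reading the whole report.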

Recommendations for Remediation

This is your chance as a technical expert to tell the customer what you think they can do to fix the problem. Now, of course you’re not an expert sysadmin or engineer who knows everything about every technology out there, and you’re not expected to be. But if you are reporting it because you were able to hack it, it’s reasonable to assume that you can discuss what would have STOPPED you from hacking it. Also, it’s totally ok here to cite references or SWIPE recommendations from published security bulletins, vendor notices, or any other source. But in general, this section should contain enough information for the customer to be able to fix the problem:

  • An ACTIONABLE recommendation (If it's not something they can DO, there's no point in even reporting it. An action can be a fix, a workaround, isolation, or even, at worst case, a risk acceptance.)
  • Detailed info on how to fix it, or links to references that have the same.
  • Long-term strategies to prevent similar issues from happening (The more we do to train customers on how to prevent their own issues, the less work we have to do in the future!).

Examples of remediation recommendations
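
For instance, for the "TLS 1.0 Enabled" example mentioned earlier, an actionable recommendation could pair the advice with a concrete configuration change. This nginx snippet is purely illustrative (the customer's web server, version, and file layout will differ, and you should SWIPE the exact guidance from the vendor's documentation):

```nginx
# In the relevant http or server block: allow only modern TLS versions.
# TLSv1.3 requires nginx built against OpenSSL 1.1.1 or later.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
```

Giving the administrator the directive to change, rather than just "disable TLS 1.0," is the difference between a recommendation they can act on today and one that goes back into the queue for research.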


Written by RedWedgeX

Hacker. Dad. Husband. Geek. Reluctant Developer. US Army vet. @CSULB instructor, @CactusCon crew, Perpetual Complainer. (he/him, opinions are my own)
