July 22, 2025

Why is this Finding on my Pentest Report?

Written by Joe Sullivan
Penetration Testing

Understanding the Value of Findings Clients Often Question

In some report readouts, a client looks at a web application report and asks, “Why are you even reporting that?” This blog explores why we include certain findings that, from a client’s perspective, may not seem severe or even relevant but still have value from a security and consulting standpoint.

From a consultant's perspective, when we’re engaged in a web application assessment and we’re not finding critical or high-severity vulnerabilities, we may still report on best-practice or low-severity issues. This isn't filler. It’s about delivering value, even when there's little to no obvious threat. No consultant wants to deliver a report that simply says, “We couldn’t find anything.” In most cases, we can identify some best-practice gaps or minor vulnerabilities worth noting.

Now, if we do find multiple high-severity issues, those understandably become the focus of the report. But depending on the scope and length of the engagement, we may still include other, less-severe findings when appropriate.

Why Some Findings Stay in the Report

One reason clients may question certain findings is that we often don't have full visibility into their organization's security culture or threat model. As consultants, our default stance is conservative; we assume you're hiring us to uncover and report on as many valid issues as possible within the scope of the engagement.

It’s common for us to present conservative findings to organizations with a more liberal or risk-tolerant approach. If those findings feel out of place or excessive to you, that’s something we can absolutely adjust, either after the fact or, ideally, during the engagement planning phase.

But It’s in Dev/Staging...

Another common objection: “That’s just in the dev or QA environment.” While it’s true that development systems often carry different threat profiles, findings in those environments may still be relevant, especially if the systems are Internet-facing. If your staging or QA instance is publicly accessible, attackers can reach it too.

Once an attacker gains a foothold, they can often pivot internally. From our standpoint, any Internet-exposed system is fair game and should be hardened appropriately.

Examples of Findings That Get Pushback

One of the most common moments of pushback during readouts comes when clients see findings they don’t consider important. Things like autocomplete being enabled on sensitive fields often get dismissed as minor, but they can actually lead to credential leakage, especially on shared machines or compromised browsers. Imagine someone logging in to an application on a hotel lobby kiosk and accidentally saving their credentials; the next user could stumble right into that account.
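As a rough illustration, here is a minimal sketch of a hypothetical Flask login form that discourages autofill on sensitive fields. The route and markup are invented for the example, and modern browsers may still offer to save passwords, so treat this as one layer of defense rather than a complete fix:

    from flask import Flask

    app = Flask(__name__)

    # Hypothetical login form with autofill discouraged on sensitive fields
    LOGIN_FORM = """
    <form method="post" action="/login" autocomplete="off">
      <input type="text" name="username" autocomplete="off">
      <input type="password" name="password" autocomplete="new-password">
      <button type="submit">Sign in</button>
    </form>
    """

    @app.route("/login")
    def login_form():
        return LOGIN_FORM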

Another one is missing security headers, things like X-Content-Type-Options, Strict-Transport-Security, X-Frame-Options, or Content-Security-Policy. Clients will say, “We’re not a bank, do we really need this?” But those headers protect against very real attacks like MIME-type confusion, clickjacking, and XSS. It’s not about being a high-value target; it’s about basic hygiene and defense in depth.
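For illustration, here is a minimal sketch of how those headers might be added in a hypothetical Flask application. The values shown are common baseline defaults; a real application would need to tune them, especially the Content-Security-Policy:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_security_headers(response):
        # Stop browsers from MIME-sniffing responses into other content types
        response.headers["X-Content-Type-Options"] = "nosniff"
        # Force HTTPS for one year, including subdomains
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        # Refuse to be framed, which mitigates clickjacking
        response.headers["X-Frame-Options"] = "DENY"
        # Restrictive baseline CSP; real applications usually need a looser policy
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        return response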

Verbose error messages and exposed stack traces are also common. Sometimes they show up even in production environments, and they often reveal server software or version numbers, like Nginx, Apache, or ASP.NET. That information gives attackers a clearer profile of your stack, making it easier for them to look up vulnerabilities that may exist for those versions.
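As a rough sketch, a generic error handler in a hypothetical Flask app might look like the following: the full details go to the server log for troubleshooting, while the client gets a response that reveals nothing about the stack:

    from flask import Flask

    app = Flask(__name__)
    app.config["DEBUG"] = False  # never expose the interactive debugger in production

    @app.errorhandler(Exception)
    def handle_unexpected_error(exc):
        # Keep the full traceback in the server log for troubleshooting
        app.logger.error("Unhandled exception", exc_info=exc)
        # Send the client a generic message: no server software, versions, or stack trace
        return "An internal error occurred.", 500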

During one engagement, a consultant obtained an encryption key from a verbose error message. Using that key, the consultant decrypted application responses, revealing IDOR vulnerabilities and exposing sensitive information.

We also get pushback on outdated JavaScript libraries. You might have 10 or 15 of them in use, and while none may seem critical, the reality is that some of these libraries have known vulnerabilities, things like XSS, or even worse. And with supply chain risks being what they are, depending on unpatched third-party code isn’t something to ignore. Yes, we understand that updating might break functionality or take time, but there should at least be a plan, or compensating controls, in place.

Missing account lockout and rate limiting is another one. Some clients feel that unlimited login attempts aren't that big of a deal, especially in lower-threat environments. But without those controls, you’re wide open to brute-force attacks. Adding lockout logic or implementing MFA significantly reduces that threat, and it’s often low effort.
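To show how little code this can take, here is a minimal, illustrative sketch of per-username lockout logic using an in-memory counter. A production version would back this with a shared store such as Redis and pair it with MFA; the function names here are hypothetical:

    import time

    MAX_ATTEMPTS = 5
    LOCKOUT_SECONDS = 15 * 60
    failed_attempts = {}  # username -> (failure_count, first_failure_time)

    def is_locked_out(username: str) -> bool:
        count, first_failure = failed_attempts.get(username, (0, 0.0))
        if count < MAX_ATTEMPTS:
            return False
        if time.time() - first_failure > LOCKOUT_SECONDS:
            failed_attempts.pop(username, None)  # lockout window has expired
            return False
        return True

    def record_failure(username: str) -> None:
        count, first_failure = failed_attempts.get(username, (0, time.time()))
        failed_attempts[username] = (count + 1, first_failure)

    def record_success(username: str) -> None:
        failed_attempts.pop(username, None)  # reset the counter on a good login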

Open directory listings tend to be overlooked as well. On the surface, they look harmless. But over and over again, we find sensitive files in these directories, things like config backups, database exports, or old text files with usernames and passwords. It’s rarely just an empty folder.

Next, there’s the session cookie configuration. Missing Secure or HttpOnly flags might seem like checklist items, but they’re there for a reason. Without them, session tokens are easier to steal via XSS or MitM attacks. These flags are simple to implement and provide meaningful protection.
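For example, in a hypothetical Flask application, both the built-in session cookie and a manually issued cookie can be flagged in a few lines. This is a minimal sketch, not a complete session design:

    from flask import Flask, make_response

    app = Flask(__name__)

    # Flags for Flask's built-in session cookie
    app.config.update(
        SESSION_COOKIE_SECURE=True,     # only sent over HTTPS
        SESSION_COOKIE_HTTPONLY=True,   # not readable from JavaScript
        SESSION_COOKIE_SAMESITE="Lax",  # limits cross-site sending
    )

    @app.route("/login", methods=["POST"])
    def login():
        # The same flags on a manually issued cookie
        response = make_response("logged in")
        response.set_cookie("session_id", "opaque-token",
                            secure=True, httponly=True, samesite="Lax")
        return response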

Last but not least, let’s talk about TLS/SSL vulnerabilities. These are fairly common findings during penetration tests, and clients often question why they’re included in the report. But the reality is, if your server still supports outdated protocols like TLS 1.0 or 1.1, or weak ciphers like RC4 and obsolete CBC suites, that’s a real security concern: an attacker could intercept and decrypt traffic between a client and the server.
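Where you do control the TLS termination point in code, a minimal Python sketch like the following refuses TLS 1.0/1.1 and legacy cipher suites. In practice this is usually configured on the web server or load balancer instead, and the certificate paths here are placeholders:

    import ssl

    # Server-side context that refuses TLS 1.0/1.1 and legacy cipher suites
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")   # modern AEAD suites only
    context.load_cert_chain("server.crt", "server.key")  # placeholder paths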

The Tricky Part

The tricky part about remediating all this is compatibility. If your organization has a large or diverse client base, especially users with older browsers, disabling these older protocols and ciphers can break access for some users. We see this frequently in sectors like banking, insurance, and finance, where customers may still be using outdated versions of Internet Explorer or similarly old platforms. That makes it harder to just flip a switch and fix the issue overnight.

Another challenge we encounter is when clients are relying on a cloud service or infrastructure that limits their ability to control which TLS/SSL protocols or cipher suites are used. Sometimes the WAF or load balancer dictates what's available, and your options are constrained. In those cases, your hands are somewhat tied.

So yes, these may seem like small things, but in the real world, attackers don’t need a dozen critical flaws to succeed. Sometimes, one weak link is all it takes. That’s why we include these in our reports. And, as an elite cybersecurity consulting firm, we would be remiss not to include such helpful, albeit sometimes minor, findings.

Want Us to Exclude Findings? Let’s Talk.

The best time to customize how findings are reported is during the engagement kickoff call. If you're already aware of certain issues and don’t want them included, or prefer they be moved to an appendix or marked as Informational, let us know. We routinely tailor reports for clients based on that kind of feedback.

On the flip side, if there are specific vulnerabilities you care more about, tell us! We’ll focus more deeply on those and spend more time digging into the areas you prioritize. Your priorities drive how we customize the engagement.

In the end, this is about balance. Helping your consultant(s) understand the security culture of your organization, whether it’s more conservative or more risk-tolerant, makes the engagement smoother for everyone. The better we understand your expectations, the more actionable, relevant, and valuable the final report becomes.

If you're interested in our penetration testing or web application services, get in touch with our team.