How to Prepare for a Penetration Test

A practical guide to preparing for penetration testing so the engagement produces actionable evidence with minimum disruption and maximum value.

Reviewed by: Lewis Warner, Chief Hacking Officer

Preparation determines whether the test produces evidence or excuses.

A poorly prepared penetration test produces vague findings, missed scope, friction during the engagement, and a report that does not get acted on. A well-prepared one produces specific, actionable evidence and pays for itself in the first three findings. The difference is rarely the budget or the provider — it is the few weeks of internal work before testing begins.

This guide covers the practical preparation that makes the difference: defining what you actually need, getting the scope right, technical and organisational readiness, what not to do, and how to handle the engagement itself. It assumes you have selected a tester and have a date booked; if you are earlier in the process, several of these decisions will inform your provider selection too.

Define what you actually need

Penetration tests are not a single product. The right test depends on what question you are trying to answer:

  • Compliance assurance — you need a defensible test report aligned to a specific framework (PCI DSS, ISO 27001 Annex A 8.8, SOC 2). Methodology and reporting format matter.
  • Genuine risk assurance — you want to know the truth about your security posture, regardless of compliance optics. Scope can be wider and findings should be ranked by impact, not by audit category.
  • Pre-launch validation — you are about to launch something new and want a clean bill of health before exposure. Scope is narrow but depth is high.
  • Post-incident validation — you have remediated after an incident and need independent confirmation that the underlying issues are fixed.
  • Customer-mandated testing — a specific customer or contract requires it. Scope is usually defined externally; your job is to ensure it is realistic.

These needs overlap, but they do not all call for the same methodology, scope or report style. Be clear with your provider about which combination you actually need; a tester who hears “we need a pen test” without context will deliver a generic test, and generic tests deliver generic value.

Get the scope right

Scope is where most preparation work pays off, and where most engagements quietly fail when preparation has been skipped.

  • Specific in-scope assets. Exact IP ranges, URLs, applications and environments. “The website” is not a scope; “www.example.com and the customer portal at app.example.com, in production” is.
  • Explicit out-of-scope assets. Things that look in scope but are not — a third-party SaaS the application integrates with, a related domain owned by another business unit, a system due to be decommissioned. Stating these explicitly prevents accidental testing of assets you do not control.
  • Test depth. Black-box, grey-box or white-box. Grey-box (limited information, low-privileged account provided) is the most common because it balances realism with value.
  • Authentication. Provide accounts at multiple privilege levels in advance — ordinary user, elevated user, administrator. Authorisation flaws cannot be found without legitimate accounts to test from.
  • Authenticated and unauthenticated paths. Decide which apply. For a public-facing application, both usually do.

A common procurement mistake is under-scoping to fit a budget. An under-scoped test gives you under-scoped reassurance — the report says “everything we tested looked fine”, which is technically true and practically meaningless if the most important parts of the application were excluded.
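
A simple way to make scope unambiguous is to record it as data rather than prose, so in-scope and out-of-scope lists can be checked mechanically before any traffic is sent. A minimal sketch in Python; the hostnames and the `Scope` structure are illustrative, not part of any real engagement:

```python
# Illustrative only: the hostnames and this Scope structure are
# hypothetical, not a real engagement's scope document.
from dataclasses import dataclass, field


@dataclass
class Scope:
    in_scope: set[str] = field(default_factory=set)
    out_of_scope: set[str] = field(default_factory=set)

    def permits(self, host: str) -> bool:
        # Explicit out-of-scope always wins, and anything unlisted
        # is treated as out of scope by default.
        return host not in self.out_of_scope and host in self.in_scope


scope = Scope(
    in_scope={"www.example.com", "app.example.com"},
    out_of_scope={"legacy.example.com", "partner-saas.example.net"},
)
```

The default-deny behaviour mirrors the advice above: an asset that appears in neither list should be queried with the client, not tested.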

Technical readiness

The environment under test should be representative of production and stable for the duration of the engagement.

  • Use production where possible. A representative staging environment with production-equivalent configuration and seeded data is acceptable; a sparsely populated dev environment is not.
  • Stabilise the environment. No major deployments during the engagement unless explicitly scoped in. Changing the target mid-test produces inconsistent findings and wastes tester time.
  • Provision test accounts in advance. All accounts the tester needs, at all required privilege levels. Last-minute account provisioning is the single most common cause of delay.
  • Coordinate WAF, IDS and rate-limiting. Allowlist the tester’s source IPs or coordinate around them. Testing through aggressive WAF rules tests the WAF, not the application.
  • Confirm logging is at sufficient verbosity. You will want to investigate findings later; thin logs make that impossible.
  • Verify backups and rollback paths. Properly scoped testing rarely causes outages, but you should be ready to recover if something unexpected happens.
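
Account provisioning in particular is easy to verify ahead of the start date. A hypothetical sketch, assuming the engagement needs test accounts at three privilege levels; the role names and account names are illustrative, and real provisioning depends on your identity provider:

```python
# Hypothetical pre-engagement check: role and account names are
# illustrative. Adapt to however your organisation tracks accounts.
REQUIRED_ROLES = {"user", "elevated", "admin"}


def missing_roles(provisioned: dict[str, str]) -> set[str]:
    """Return the privilege levels that still lack a test account."""
    return REQUIRED_ROLES - set(provisioned.values())


# Two accounts provisioned so far, mapped to their privilege level.
accounts = {"pt-user-01": "user", "pt-admin-01": "admin"}
gaps = missing_roles(accounts)  # {"elevated"}
```

Run a check like this a week before the start date, not the morning of; last-minute provisioning is, as noted above, the most common cause of delay.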

Organisational readiness

A penetration test is also a coordination exercise across several internal stakeholders. Misaligned expectations cause more friction than technical issues.

  • Single point of contact. One named person from your side who owns the engagement day-to-day. Multiple uncoordinated contacts create delay and contradictory instruction.
  • Escalation path. A defined process for “stop the test” events — critical findings that warrant immediate notification, or unexpected production impact.
  • Communications plan. Who hears about findings during testing, who hears at the report stage, and who hears only after remediation. Decide this in advance.
  • Out-of-hours coverage. If testing runs out of hours, ensure someone reachable can authorise pauses or escalations.
  • Incident response team awareness. IR leadership should know the test is happening, even if individual responders do not, so genuine incidents during the testing window are not dismissed as part of the test.
  • Legal and compliance briefed. Particularly important where the test will touch regulated data or where customer notification might be required if certain findings are confirmed.

What not to do

Some preparation mistakes are common enough to flag explicitly:

  • Do not panic-patch before the test. It is tempting to fix everything that looks obvious in the week before the engagement. This is usually counterproductive: it creates scope confusion, hides root causes, and produces a report that reflects neither your real posture nor your patched posture.
  • Do not change the application during the test. Deployments mid-engagement invalidate findings. If a critical fix is necessary, coordinate it with the tester.
  • Do not filter the tester out of WAF logs entirely. If the tester’s traffic is invisible to your monitoring, you lose the ability to validate detection. Allowlist for rate-limiting; preserve visibility.
  • Do not over-share the engagement brief. If you want any realism in the result, the broader team should not know the exact timing or scope. Awareness can be enough to subtly improve behaviour and contaminate the test.
  • Do not skip the retest. It is the difference between “we think we fixed it” and “it is provably fixed”. Always book the retest at the same time as the initial engagement.

During and after the engagement

Once the test is running:

  • Check in daily, or at the cadence the tester proposes. Daily summaries surface issues early and keep your team oriented.
  • Act on “stop the test” findings immediately. These are notified to you in real time, not held back for the report.
  • Plan remediation capacity in advance. Most reports land with a 4–6 week remediation window before the retest. Engineering should have that capacity reserved.

After delivery, the work that converts the report into actual risk reduction is yours. Triage findings against your environment, sequence remediation by impact and effort, and book the retest once the work is genuinely complete. A report is not a result — the fixed environment is.
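
Sequencing by impact and effort can be as mechanical as a two-key sort: highest severity first, and among equal severities, the cheapest fix first. An illustrative sketch; the severity weights and day-based effort estimates are assumptions to tune to your own risk model, not a standard:

```python
# Illustrative only: severity weights and effort estimates are
# assumptions; adjust them to your own risk model.
SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}


def remediation_order(findings: list[dict]) -> list[dict]:
    # Descending severity first, then ascending effort as a tiebreak.
    return sorted(
        findings,
        key=lambda f: (-SEVERITY_RANK[f["severity"]], f["effort_days"]),
    )


findings = [
    {"id": "F-3", "severity": "high", "effort_days": 5},
    {"id": "F-1", "severity": "critical", "effort_days": 10},
    {"id": "F-2", "severity": "high", "effort_days": 2},
]
ordered = remediation_order(findings)  # F-1, then F-2, then F-3
```

Even a crude ordering like this keeps engineering time pointed at the findings that move real risk, rather than whichever ticket was filed first.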

Frequently asked questions

Should we use production or staging for the test?

Production is preferred because it is the only environment that fully matches your real exposure. A representative staging environment with production-equivalent data and configuration is acceptable.

How far in advance should we book?

Most reputable testers book 4–8 weeks ahead for non-urgent engagements, longer for complex or large-scale tests. Same-week availability usually indicates either an unbooked tester or an unsuitable one.

Should we tell our staff that a penetration test is happening?

Generally, no — narrow awareness to your security or technical lead and incident response team. Broader awareness can contaminate the test, particularly where social engineering or phishing scenarios are in scope.

Should we patch obvious issues before the test starts?

Routine patching as part of normal hygiene is fine. Panic-patching specifically to clean up before the test is counterproductive — it hides the issues the test is supposed to find and root-cause.
