By Larissa Kolver, Head of Cyber Security at Securecom
Author Introduction
As a security leader focused on measurable outcomes, I bridge the gap between threat intelligence and day-to-day engineering, turning findings into fixes that executives can track. My background spans SOC operations, vulnerability management, and programme governance, which means I care as much about cadence and remediation as I do about discovery. I’ve seen first-hand how point-in-time checks leave Kiwi organisations exposed between releases, so my mission is simple – help teams move to evidence-driven assurance that keeps pace with change while keeping compliance costs in check.
Key Takeaway: Value shows up quickly when you embed testing into everyday delivery – connect ticketing and chat, set a ruthless cadence for fixes and retests, and track a small set of KPIs that drive behaviour.
Outline:
- Days 0-14 – connect tools, agree change windows, kick off baseline scans and an initial manual test
- Days 15-45 – prioritise fixes with product owners, establish fast-path retests for high-risk changes
- Days 46-90 – review findings trends, unblock remediation bottlenecks, rehearse breach playbooks
- KPIs to watch – time-to-first-report, critical fix lead time, retest pass rate, reduction in open criticals, audit evidence readiness
Introduction
When organisations move from point-in-time penetration tests to a continuous testing programme, the first 90 days determine whether you realise value or simply add more noise to the backlog. Success is not just about running more tests. It is about integrating triage and retest into existing workflows, measuring the right things, and creating a governance rhythm that keeps remediation moving. Done well, continuous assurance supports ISO 27001 Annex A 8.8 expectations for proactive vulnerability management and provides clean evidence for audits. (ISMS.online)
There is also a New Zealand context that matters. If you ever face a notifiable privacy breach, you must assess and notify the Privacy Commissioner and affected people as soon as practicable, with the Office of the Privacy Commissioner expecting notification within 72 hours where possible. Operationalising security testing and remediation helps you meet these obligations and makes breach assessments faster and better informed. (New Zealand Legislation)
Finally, avoid the common trap of equating automated vulnerability scans with true penetration testing. Scans provide broad, automated coverage; penetration testing adds accredited, manual exploitation to validate real business impact. Your programme should use both in tandem – automation for cadence and human-led testing for depth. (firemon.com)
Days 0-14 – set up and first signals
1) Define scope and roles
List in-scope systems and environments, owners, change windows, and test constraints. Confirm how production will be exercised safely and who approves attack windows. Include Legal and Privacy in the loop early so that safe-harbour and breach-notification considerations are understood.
2) Connect the workflow
Integrate your testing platform with ticketing and chat so that each verified finding creates an actionable ticket with severity, exploit narrative, and replication steps. Aim for tickets to land in the right squad’s backlog with a clear SLA for triage. This streamlines DevSecOps and reduces time to patch. (IBM)
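The routing step described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual webhook schema – the finding fields, ownership map, and SLA values are all assumptions you would replace with your own CMDB and triage policy.

```python
from dataclasses import dataclass

# Hypothetical finding shape; field names are illustrative only.
@dataclass
class Finding:
    title: str
    severity: str            # "critical" | "high" | "medium" | "low"
    asset: str
    exploit_narrative: str
    replication_steps: str

# Illustrative asset-to-squad ownership map; in practice this would come
# from your CMDB or service catalogue.
OWNERSHIP = {"payments-api": "squad-payments", "portal-web": "squad-portal"}

# Example triage SLAs in business days, by severity.
TRIAGE_SLA_DAYS = {"critical": 1, "high": 2, "medium": 5, "low": 10}

def finding_to_ticket(finding: Finding) -> dict:
    """Turn a verified finding into a ticket payload routed to the owning squad."""
    # Unowned assets fall back to a central triage queue rather than being lost.
    team = OWNERSHIP.get(finding.asset, "security-triage")
    return {
        "summary": f"[{finding.severity.upper()}] {finding.title}",
        "team": team,
        "sla_days": TRIAGE_SLA_DAYS[finding.severity],
        "description": (
            f"Exploit narrative:\n{finding.exploit_narrative}\n\n"
            f"Replication steps:\n{finding.replication_steps}"
        ),
    }
```

The fallback queue matters for the auto-routing KPI below: findings that cannot be matched to an owner should surface visibly rather than silently mis-route.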
3) Establish cadence and kick off tests
- Start continuous external scans to surface obvious exposure quickly
- Launch an initial manual penetration test on a high-value web app or API to calibrate depth and reporting style
- Agree retest windows for critical findings up front so you do not lose momentum
4) Produce the first executive view
Within two weeks, publish a one-page summary: time-to-first-report, number of criticals, top 3 exploit narratives, and next retest date. This keeps leadership focused on outcomes, not tool output.
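Generating that one-page view from exported findings can be a few lines of scripting. The sketch below assumes a hypothetical export of finding dicts with 'severity' and 'exploit_narrative' keys; adapt the field names to whatever your platform produces.

```python
from datetime import date

def executive_summary(first_report_days: int, findings: list[dict],
                      next_retest: date) -> str:
    """Render the one-page executive view as plain text."""
    criticals = [f for f in findings if f["severity"] == "critical"]
    # Top 3 exploit narratives, drawn from the critical findings.
    top_narratives = [f["exploit_narrative"] for f in criticals[:3]]
    lines = [
        f"Time-to-first-report: {first_report_days} business days",
        f"Open criticals: {len(criticals)}",
        "Top exploit narratives:",
        *[f"  - {n}" for n in top_narratives],
        f"Next retest: {next_retest.isoformat()}",
    ]
    return "\n".join(lines)
```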
KPI targets for Day 0-14
- Time-to-first-report – under 10 business days from kick-off
- Ticket auto-routing accuracy – over 90 percent of findings land with the right team
- Retest window agreed – within 7 to 14 days for criticals
These are pragmatic starting points you can tighten later.
Days 15-45 – prioritise fixes and accelerate retests
5) Prioritise with product owners
Do not fix by CVSS score alone. Prioritise items that combine exploitability with business impact – for example, chained findings that traverse auth boundaries or expose sensitive data classes. This is where manual testing adds value beyond scanning. (firemon.com)
6) Establish fast-path retests
Create a fast-path for retests on criticals and on high-risk releases. A named triage squad should own the queue, with service levels for retest confirmation. Retesting quickly shortens exposure windows and builds trust with delivery teams.
7) Tune notifications and dashboards
Avoid alert fatigue. Configure channel notifications for only P1-P2 events and weekly digests for the rest. Provide DevSecOps-aligned dashboards that visualise trend lines and team-level backlog. This reinforces collaboration and reduces patch times. (IBM)
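The P1-P2 rule above is simply a partition of the event stream. The priority labels and event shape below are illustrative assumptions; the point is that everything below the immediate threshold accumulates for the weekly digest instead of paging anyone.

```python
# Priorities that warrant an immediate channel notification (example policy).
IMMEDIATE = {"P1", "P2"}

def route_notifications(events: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split events into (alert_now, weekly_digest) per the noise-reduction rule."""
    alert_now = [e for e in events if e["priority"] in IMMEDIATE]
    digest = [e for e in events if e["priority"] not in IMMEDIATE]
    return alert_now, digest
```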
KPI targets for Day 15-45
- Critical fix lead time – median under 14 days
- Retest pass rate – at least 70 percent of fixes pass on first retest
- Reduction in open criticals – down by 25 to 40 percent from baseline
- Evidence readiness – every closed item has proof of fix and a retest artefact linked to the ticket
These measures prove the programme is changing behaviour, not just producing reports.
Days 46-90 – embed governance and prove value
8) Run a monthly trend review
Hold a cross-functional review including Security, Product, Engineering, and Privacy. Discuss trends, blockers, and upcoming change windows. Compare performance across squads to spotlight what helps remediation flow faster. In New Zealand, tie this to breach tabletop exercises so your team can quickly assess and notify when needed. (privacy.org.nz)
9) Align to ISO 27001 Annex A 8.8
Map your process to Annex A 8.8 – how you identify, assess, and remediate technical vulnerabilities, how you stay on top of threat intelligence, and how you demonstrate control effectiveness. Keep an asset and vulnerability register current to satisfy auditors and reduce audit preparation overhead. (ISMS.online)
10) Local threat context in reporting
Reference relevant New Zealand threat insights in your dashboards or monthly summaries. NCSC’s quarterly Cyber Security Insights series provides useful context on incident patterns and sectors of interest. Using local data helps boards and executives internalise risk and supports prioritisation. (NCSC NZ)
KPI targets for Day 46-90
- Critical fix lead time – median under 10 days
- Retest pass rate – 80 percent or better
- Reduction in open criticals – 50 percent from baseline
- Evidence readiness – 100 percent of closed items are audit-ready
- Time to triage new high-risk findings – under 2 business days
These targets align with an outcomes-first programme and create a defensible story for regulators, auditors, and the board.
What good looks like by the end of 90 days
- Continuous assurance is routine – scan results and manual findings land as tickets, are triaged within agreed SLAs, and retested quickly. Delivery teams see security issues as normal work items, not special projects.
- Fewer open criticals and faster fixes – your backlog of criticals is trending down and median fix time is improving monthly. DevSecOps practices are freeing security staff for higher value work. (IBM)
- Audit and privacy evidence is tidy – every closed ticket links to proof-of-fix and a retest artefact. You can demonstrate an ISO 27001-aligned process and you have rehearsed breach assessments using local OPC guidance. (ISMS.online)
- Clarity on pen testing vs scanning – stakeholders understand you need both breadth and depth. Scans are for coverage; accredited testers validate real business impact. (firemon.com)
Common pitfalls to avoid
- Over-reliance on tooling – large scan outputs without exploit narratives overwhelm teams. Insist on clear exploitation context for high priority items. (firemon.com)
- No fast retest lane – delaying retests allows regressions and reduces confidence. Bake retest SLAs into the workflow from day one.
- Metrics that don’t move behaviour – counting findings is less useful than measuring speed to fix and reduction in open criticals. Focus on lead indicators that teams can influence week to week.
The KPI set – precise definitions you can lift into your dashboard
- Time-to-first-report – number of business days from kick-off to delivery of the first actionable report. Lower is better because it shortens the initial exposure window.
- Critical fix lead time – median calendar days from ticket creation to deployment of a verified fix for severity critical items.
- Retest pass rate – percentage of fixes that pass verification on the first retest in a given period.
- Reduction in open criticals – percentage change in the count of open critical tickets compared with the Day 0 baseline.
- Audit evidence readiness – proportion of closed tickets that include proof-of-fix and a linked retest artefact.
- Time to triage new high-risk findings – median business hours from creation to first accepted assignment in the right squad.
These are the measures most correlated with real risk reduction and audit readiness in a continuous testing context.
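The definitions above can be computed directly from a ticket export, which keeps the dashboard honest because the numbers trace back to raw records. The sketch below assumes illustrative ticket fields ('severity', 'opened', 'fixed', 'passed_first_retest', 'has_evidence'); map them to your own ticketing schema.

```python
from statistics import median

def kpi_snapshot(tickets: list[dict], baseline_open_criticals: int) -> dict:
    """Compute the KPI set from ticket records exported from your tracker.

    Each ticket dict: 'severity' (str), 'opened' (date), 'fixed' (date or
    None for still-open items), 'passed_first_retest' and 'has_evidence'
    (bools, meaningful for closed items).
    """
    closed = [t for t in tickets if t["fixed"] is not None]
    closed_criticals = [t for t in closed if t["severity"] == "critical"]
    open_criticals = [t for t in tickets
                      if t["severity"] == "critical" and t["fixed"] is None]

    # Median calendar days from ticket creation to verified fix, criticals only.
    lead_times = [(t["fixed"] - t["opened"]).days for t in closed_criticals]
    return {
        "critical_fix_lead_time_days": median(lead_times) if lead_times else None,
        # Share of closed fixes that passed verification on the first retest.
        "retest_pass_rate": (
            sum(t["passed_first_retest"] for t in closed) / len(closed)
            if closed else None
        ),
        # Fractional reduction in open criticals against the Day 0 baseline.
        "open_criticals_reduction": (
            1 - len(open_criticals) / baseline_open_criticals
            if baseline_open_criticals else None
        ),
        # Share of closed tickets carrying proof-of-fix and a retest artefact.
        "evidence_readiness": (
            sum(t["has_evidence"] for t in closed) / len(closed)
            if closed else None
        ),
    }
```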
Next Steps
- Create a shared KPI dashboard and set conservative targets for the first 12 weeks
- Nominate a cross-functional triage squad with explicit ownership of retest workflow
- Schedule monthly trend reviews and a quarterly improvement session that includes privacy and incident response
- Map your evidence to ISO 27001 Annex A 8.8 and make sure you can rehearse OPC NotifyUs steps if needed
- Document your pen testing vs scanning policy so the difference is clear to teams and auditors (ISMS.online)
How do I get started with Pen Testing as a Service?
Contact us today to discuss how Securecom Pen Testing as a Service delivers real business outcomes: www.securecom.co.nz/contact-securecom
Why this approach works in New Zealand
- It aligns your security operations with ISO 27001 Annex A 8.8, which expects continuous management of technical vulnerabilities, not sporadic projects. (ISMS.online)
- It improves time to patch by integrating with DevSecOps practices your delivery teams already understand. (IBM)
- It strengthens readiness to assess and notify under the Privacy Act 2020 should a serious breach occur. (New Zealand Legislation)
- It keeps leadership grounded in local threat insight using NCSC’s quarterly reporting. (NCSC NZ)
PTAAS Blog Series
Rethinking Security Testing: A 5-Step Guide to Continuous Assurance
- Why annual pen tests are failing
- Building the business case for continuous penetration testing – how to model risk, cost, and compliance
- How to evaluate security penetration testing approaches and providers without getting trapped by ‘checkbox’ security
- Decision and contracting guide – selecting and onboarding a continuous pen testing partner
- Your first 90 days with a continuous penetration testing programme – metrics, rituals, and realising value

About the Author:
Larissa Kolver PMP®, AgilePM® – Head of Cyber Security, Securecom
Larissa is a seasoned cyber resilience leader who blends disciplined project governance with hands-on security engineering, drawing on more than a decade of experience across the financial, health and safety, and technology sectors. At Securecom she heads the Security Operations function, translating continuous attack-surface insights into actionable remediation plans that executives can measure. Larissa is passionate about turning board-level risk appetite into practical cadence – replacing once-a-year checkbox tests with data-driven assurance tied to every release. Her mission is simple: help Kiwi businesses stay one step ahead of attackers while keeping compliance costs in check.
Concerned about security vulnerabilities in your application environments?
Talk to us about a PTaaS cadence that lowers your business risk.