By Larissa Kolver, Security Operations Manager at Securecom
Author Introduction
Standing in the Securecom Security Operations Centre, I watch our security dashboards light up with change tickets and fresh threat intel every single day. That pace of change is exhilarating – and alarming – because an annual pen test simply cannot keep up. Let’s explore why the risk gap is widening and what early-move leaders are doing about it.
Key Takeaway: Annual, point-in-time tests cannot keep pace with frequent change, expanding attack surface, and New Zealand privacy obligations. Leaders need a path to continuous assurance rather than one-off checks.
Outline:
- The real driver of risk today – rapid change across cloud, SaaS, APIs, mobile, and AI
- Why point-in-time testing creates blind spots between releases
- Common confusion – vulnerability scanning vs true penetration testing
- Local context – obligations to detect, assess, and notify privacy breaches promptly in NZ
- What good looks like at this stage – visibility, cadence, and readiness to respond
Introduction
If your organisation has shifted to cloud platforms, integrated more SaaS, or ships code weekly, your security posture changes far more often than an annual test can capture. That gap between releases is where risk quietly accumulates. Compounding this, many teams still rely on basic scanning and mistake it for a full penetration test. The result is a false sense of safety while exposure widens. For NZ organisations, this is not just a technical problem – it is a governance issue tied to breach assessment and notification readiness under local privacy expectations. This article explains why the operating tempo of modern delivery requires a different approach and what early indicators of maturity look like.
The real driver of risk today – rapid change across cloud, SaaS, APIs, mobile, and AI
Modern delivery practices and platform choices expand the attack surface faster than periodic controls can keep up. Every new internet-facing service, exposed API, mobile release, or infrastructure change slightly alters your risk profile. Combine that with AI-related changes and data flows, and the overall posture is in motion most weeks of the year.
Two industry dynamics make this particularly acute:
- Attack patterns are shifting: The 2025 Verizon Data Breach Investigations Report found that vulnerability exploitation has overtaken phishing as a leading initial access vector, and web applications remain a favourite doorway. Annual testing does not reflect that reality quickly enough to matter.
- Change velocity is relentless: Frequent code releases, cloud migrations, SaaS integrations, and API exposure all increase attack surface and easily outpace point-in-time testing rhythms.
When the environment changes weekly but assurance is checked yearly, you are making decisions on stale signals.
Why point-in-time testing creates blind spots between releases
Point-in-time tests are a snapshot. They do not cover what happens after the test window closes. If your major releases, configuration changes, or third-party updates land between those snapshots, you are exposed without knowing it.
Recent market evidence underscores the problem:
- More findings per test, longer to fix: Some providers reported a 21 percent increase in findings per engagement in 2023, while mean time to remediate is lengthening as teams and budgets remain constrained. That combination widens the window of exploitability if testing cadence does not improve.
- Threats do not follow audit schedules: New weaknesses arrive via code pushes, infra changes, and emerging zero-days, so a once-a-year exercise cannot realistically maintain a current picture.
In other words, a yearly test might tell you yesterday’s story. The risk you need to manage is today’s, and it is evolving between releases.
Common confusion – vulnerability scanning vs true penetration testing
Many teams use vulnerability scanning as their primary control and assume the job is done. Scans enumerate known weaknesses; penetration testing simulates realistic exploitation to validate how far an attacker could go and what the business impact would be. The difference is not semantics – it is outcome. A pass on a scanner does not prove an attacker cannot chain low-severity issues into a critical compromise.
This distinction matters for prioritisation and remediation funding:
- Scans are breadth-oriented and fast, ideal for early detection and hygiene.
- Pen tests apply human creativity to chain flaws, bypass controls, and demonstrate impact, which helps you fix what matters first and justify the effort to do so.
Treating scans and pen tests as interchangeable creates blind spots and can lull stakeholders into a false sense of security.
Local context – obligations to detect, assess, and notify privacy breaches promptly in NZ
In New Zealand, leaders are accountable for how quickly their organisation can detect, assess, and notify notifiable privacy breaches. That expectation means assurance cannot be an annual ceremony; it must be operationalised so you know your exposure in close to real time and can respond credibly if something goes wrong.
Regulators and auditors also expect that vulnerability management is not a paper process. ISO 27001 programmes emphasise managing technical vulnerabilities and system security testing, even if not every control explicitly mandates a pen test. The spirit is clear: keep vulnerability management live, not episodic.
Finally, NZ breach-notification duties reference acting as soon as practicable. Your readiness to do that hinges on having timely evidence about your current weaknesses and whether fixes have worked, not just last year’s report.
What good looks like at this stage – visibility, cadence, and readiness to respond
Before choosing approaches or partners, it helps to define early indicators of maturity:
- Visibility that matches change: Maintain up-to-date inventories of internet-facing assets and APIs. Map which systems change most often and align testing and scanning rhythms accordingly. Couple automated hygiene with targeted human testing where risk and change are highest.
- Cadence tied to releases, not the calendar: Move from annual cycles to event-driven testing and periodic checks. For example, schedule manual verification for critical features, major releases, or significant configuration changes. Complement this with frequent automated checks to catch regressions early.
- Clear separation of scan vs pen test outcomes: Document the difference internally so executive stakeholders understand that “no high-severity scan findings” does not equal “resilient against chained attacks”. Use exploitation narratives to anchor priorities.
- Evidence that remediation works: Track time-to-first-report, critical-fix lead time, retest pass rate, and the reduction in open criticals. These are leading indicators that your security programme is shrinking risk windows rather than just creating reports.
- Governance that drills response: Align findings reviews with privacy breach exercises so decision-makers, legal, and communications teams can move quickly and confidently if something escalates.
These markers are intentionally solution-agnostic. The goal is not to prescribe a specific model here, but to make sure your team focuses on outcomes that actually reduce exposure between releases.
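As a lightweight illustration of the remediation indicators above (critical-fix lead time, retest pass rate, open criticals), here is a minimal sketch that computes them from an exported findings list. All field names, dates, and the data shape are hypothetical – adapt them to whatever your testing platform actually exports:

```python
from datetime import date
from statistics import mean

# Hypothetical export of findings from a testing platform.
# Each tuple: (severity, reported, fixed_or_None, retest_passed_or_None)
FINDINGS = [
    ("critical", date(2024, 8, 1),  date(2024, 8, 12), True),
    ("critical", date(2024, 9, 3),  date(2024, 9, 30), False),
    ("high",     date(2024, 9, 10), date(2024, 10, 1), True),
    ("critical", date(2024, 10, 5), None,              None),
]

def critical_fix_lead_time(findings):
    """Mean days from report to fix across closed critical findings."""
    days = [(fixed - reported).days
            for sev, reported, fixed, _ in findings
            if sev == "critical" and fixed is not None]
    return mean(days) if days else None

def retest_pass_rate(findings):
    """Share of retested findings whose fix verified cleanly."""
    retested = [ok for *_, ok in findings if ok is not None]
    return sum(retested) / len(retested) if retested else None

def open_criticals(findings):
    """Count of critical findings still awaiting a fix."""
    return sum(1 for sev, _, fixed, _ in findings
               if sev == "critical" and fixed is None)

print(critical_fix_lead_time(FINDINGS))  # mean of the 11- and 27-day fixes
print(retest_pass_rate(FINDINGS))
print(open_criticals(FINDINGS))
```

Even a spreadsheet version of this calculation is enough to start the conversation; the point is to track the trend over quarters, not to build tooling.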
Practical guidance you can act on this quarter
If you need momentum without triggering a full change programme, try these steps:
- Time-boxed mapping: Create a simple one-page map of the last 12 months of releases and platform changes, and lay it against your last independent pen test date. Highlight the longest gaps. This often surfaces high-risk areas where you could introduce targeted testing without boiling the ocean.
- API and internet-facing inventory: List the systems and APIs exposed to the internet, including mobile backends and admin interfaces. Note which ones changed since the last assurance window.
- Clarify language with stakeholders: Socialise a short explainer for executives and product owners that distinguishes scanning from penetration testing and explains why exploitation narratives matter for prioritisation.
- Define a minimal cadence: Propose an interim cadence that blends frequent automated checks for hygiene with manual verification for high-impact assets at key change events. You can pilot this on one critical product or customer-facing service first.
- Align with NZ privacy expectations: Confirm internally how potential notifiable breaches would be detected, assessed, and notified. Ensure the teams responsible for privacy compliance are part of the conversation about cadence and evidence.
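To make the first two steps concrete, here is a minimal sketch of the mapping exercise: given a last pen-test date and an asset register noting internet exposure and last significant change, it flags the exposed assets that have changed since the test was run. The asset names, dates, and record shape below are entirely hypothetical:

```python
from datetime import date

# Hypothetical inputs – replace with your own change log and asset register.
LAST_PEN_TEST = date(2024, 3, 15)

# Each tuple: (asset, internet_facing, last_significant_change)
ASSETS = [
    ("customer-portal", True,  date(2024, 11, 2)),
    ("payments-api",    True,  date(2024, 9, 20)),
    ("internal-wiki",   False, date(2024, 10, 1)),
    ("mobile-backend",  True,  date(2024, 2, 28)),
]

def assurance_gaps(assets, last_test):
    """Internet-facing assets changed after the last pen test,
    longest-unverified change first."""
    stale = [
        (name, (changed - last_test).days)
        for name, exposed, changed in assets
        if exposed and changed > last_test
    ]
    # Sort so the change that has gone unverified longest appears first.
    return sorted(stale, key=lambda row: row[1])

for name, days_after_test in assurance_gaps(ASSETS, LAST_PEN_TEST):
    print(f"{name}: changed {days_after_test} days after the last pen test")
```

The one-page version of this – a timeline of releases with the last test date marked on it – is usually enough to show executives where the longest gaps sit.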
Conclusion
Annual penetration tests served an earlier era when systems changed slowly, and attackers largely relied on well-known patterns. Today, cloud adoption, SaaS interconnects, rapid shipping, API proliferation, and AI-powered workflows mean your environment is different every month. Meanwhile, objective data shows more issues are being uncovered per engagement, and teams are taking longer to remediate, expanding the window where attackers can succeed if you are not testing and fixing continuously.
The right first move is not to buy anything. It is to recalibrate internal expectations: educate stakeholders on the difference between scanning and true penetration testing, acknowledge that threats do not wait for audits, and tune your assurance cadence to the way your organisation actually changes. Once that mindset is in place, you will be in a better position to evaluate approaches that deliver timely evidence, shorten exposure windows, and support the NZ obligation to assess and notify breaches promptly.
Next Steps
- Map your last 12 months of releases and major platform changes against your last pen test date.
- List critical internet-facing assets and APIs and note how their risk profile has changed since the last test.
- Clarify internally the difference between a vulnerability scan and an accredited penetration test.
- Start a conversation with Risk and Product leaders about what an appropriate testing cadence could look like.
- Reach out to me below if you want any guidance.

About the Author:
Larissa Kolver PMP®, AgilePM® – Security Operations Manager, Securecom
Larissa is a seasoned cyber resilience leader who blends disciplined project governance with hands-on security engineering, drawing on more than a decade of experience across the financial, health and safety, and technology sectors. At Securecom she heads the Security Operations function, translating continuous attack-surface insights into actionable remediation plans that executives can measure. Larissa is passionate about turning board-level risk appetite into practical cadence – replacing once-a-year checkbox tests with data-driven assurance tied to every release. Her mission is simple: help Kiwi businesses stay one step ahead of attackers while keeping compliance costs in check.
Concerned about security vulnerabilities in your application environments?
Talk to us about a PTaaS cadence that lowers your business risk.