Third-party risk management is the compliance workstream most commonly treated as a project by programmes that should be treating it as an operation. The initial vetting of a new vendor — questionnaires, documentation review, contract negotiation — is handled well at most organisations. The ongoing work of reassessing vendors, monitoring their posture over time, catching changes in their sub-processor lists, and offboarding them cleanly is handled poorly at most organisations. The result is a vendor inventory that’s accurate on the day it was built and decays predictably thereafter.
This article lays out the annual cycle for TPRM — the operational calendar that keeps the inventory current, the assessments fresh, and the offboarding clean. It is built around the Shared Assessments SIG (2025 release) and the CSA CAIQ as the primary assessment tools, with tier-based cadence logic and explicit reassessment triggers.
TPRM as an annual operation, not a project
The intellectual move that separates functional TPRM programmes from dysfunctional ones: treating the programme as a continuous operation with an annual project shape, rather than as a series of one-off assessments triggered by procurement events.
A continuous operation has a calendar. Each month, each quarter, and each year have declared activities. Vendors are assessed on declared cadences determined by their tier, not when someone remembers to schedule a reassessment. Reassessment triggers — time-based and event-based — are documented and enforced. Offboarding is a defined process with evidence, not a mental note that the vendor isn’t being used anymore.
The one-off-assessment model produces a TPRM dataset that’s artificially clean at procurement time and increasingly wrong thereafter. The continuous operation model costs more to run and produces a TPRM dataset that stays approximately current.
The editorial position: you can run a world-class TPRM programme on a spreadsheet with disciplined cadence, or a dysfunctional TPRM programme on an expensive TPRM platform. Tooling matters less than the decision to actually operate the programme continuously.
The annual cycle at a glance
A mature TPRM programme with an active vendor portfolio of 100–500 vendors looks approximately like this:
Monthly: Vendor inventory reconciliation, new-vendor onboarding assessments,
incident/breach monitoring (news and vendor notifications), SIG/CAIQ
responses reviewed, tier changes processed
Quarterly: Critical-tier vendor reassessment batch, high-tier vendor sampling,
vendor policy review, scope change tracking, vendor-side audit
report receipts (SOC 2 reports, ISO certificates)
Semi-annual: Vendor tiering model review, assessment methodology review,
cross-functional programme review
Annual: Full high-tier vendor reassessment, medium-tier reassessment,
low-tier sampling, vendor inventory audit, programme metrics
review, vendor exit reviews, tooling and framework refresh
Critical-tier vendors drive the quarterly rhythm; the annual cycle covers the broader portfolio. This is the cadence that produces a TPRM dataset that stays current enough to defend to an auditor or respond to a customer security questionnaire.
Vendor tiering framework
Tiering is the most important decision in TPRM programme design. Uniform assessment depth across a vendor portfolio wastes effort on low-risk vendors and under-invests in high-risk ones. Tier-based cadence concentrates effort where it matters.
The four tiers most programmes use:
Critical. Vendors whose failure or compromise would materially affect the organisation’s ability to operate or would expose sensitive customer data. Cloud infrastructure providers. Primary database-as-a-service. Core payments infrastructure. Single-vendor-dependency SaaS that, if lost, would cause substantial disruption. Typically 2–10 vendors. Reassessment cadence: quarterly.
High. Vendors with access to sensitive data or material operational dependency, but not at the critical tier. Secondary infrastructure. Tools that handle PII, PHI, or payment data in contained scopes. Major third-party integrations. Typically 15–50 vendors. Reassessment cadence: semi-annual review, with a formal full reassessment annually.
Medium. Vendors with access to some business data but not sensitive customer data, or material operational dependency that’s quickly replaceable. Marketing tools, analytics platforms, HR tools, professional services firms. Typically 30–150 vendors. Reassessment cadence: annual.
Low. Vendors with minimal data access and easily replaceable. Office supplies, travel booking, single-use SaaS. Typically 50–300 vendors. Reassessment cadence: every two years, or on material change.
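The tier-to-cadence mapping above is simple enough to encode directly. A minimal sketch — the tier names and day counts mirror the cadences described in this section, but the function names and schema are illustrative, not drawn from any TPRM platform:

```python
from datetime import date, timedelta

# Tier cadences from the tiering framework above, expressed in days.
# Quarterly ≈ 91 days, semi-annual ≈ 182, annual = 365, biennial = 730.
REASSESSMENT_CADENCE_DAYS = {
    "critical": 91,
    "high": 182,
    "medium": 365,
    "low": 730,
}

def next_reassessment_due(tier: str, last_assessed: date) -> date:
    """Date the next time-based reassessment falls due for a vendor."""
    return last_assessed + timedelta(days=REASSESSMENT_CADENCE_DAYS[tier])

def is_overdue(tier: str, last_assessed: date, today: date) -> bool:
    """True when the vendor's time-based cadence has lapsed."""
    return today > next_reassessment_due(tier, last_assessed)
```

Keeping the cadence table in one place — rather than scattered across calendar invites — is what makes the annual tier review (and any cadence change it produces) enforceable.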
The tiering criteria are not universal. An organisation handling regulated health data will tier differently from one handling marketing analytics. What matters is documenting the tiering criteria, applying them consistently, and revisiting the tier assignment at least annually.
A common failure pattern: vendors get tiered at onboarding and never re-tiered. A vendor that started as Medium may have expanded scope into territory that warrants High. A vendor that started as Critical may have been de-scoped. Annual tier review catches drift.
Phase-by-phase: onboarding, assessment, reassessment, offboarding
The four phases every vendor moves through. Programmes routinely over-invest in the first two and under-invest in the last two.
Onboarding. Initial vetting before procurement. Sanity check against prohibited vendors, quick tier assessment, questionnaire (usually SIG Lite or equivalent for first contact), documentation review (SOC 2 report, ISO 27001 certificate, pen test summary if available), contract review including data processing terms. Output: onboarding assessment record, tier assignment, procurement go/no-go recommendation.
Full assessment. More thorough review after onboarding, typically for High and Critical tier vendors. SIG Core (627 questions in the 2025 release) or equivalent, architectural review, sub-processor inventory, residual risk evaluation. Output: detailed assessment record, compensating controls documented, outstanding risk items tracked.
Reassessment. The operational rhythm — running at cadences determined by tier plus triggers (covered below). Updated questionnaires, refreshed documentation, verification that sub-processor lists haven’t changed materially, review of any incidents or concerns during the period. Output: reassessment record showing whether vendor posture has improved, degraded, or held steady.
Offboarding. The most commonly skipped phase. When a vendor contract ends, when the vendor is replaced, or when scope changes eliminate the need for the vendor. Documented revocation of vendor access, confirmation of data return or deletion per contract terms, update of vendor inventory, notification to any affected downstream processes. Output: offboarding record with completion evidence.
The pattern worth taking seriously: programmes that handle onboarding and assessment well but fail at reassessment and offboarding have vendor inventories that are clean at entry, stale in the middle, and wrong at the exit. Auditors notice. Customers’ procurement teams notice. Regulators notice after incidents.
SIG and CAIQ timing in the cycle
Two dominant vendor assessment tools with different structures and appropriate uses.
SIG (Standardized Information Gathering) is maintained by Shared Assessments. The 2025 release (September 2025) added ISO 42001 AI governance mapping, updated PCI DSS 4.0 and ISO 27001:2022 references, and refined coverage of AI-related risks. Three scope levels: SIG Lite (128 questions), SIG Core (627 questions), full SIG (1936 questions). Different scopes for different tiers.
CAIQ (Consensus Assessments Initiative Questionnaire) is maintained by the Cloud Security Alliance. Designed specifically for cloud service providers. Maps to the CSA Cloud Controls Matrix (197 control objectives across 17 domains). CAIQ Lite covers all 17 control domains in compressed form.
In an annual TPRM cycle, timing for each:
- SIG Lite at onboarding for most vendors. Fast, broadly informative, and usable as a triage artefact.
- SIG Core at full assessment for High and Critical tier vendors. Depth proportional to data access and dependency.
- CAIQ for cloud infrastructure vendors specifically. A cloud-first vendor with a CAIQ response answers cloud-specific questions SIG is structurally not designed to probe.
- Custom SIG for vendors in regulated areas where specific regulatory mapping matters (healthcare, payments, financial services). The 2025 SIG supports generating custom subsets by regulation, control family, or risk domain.
- Reassessment typically uses delta-updated SIG — reviewing last year’s responses, flagging changes, requesting new evidence for material shifts.
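Mechanically, a delta-updated reassessment is a comparison of two response sets. A minimal sketch, assuming responses are keyed by question ID (the IDs here are made up for illustration):

```python
def sig_response_delta(previous: dict, current: dict) -> dict:
    """Compare two questionnaire response sets keyed by question ID.

    Returns the questions that changed, appeared, or disappeared between
    cycles -- the items that warrant fresh evidence requests.
    """
    changed = {
        q: (previous[q], current[q])
        for q in previous.keys() & current.keys()
        if previous[q] != current[q]
    }
    added = {q: current[q] for q in current.keys() - previous.keys()}
    removed = {q: previous[q] for q in previous.keys() - current.keys()}
    return {"changed": changed, "added": added, "removed": removed}
```

The point of the delta is effort allocation: unchanged answers get a spot check, changed answers get an evidence request.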
The editorial position worth stating: neither SIG nor CAIQ is sufficient alone for critical-tier vendor assessment. Both tools plus architectural review, penetration test summaries, and direct auditor engagement where appropriate. Single-questionnaire assessments of critical vendors produce false confidence.
Reassessment triggers
Time-based reassessment cadences (quarterly for critical, annual for high/medium, etc.) are the baseline. Event-based triggers supplement the cadence for specific circumstances. A mature programme has both documented and enforced.
Time-based: Tier-based cadence as above. Policy should specify the cadence in writing and the process should automate reminders against it.
Vendor-initiated events: Acquisition or material ownership change. Material change in sub-processor list. New jurisdictions added (or lost). Significant scope expansion. New data types processed. Breach notification from the vendor.
Your-organisation-initiated events: Material expansion of your use of the vendor. New data types sent to the vendor. New jurisdictions involved. Reclassification of the vendor to a higher tier. Compliance scope change affecting the vendor.
External events: Publicly disclosed vendor incident or breach (CVE affecting their infrastructure, ransomware, data leak). Regulatory action against the vendor or their sub-processors. Major shift in their public security posture (lost certification, new certification obtained). Widely-reported industry event that affects vendor category (Log4j-style cross-vendor impact).
The discipline: each trigger type has a defined response, not “let’s see if we need to do something.” Breach notification triggers a compressed reassessment within X business days. Sub-processor change triggers a review with documented yes/no. Acquisition triggers a tier review.
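"Each trigger type has a defined response" can be enforced by making the trigger table itself the policy artefact. A sketch — the trigger names, response actions, and deadlines below are illustrative, not taken from any framework:

```python
# Hypothetical trigger-to-response table. The deadlines stand in for the
# "within X business days" values your policy should specify in writing.
TRIGGER_RESPONSES = {
    "breach_notification": {"action": "compressed_reassessment", "due_days": 5},
    "subprocessor_change": {"action": "materiality_review", "due_days": 10},
    "acquisition":         {"action": "tier_review", "due_days": 30},
    "scope_expansion":     {"action": "full_reassessment", "due_days": 30},
}

def response_for(trigger: str) -> dict:
    """Look up the documented response for a trigger.

    An unknown trigger is a policy gap, not a no-op -- fail loudly rather
    than silently doing nothing.
    """
    try:
        return TRIGGER_RESPONSES[trigger]
    except KeyError:
        raise ValueError(f"no documented response for trigger: {trigger}")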
Where TPRM programmes commonly underperform
Five patterns account for most TPRM programme weakness.
Inventory drift. The vendor inventory was accurate at onboarding and has decayed since. New procurement the GRC team didn’t know about; existing vendors quietly expanded scope; sub-processors added without disclosure; vendors replaced without offboarding. Mitigation: monthly inventory reconciliation against procurement records, HR system for SaaS subscription discovery, and finance system for vendor payments.
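The monthly reconciliation described above is, at its core, a set comparison between the inventory and the external records that reveal vendors. A minimal sketch, assuming each source reduces to a set of vendor identifiers:

```python
def reconcile_inventory(inventory: set, procurement: set, payments: set) -> dict:
    """Reconcile the vendor inventory against procurement and finance records.

    Vendors contracted or paid but absent from the inventory are drift;
    inventory entries with no procurement or payment trail are candidates
    for an offboarding review.
    """
    known_externally = procurement | payments
    return {
        "missing_from_inventory": known_externally - inventory,
        "possibly_stale": inventory - known_externally,
    }
```

Normalising vendor names across the three sources is the hard part in practice; the comparison itself is trivial once identifiers line up.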
Reassessment gaps. Critical-tier vendors get quarterly reassessment for two quarters, then procurement pressure or team bandwidth causes the cadence to slip. By year-end, critical vendor reassessments have been done half as often as policy specified. Mitigation: automated cadence tracking, hard enforcement (completion required before the team closes out the quarter), and management review of missed cadences as a programme metric.
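The "automated cadence tracking" mitigation is a report, not a platform feature. A sketch of the management-review metric — tier cadences reuse the day counts from the tiering section; the record fields are illustrative:

```python
from datetime import date, timedelta

# Illustrative cadence table (days), matching the tier cadences above.
CADENCE_DAYS = {"critical": 91, "high": 182, "medium": 365, "low": 730}

def overdue_reassessments(vendors: list, today: date) -> list:
    """Return vendors whose time-based cadence has lapsed, most overdue
    first -- the list management reviews when cadences are missed."""
    overdue = [
        v for v in vendors
        if today - v["last_assessed"] > timedelta(days=CADENCE_DAYS[v["tier"]])
    ]
    return sorted(overdue, key=lambda v: v["last_assessed"])
```

Running this weekly and reviewing a non-empty result is the "hard enforcement" the mitigation calls for.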
Over-reliance on vendor-supplied reports. Vendor-supplied SOC 2 reports, ISO 27001 certificates, and penetration test summaries are useful inputs but they’re not substitutes for assessment. Vendors’ public reports describe the vendor’s controls, not how the vendor’s controls interact with your scope. Mitigation: treat vendor reports as evidence inputs, not assessment outputs.
Offboarding skipped. The most common failure. Vendors leave but remain in the inventory, still appear to have access, and still carry your data. Even when procurement processes the change, the TPRM team isn’t always told. Mitigation: procurement notification of any vendor change as a first-class workflow event, and quarterly inventory reconciliation that catches what procurement notification missed.
SIG responses accepted without verification. Vendors’ SIG responses reflect what vendors say about their controls, not what auditors would find. A meaningful fraction of SIG responses don’t survive auditor scrutiny. Mitigation: for Critical and High tier vendors, material SIG claims should be verified against independent evidence (a SOC 2 report, a pen test summary, architectural documentation) rather than accepted on the vendor’s word.
The underlying pattern: TPRM is an area where programmes routinely over-invest at the beginning of the vendor relationship (onboarding vetting is usually good) and under-invest at every subsequent stage. Reversing that emphasis — accepting that onboarding is necessarily incomplete and treating the ongoing operation as the serious work — produces materially better programmes at marginal additional cost.
FAQ
How many vendors should we be assessing?
All of them that have access to systems, data, or operational dependencies that matter. The tiering framework decides assessment depth; the inventory itself shouldn’t exclude vendors. Excluding vendors from the inventory “because they’re small” is the origin of most TPRM blind spots.
SIG or CAIQ — which should we use?
Both, for different vendors and stages. SIG for broad vendor assessment across categories; CAIQ for cloud service providers specifically. Most mature programmes use SIG as the primary tool with CAIQ layered in for cloud vendors.
How often should critical vendors be reassessed?
Quarterly is the common cadence. Monthly is appropriate for the very smallest number of most-critical vendors (primary cloud infrastructure, for example). Annual reassessment of critical-tier vendors is a gap that auditors catch.
Do we need a TPRM platform?
At small vendor portfolio sizes (under ~50 vendors), a well-organised spreadsheet works. Between 50 and 200, platforms start to pay off — primarily through reducing the operational overhead of cadence management and evidence tracking. Above 200, platforms are usually economically justified.
What’s the role of continuous monitoring services?
Services like Bitsight, SecurityScorecard, UpGuard, and others produce external-view scores of vendor security posture, updated continuously. Useful as a monitoring input but not a substitute for questionnaire-based assessment. Best treated as an early-warning signal for triggered reassessment.
How do we handle vendors who won’t complete our questionnaires?
For Critical and High tier vendors: this is usually a dealbreaker. A vendor that won’t complete a SIG or equivalent can’t be assessed and therefore can’t be properly risk-managed. For Medium and Low: may be acceptable depending on context, with compensating documentation (the vendor’s public-facing SOC 2 report, for instance). Document the decision.
Should we accept a SIG Lite for a critical-tier vendor?
As onboarding triage, yes. As assessment depth, no. Critical-tier vendors warrant SIG Core or equivalent, plus additional evidence.
What’s the 2025 SIG update about AI governance?
The September 2025 SIG release added references to ISO 42001 (AI Management Systems) and expanded coverage of AI-related risks. For vendors using AI meaningfully, the 2025 SIG probes AI-specific practices in ways earlier versions didn’t.
How do we document offboarding?
An offboarding record per vendor capturing the contract end date, access revocation confirmation, data deletion or return confirmation per contract terms, residual obligations (if any), and closeout sign-off. Retained as evidence of compliance with data minimisation obligations.
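The offboarding record described above maps naturally onto a small structured type. A sketch — the field names are illustrative, chosen to mirror the items the record should capture:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class OffboardingRecord:
    """One record per offboarded vendor; retained as compliance evidence."""
    vendor: str
    contract_end: date
    access_revoked: bool = False
    data_disposition_confirmed: bool = False  # return or deletion per contract
    residual_obligations: list = field(default_factory=list)
    signed_off_by: Optional[str] = None

    def is_complete(self) -> bool:
        """Closeout requires revocation, data disposition, and sign-off."""
        return (
            self.access_revoked
            and self.data_disposition_confirmed
            and self.signed_off_by is not None
        )
```

The `is_complete` check is the point: an offboarding that can't pass it stays open, rather than fading into the mental-note category the article warns about.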
What happens when a vendor’s sub-processor changes?
Triggered reassessment. The sub-processor change may or may not be material depending on what the sub-processor does and what data it accesses, but the trigger itself is non-optional. Document the assessment outcome and any resulting contractual or operational changes.