Why the CISO’s hand is stronger now
When coding assistants, retrieval systems, and autonomous agents sit on the critical path of delivery, “security” stops being only a gate at release and becomes a question of how work is structured: data boundaries, identity, supply chain, model governance, and evidence for auditors. That pushes the CISO and the security architecture function toward the same tables where product and platform leaders decide roadmaps—especially now that NIST Cybersecurity Framework 2.0 explicitly elevates govern to a core function alongside identify, protect, detect, respond, and recover.
For generative AI specifically, NIST’s AI Risk Management Framework and its Generative AI profile give a vocabulary the whole enterprise can use: govern, map, measure, manage. That is leverage—if the organization funds the cross-functional work to use it.
Mythos versus the meter
Vendor myth-making—brand, benchmarks, safety theater—can dominate social feeds. In day-to-day risk management, those narratives are a thin layer on top of a thicker reality: software still ships with defects, disclosures still scale with ecosystem attention, and your backlog still reflects global publication rates more than any single company’s press cycle.
Put bluntly: the story of the month matters for reputation; the stock of disclosed issues matters for capacity planning. When prioritization, patching SLAs, and architecture standards are on the line, anchor to measurable baselines (publication trends, exploit intelligence, asset criticality)—not to whichever mythic frame is trending.
What the last five calendar years of CVE publications show
Security researcher Jerry Gamblin’s annual “CVE Data Review” series aggregates published CVE counts (with rejected records removed) from sources such as NVD JSON and the official CVE List in V5 form. The chart below uses his published figures for 2022–2025; 2021 is derived from the 2022 review’s stated year-over-year growth (+24.51% over 2021), because a standalone 2021 annual post was not located in the same series.
Figure: Published CVEs per calendar year (excluding rejected). 2021 value back-calculated from the 2022 review’s +24.51% YoY. Sources: 2022, 2023, 2025 reviews (2024–2025 figures as stated in the 2025 review).
The directional takeaway is simple: the curve points up. Comparing full calendar years under the same methodology, 2023 to 2024 is roughly a 38% jump ((39,962 − 28,902) / 28,902); 2024 to 2025 is about +20.6% as reported in the 2025 review—still enough to set a new annual record (48,185 published CVEs).
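The growth figures above are easy to verify. A minimal sketch, using only the annual totals stated in the text (the dictionary below is assembled here for illustration, not taken from any API):

```python
# Published (non-rejected) CVE totals per calendar year, as quoted
# above from Gamblin's annual reviews.
published = {
    2023: 28_902,
    2024: 39_962,
    2025: 48_185,
}

def yoy_growth(counts: dict[int, int], year: int) -> float:
    """Percentage growth versus the prior calendar year."""
    prev = counts[year - 1]
    return (counts[year] - prev) / prev * 100

for year in (2024, 2025):
    print(f"{year - 1} -> {year}: {yoy_growth(published, year):+.1f}%")
# 2023 -> 2024: +38.3%
# 2024 -> 2025: +20.6%
```

Running the same calculation against next year’s review is a cheap way to keep capacity-planning assumptions anchored to the meter rather than the mythos.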
Why a center of excellence in architecture—now
A center of excellence (CoE) is not a slide deck; it is a named coalition with authority to set patterns, review exceptions, and measure adoption. Gartner’s long-running guidance on cloud centers of excellence is a useful analogue: the CoE model exists precisely because federated speed without shared guardrails produces expensive drift.
For security architecture in an AI-accelerated SDLC, an effective CoE typically owns:
- Reference architectures for app stacks, CI/CD, secrets, and model-serving boundaries—aligned with CISA Secure by Design expectations that vendors (and internal platforms) build in safe defaults rather than bolting on policy after the fact.
- Threat modeling and abuse cases for AI features, including data leakage and prompt injection—see OWASP Top 10 for LLM Applications as a starting taxonomy.
- Evidence packs for procurement and audit: traceability from control to implementation, which CSF 2.0’s governance emphasis makes easier to defend.
- Metrics that resist theater: time-to-remediate for criticals, coverage of SBOM or dependency alerts, percentage of services under approved patterns—not vanity counts of tickets closed.
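The metrics in that last bullet are concrete enough to compute. A sketch of two of them follows; every record and field name here is hypothetical, standing in for whatever your ticketing and inventory systems actually export:

```python
from datetime import date
from statistics import median

# Hypothetical findings: (severity, opened, remediated-or-None).
# Real inputs would come from your vulnerability tracker.
findings = [
    ("critical", date(2025, 1, 5), date(2025, 1, 19)),
    ("critical", date(2025, 2, 1), date(2025, 2, 8)),
    ("high",     date(2025, 1, 10), None),  # still open; excluded from TTR
]

# Median time-to-remediate for closed criticals, in days.
ttr_days = median(
    (closed - opened).days
    for sev, opened, closed in findings
    if sev == "critical" and closed is not None
)

# Percentage of services built on an approved reference pattern
# (hypothetical inventory numbers).
services_total = 40
services_on_pattern = 31
pattern_coverage = services_on_pattern / services_total * 100

print(f"median TTR (critical): {ttr_days} days")
print(f"pattern coverage: {pattern_coverage:.1f}%")
```

The point of the design is what it excludes: no count of tickets closed appears anywhere, so the numbers cannot be gamed by splitting or bulk-closing work items.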
Gamblin’s 2025 analysis underscores why prioritization must be ruthless at high volume: with tens of thousands of published CVEs annually, “patch everything” is not a strategy. An architecture CoE makes explicit which surfaces are non-negotiable (internet-facing identity, data planes, build systems) and which risks are accepted with a paper trail—precisely the institutional behavior boards expect when AI compresses delivery timelines.
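One way the CoE can make that prioritization explicit is a scoring rule that weights the non-negotiable surfaces and exploit evidence above raw severity. The sketch below is illustrative only: the weights, multipliers, and CVE IDs are assumptions, not a published standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # base severity, 0-10
    known_exploited: bool    # e.g. seen in an exploit-intelligence feed
    surface: str             # where the affected asset sits

# Non-negotiable surfaces from the text get heavy multipliers;
# everything else competes on severity and exploit evidence alone.
SURFACE_WEIGHT = {
    "internet-facing-identity": 3.0,
    "data-plane": 2.5,
    "build-system": 2.5,
    "internal": 1.0,
}

def priority(f: Finding) -> float:
    score = f.cvss * SURFACE_WEIGHT.get(f.surface, 1.0)
    if f.known_exploited:
        score *= 2.0  # active exploitation trumps theoretical severity
    return score

# Placeholder records for illustration.
backlog = [
    Finding("CVE-0000-0001", 9.8, False, "internal"),
    Finding("CVE-0000-0002", 7.5, True,  "internet-facing-identity"),
    Finding("CVE-0000-0003", 8.1, False, "build-system"),
]

for f in sorted(backlog, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 1))
```

Note that the 9.8-severity internal finding ranks last: a lower-severity flaw on an internet-facing identity surface with known exploitation outranks it, which is exactly the accepted-risk-with-a-paper-trail behavior the paragraph above describes.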
References and further reading
- Jerry Gamblin, 2025 CVE Data Review (figures for 2024–2025, methodology).
- Jerry Gamblin, 2023 CVE Data Review (2023 annual total).
- Jerry Gamblin, 2022 CVE Data Review (2022 total; YoY vs 2021).
- CVE.ICU — interactive CVE analytics (YTD, growth, CNA views).
- National Vulnerability Database (NVD) — U.S. government vulnerability database.
- CVE Program — CVE Records, CNAs, program policy.
- NIST Cybersecurity Framework 2.0.
- NIST AI Risk Management Framework and Generative AI profile.
- CISA Secure by Design.
- OWASP Top 10 for LLM Applications.
- Gartner, Execute Your Cloud Strategy With a Cloud Center of Excellence (CoE operating model patterns).