A literature review of GRC Engineering vs. Traditional GRC: how compliance is shifting from audit-calendar documentation to continuous, code-driven assurance.

Governance, Risk, and Compliance (GRC) has, for most of its history, been a discipline of documents. Policies sit in portals, controls are attested to in spreadsheets, and audits are dress rehearsals performed once or twice a year for an external reviewer. Over the last three years, a counter-movement has emerged under the banner of GRC Engineering. Its central belief is that the practice of governance should be rebuilt on the same primitives that reshaped software delivery: version control, automated testing, observability, and continuous delivery.
This article surveys the literature around that shift. We examine how practitioners and researchers have defined the traditional GRC paradigm, trace the origins of GRC Engineering as a named discipline, and compare the two across the dimensions of evidence, cadence, ownership, and organisational posture. Our aim is to help security and compliance leaders locate themselves on this spectrum and decide where investment in engineering practices is likely to pay off.
Traditional GRC, as described by the OCEG in its foundational GRC Capability Model (the "Red Book"), is the "integrated collection of capabilities that enable an organization to reliably achieve objectives, address uncertainty, and act with integrity." In practice, most mature programs have operationalised this definition through three artefacts: a control framework (often mapped from NIST SP 800-53, ISO 27001, or the SCF), a policy library, and an evidence repository maintained inside a GRC platform such as Archer, ServiceNow IRM, or OneTrust.
Racz, Weippl, and Seufert (2010), in one of the earliest academic treatments of the term, characterise GRC as an integration of three historically separate disciplines whose operating cadence is set by the audit calendar. Subsequent practitioner literature has consistently framed GRC as a coordination layer over existing control owners rather than as a producer of controls in its own right. The emphasis throughout this literature is on coverage, meaning that every obligation has a named owner and an attested status, rather than on assurance that the control is materially working at any given moment.
The consequences are well documented. Kaplan and Mikes (2012, Harvard Business Review) describe how rules-based compliance regimes generate a "false sense of security" when decoupled from operational reality. A compliance program that re-tests most of its controls only once per audit cycle, and that relies on manual evidence collection in the intervening months, is effectively flying blind between snapshots. This is not a controversial observation among practitioners; it is the animating frustration that the GRC Engineering movement is responding to.
The organisational precursor is visible at Netflix, which from 2021 onward began hiring "Security Compliance Engineers" and, by late 2023, was openly advertising a "Security Engineer (L4) - GRC" role with the explicit framing that the company had made "a strategic bet to build and bring engineering to GRC." The job title had not yet stabilised on "GRC Engineer," but the idea that GRC work belonged inside an engineering organisation was, by that point, operational rather than hypothetical.
The earliest traceable public articulation of GRC Engineering as a named, distinct discipline came from Ayoub Fandi, then a security risk and compliance professional in London. On 21 September 2023, Fandi posted on LinkedIn about "the GRC Engineer role" as the future of the field. In November of the same year he launched the GRC Engineering Podcast and founded the GRC Engineer newsletter and community. The GRC Engineering Manifesto followed, co-authored with a group of practitioners who have since built out the community infrastructure (Discord, podcast, newsletter) around it. In late 2024 Fandi incorporated Security GRC Engineering Ltd in the UK, and in September 2025 he and Michael Rasmussen jointly refined a formal definition of the discipline on Rasmussen's Risk Is Our Business podcast, the closest thing the field currently has to an agreed scope.
The manifesto itself lays out a set of principles that reframe compliance as a software problem: prefer code over documents, prefer continuous signals over periodic attestations, prefer stakeholder-centric design over auditor-centric reporting, and prefer open source and shared standards over proprietary platforms.
Chris Hughes, writing on Resilient Cyber, has argued that the movement is best understood as the GRC equivalent of the DevOps transition. It is a response to the same structural problem, namely a slow, documentation-heavy function bottlenecking a fast-moving delivery organisation, and it uses the same structural remedy of automation, shared tooling, and shifted ownership. A parallel argument has come from the controls-content side, most prominently from the Secure Controls Framework community, which contends that control catalogues must become machine-readable so that evidence collection can be wired directly to the systems of record.
Among the practitioners who have since adopted the framing, AJ Yawn has been one of the more visible. After earlier work on cloud compliance automation at ByteChek, Armanino, and Aquia, he took on the GRC Engineering title at Aquia and now at Compyl, has contributed a book on GRC Engineering in AWS environments, and has spoken at venues including fwd:cloudsec 2025.
Adjacent bodies of work have fed into the movement. Aaron Rinehart and Kelly Shortridge's Security Chaos Engineering (O'Reilly, 2023) supplies the epistemology, arguing that controls should be tested like production systems, with deliberate perturbation, rather than verified through self-attestation. The NIST Open Security Controls Assessment Language (OSCAL) project supplies the interchange format that makes control-as-code practical across vendors. The wider Policy-as-Code community, including Open Policy Agent (with its Rego language) and Cedar, supplies the runtime.
The clearest way to see the distinction is across the lifecycle of a single control. Consider the obligation "production database access is restricted to approved personnel."
In a traditional program, this obligation lives as a policy statement, a mapping to an ISO 27001 Annex A control, and a quarterly access review exported from the IAM system into a spreadsheet, signed by a manager, and uploaded to the GRC tool. The evidence is a point-in-time artefact, and between reviews drift is invisible.
In an engineered program, the same obligation is expressed as a policy-as-code rule that queries the IAM system directly, runs on a schedule measured in minutes, emits a structured event on every violation, and is itself checked into version control and tested in CI. The "evidence" is the log of every evaluation. The "attestation" is the green status of a pipeline. Drift is observable in real time because the control is the observation.
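As a concrete illustration, a continuously evaluated version of that control might look like the following minimal sketch. The `fetch_db_grants` helper, the approved-personnel list, and the control identifier are all hypothetical stand-ins for a real IAM integration; the point is the shape of the loop, not any particular product's API.

```python
import json
from datetime import datetime, timezone

# Illustrative allow-list; in practice this would itself live in
# version-controlled configuration alongside the rule.
APPROVED = {"alice", "bob"}

def fetch_db_grants():
    # Stand-in for a real IAM API call; a production version would
    # query the system of record on a schedule measured in minutes.
    return ["alice", "bob", "mallory"]

def evaluate_control():
    """Run one evaluation and return a structured event.

    The log of these events is the evidence; a failing status is the
    signal that drift has occurred.
    """
    grants = set(fetch_db_grants())
    violations = sorted(grants - APPROVED)
    return {
        "control_id": "AC-DB-PROD-1",  # hypothetical identifier
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "status": "fail" if violations else "pass",
        "violations": violations,
    }

if __name__ == "__main__":
    print(json.dumps(evaluate_control(), indent=2))
```

Because the rule is ordinary code, it can be unit-tested in CI like any other artefact, which is what makes the "attestation is a green pipeline" framing literal rather than metaphorical.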
Early adopters report that this shift moves the bulk of compliance effort from evidence collection to evidence curation. Audit preparation, historically a seasonal scramble, becomes a matter of querying an existing data store rather than assembling one from scratch. The effect on vendor-facing assessments, third-party risk reviews, and regulator inquiries follows the same pattern.
The differences extend beyond tooling into organisational structure. Traditional GRC sits within a second-line risk function, often reporting to the General Counsel or Chief Risk Officer, and interacts with engineering through ticketed requests. GRC Engineering, by contrast, is typically embedded, either as a specialised role within a platform or DevSecOps team, or as a hybrid function with dual reporting. Practitioner accounts consistently describe the defining shift as moving at least one engineer onto the compliance team whose primary output is code rather than documentation.
The movement is not without its sceptics, and the disagreement is worth taking seriously. Three critiques recur.
The first, advanced by Michael Rasmussen of GRC 20/20 Research, is that GRC Engineering risks re-inventing the audit trail without re-inventing governance. Automating evidence collection does not, by itself, improve the quality of risk decisions. It may simply produce more granular data for decisions that were already being made well or poorly. Rasmussen's position is that the engineering layer is necessary but insufficient, and that without investment in risk quantification (FAIR, Open FAIR) and in board-facing reporting, an automated program can still be strategically blind. His September 2025 collaboration with Fandi suggests this critique is increasingly being absorbed into the movement rather than standing outside it.
The second critique, from the assurance community, concerns auditor acceptance. While SOC 2 and ISO auditors have become comfortable with continuous evidence in principle, the practice of auditing a policy-as-code rule is still maturing. Attesting that the rule itself is correct, complete, and has been running unmodified over the audit period turns out to be its own discipline. Organisations that move faster than their auditors can sometimes find themselves producing evidence that is technically superior but procedurally harder to consume.
The third critique is cultural. Kelly Shortridge and others have argued that importing engineering practices without importing engineering culture produces the worst of both worlds: brittle automation maintained by people who do not own it, alongside compliance expectations that now assume a real-time cadence the organisation cannot actually sustain. The warning echoes earlier DevOps literature on "DevOps theatre" and is, in our reading, the most important caution in the current discourse.
For leaders deciding how to invest, the literature converges on a few practical recommendations.
Start where the evidence is already digital. Access reviews, configuration baselines, vulnerability management, and change control are the canonical first targets because the systems of record already produce structured data. The engineering work is integration, not instrumentation. Trying to engineer controls in domains that are still fundamentally human, such as vendor due diligence, policy exceptions, and conduct risk, is a later-stage problem and a common source of early failure.
Treat controls as products. The manifesto's emphasis on stakeholder-centric design is easy to dismiss as rhetoric, but the underlying point is load-bearing. A control has users (engineers, auditors, risk owners, regulators), and it succeeds or fails on whether those users can actually work with it. A control that is technically continuous but that no one trusts or understands is not meaningfully better than a quarterly spreadsheet.
Invest in the interchange layer. OSCAL, SCF, and the emerging generation of open control catalogues are not glamorous, but they are what allow an engineered program to survive vendor changes, framework additions, and acquisitions. The programs that appear most durable in the case-study literature are those that own their control content in a machine-readable form independent of any single GRC platform.
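To make the "machine-readable control content" point concrete, the sketch below flattens a much-simplified catalog fragment in the general shape of an OSCAL catalog (the real NIST OSCAL model is far richer; only group and control ids and titles are kept here, and the sample content is invented for illustration).

```python
import json

# A simplified, invented fragment shaped like an OSCAL catalog document.
CATALOG_JSON = """
{
  "catalog": {
    "groups": [
      {
        "id": "ac",
        "title": "Access Control",
        "controls": [
          {"id": "ac-1", "title": "Policy and Procedures"},
          {"id": "ac-2", "title": "Account Management"}
        ]
      }
    ]
  }
}
"""

def list_controls(catalog_json):
    """Flatten a catalog into (group id, control id, title) rows.

    Owning control content in a form like this is what lets a program
    re-map evidence pipelines when platforms or frameworks change.
    """
    catalog = json.loads(catalog_json)["catalog"]
    rows = []
    for group in catalog.get("groups", []):
        for control in group.get("controls", []):
            rows.append((group["id"], control["id"], control["title"]))
    return rows
```

The durability claim in the case-study literature rests on exactly this property: the rows survive a change of GRC platform, because they were never locked inside one.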
Measure the right thing. The metric that distinguishes a mature engineered program from a mature traditional one is not the number of controls automated. It is the mean time to detect control failure. A program that can answer "when did this control last fail, and for how long?" in seconds has crossed the threshold. A program that still answers that question with "we'll know at the next audit" has not, regardless of how much tooling sits underneath.
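The "when did this control last fail, and for how long?" question is answerable directly from a continuous evaluation log. The sketch below computes failure windows and total failure duration from a hypothetical log; the timestamps and five-minute cadence are illustrative.

```python
from datetime import datetime, timedelta

# Hypothetical evaluation log: (timestamp, status) events emitted by a
# continuously evaluated control, five minutes apart.
LOG = [
    (datetime(2026, 1, 1, 9, 0), "pass"),
    (datetime(2026, 1, 1, 9, 5), "fail"),   # drift introduced
    (datetime(2026, 1, 1, 9, 10), "fail"),
    (datetime(2026, 1, 1, 9, 15), "pass"),  # remediated
]

def failure_windows(log):
    """Return (start, end) spans during which the control was failing."""
    windows, start = [], None
    for ts, status in log:
        if status == "fail" and start is None:
            start = ts
        elif status == "pass" and start is not None:
            windows.append((start, ts))
            start = None
    if start is not None:  # still failing at the end of the log
        windows.append((start, log[-1][0]))
    return windows

def total_failure_duration(log):
    """Sum the failure windows; the raw material for MTTD-style metrics."""
    return sum((end - start for start, end in failure_windows(log)),
               timedelta())
```

A traditional program cannot run this query at all, because between audits no events exist; that absence, rather than any tooling gap, is what the metric exposes.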
The literature on GRC Engineering is young, with the discipline named and formalised only in late 2023, but it is converging on a consistent picture. Traditional GRC is a coordination discipline built around the audit calendar. GRC Engineering is a production discipline built around continuous signals. The two are not mutually exclusive, and the most credible practitioners frame engineering as an evolution of GRC rather than a replacement for it. Governance questions, risk appetite, and board-level judgement remain irreducibly human.
What has changed is the substrate. The question for every compliance leader in 2026 is no longer whether control evidence will become code. That question has been answered by regulators (DORA, the SEC cyber disclosure rule, the EU AI Act) who increasingly expect real-time posture reporting. The open question is how quickly a given program can make the transition without losing the governance maturity it has spent a decade building.
At Applied Verdict, our working hypothesis, drawn from this literature and from our own client engagements, is that the organisations that navigate this shift best will be those that resist framing it as a tooling decision. It is an operating-model decision, and the tooling follows.