Category: All Blog

Axipro, the cybersecurity and compliance consulting firm, and Kertos, the European compliance automation platform, have entered a strategic partnership that combines software automation with hands-on implementation support for organisations navigating Europe’s expanding regulatory regime. The agreement, effective April 1, 2026, names Axipro as an implementation partner for Kertos. Customers can now buy the Kertos platform through Axipro alongside consulting, implementation support, and broader compliance service packages spanning frameworks including GDPR, NIS2, DORA, the EU AI Act, ISO 27001, and SOC 2.

The partnership lands as European companies face mounting regulatory pressure. The NIS2 Directive pulled around 28,700 additional companies into scope when it replaced its predecessor in October 2024. DORA became fully applicable in January 2025, binding around 22,000 EU financial entities to a single ICT risk management framework with penalties of up to 2% of global turnover. The EU AI Act adds another layer, with compliance costs for SMEs running between €50,000 and €500,000 per organisation depending on use case.

What the partnership delivers

Under the agreement, Axipro sells, implements, and operates Kertos for customers as part of integrated service packages. The same partner that scopes the gap assessment, defines the control framework, and runs the implementation also configures and operates the platform that holds the evidence. Engagements no longer hand off between separate vendors.

For Kertos, the deal gives the platform deeper exposure to how compliance programmes run inside operating businesses, feeding back into product development. For Axipro, which already supports companies across more than 20 frameworks with services spanning penetration testing, internal audit, and end-to-end certification support, Kertos extends its offering with continuous evidence collection, control management, vendor management, and automated audit preparation.
“Our ambition at Kertos is to build the leading compliance automation platform in the market, one that doesn’t just simplify compliance but fundamentally redefines how companies achieve and maintain it,” said Dr. Kilian Schmidt, CEO of Kertos. “Strategic partnerships like the one with Axipro are a key part of that journey. By working closely with experienced compliance experts, we gain invaluable real-world insights that directly shape and accelerate our product development.”

Free migration to Kertos through Axipro

As part of the partnership, Axipro is offering free migration to Kertos for companies currently using another compliance or GRC platform. The migration covers transferring existing controls, evidence, policies, and vendor records into Kertos, with Axipro consultants handling the rebuild of framework mappings for ISO 27001, SOC 2, GDPR, NIS2, and other applicable standards. The aim is to remove the cost and disruption that typically deters companies from switching platforms mid-programme, even when their existing tooling no longer fits their regulatory scope.

DACH region as the starting point

Germany consistently leads European GRC adoption and accounts for the largest share of the region’s GRC platform market. It is also where regulatory pressure is sharpest right now, with the Federal Office for Information Security actively building out supervisory capacity ahead of the April 2026 NIS2 registration deadline for essential and important entities.

“Compliance is only as strong as the tools and partners behind it,” said Ali Hayat, CEO of Axipro. “Our partnership with Kertos gives our clients in the DACH region access to a powerful data privacy and compliance platform, backed by Axipro’s hands-on expertise.
Together, we make achieving and maintaining compliance seamless, faster, and more predictable for the businesses that need it most.”

Both companies framed the agreement as a foundation for deeper collaboration as customer needs and regulatory requirements continue to evolve.

About Axipro

Axipro is a cybersecurity and compliance consulting firm helping high-growth companies achieve and maintain regulatory certifications across more than 20 frameworks including SOC 2, ISO 27001, GDPR, and NIST. Services span penetration testing, internal audit, and end-to-end support for companies pursuing first-time certification or maintaining existing ones. Axipro has offices in the UK, the USA, and Bahrain.

About Kertos

Kertos is a compliance automation platform that helps companies operating in Europe meet and maintain compliance requirements for frameworks including ISO 27001, SOC 2, GDPR, and NIS2. By automating evidence collection, control management, vendor management, and audit preparation, Kertos enables organisations to build and maintain robust information security and data protection programmes without the manual overhead of traditional approaches.

Read the full press release here

When Abeera Zainab joined Axipro in early 2024, she quickly became more than just part of the delivery team—she became a driving force behind how compliance engagements are executed across the firm. Over the past few years, her role has naturally expanded. What began as hands-on involvement in compliance delivery has evolved into leading complex, multi-framework programs across diverse client environments. Today, Abeera operates at the centre of Axipro’s GRC function—overseeing engagements that span ISO 27001, ISO 27701, SOC 2, PCI DSS, GDPR, HIPAA, ISO 42001, and DORA, often managing multiple frameworks simultaneously within a single scope.

Her strength lies not just in understanding these standards, but in making them work together—bringing structure to complexity and helping organisations move toward audit readiness without unnecessary friction. This approach has translated into tangible results. Abeera has played a key role in maintaining Axipro’s 100% audit success rate across 40+ certified clients, with no failed audits to date, while consistently delivering a high level of client satisfaction. But what clients often highlight most isn’t just the outcome—it’s the experience of working with her.

Even in high-pressure situations—tight timelines, evolving scopes, or complex stakeholder environments—Abeera is known for her calm, structured, and transparent approach. She brings clarity where there is uncertainty, keeps engagements on track, and ensures that teams remain aligned from kickoff through to certification.

Her technical depth supports this delivery. Abeera holds the ISO/IEC 27001:2022 Lead Auditor certification (CQI/IRCA), the ISO/IEC 42001:2023 Lead Auditor certification, and the Drata Fundamentals Certification. Combined with more than three years of hands-on GRC experience, she brings both credibility and practical insight to every engagement. As GRC Lead, her focus extends beyond individual projects.
She takes ownership of delivery quality, contributes to the evolution of Axipro’s advisory methodology, and actively supports the development of the wider team. Her role sits at the intersection of execution and strategy—ensuring that every engagement not only meets compliance requirements but also strengthens the client’s overall security and governance posture.

At her core, Abeera’s work is about more than passing audits. It’s about building confidence—within client organisations, within delivery teams, and within the systems that support them. And that’s what makes her a trusted advisor in an increasingly complex compliance landscape.

On April 19, 2026, Vercel confirmed attackers had reached parts of its internal systems. The entry point was an infostealer infection on an employee’s laptop at Context.ai, a third-party AI platform, two months earlier. From that single compromised machine, an attacker moved through Google Workspace OAuth, into a Vercel employee’s account, and then into Vercel environments where customer environment variables were stored. This is the shape of a modern supply-chain breach, and it is worth understanding in detail.

What Vercel Has Confirmed

Vercel published a short security bulletin on April 19, 2026, stating that unauthorized access had affected a limited subset of customers. The company engaged external incident response experts and notified law enforcement. Hours later, CEO Guillermo Rauch outlined the attack chain on X: Context.ai was breached, a Vercel employee’s Google Workspace account was taken over through that breach, and the attacker then pivoted into Vercel’s internal environments. Incident responders from Mandiant were engaged alongside law enforcement, according to BleepingComputer’s reporting on the incident.

Rauch stated that Next.js, Turbopack, and Vercel’s open-source projects had been audited and remained safe, a direct response to claims circulating on a cybercrime forum that framed the incident as a potential Next.js supply-chain disaster. All core services, including deployments, the edge network, and the dashboard, continued to operate normally throughout the investigation. In the days following the disclosure, Vercel also rolled out dashboard updates, including an environment variable overview page and an improved UI for creating and managing sensitive variables.

The number of customers directly contacted has not been published, but Vercel has described the impact as quite limited. Customers not contacted have been told there is no current evidence their credentials or personal data were compromised.
The Initial Access: A Context.ai Infostealer Infection

According to cybercrime intelligence researchers, the likely origin of the breach was a Lumma infostealer infection on a Context.ai employee’s machine in February 2026, a full two months before Vercel’s public disclosure. Browser artifacts from the compromised device tell a familiar story: the user had been searching for and downloading Roblox auto-farm scripts and game exploit executors, a well-documented vector for Lumma stealer deployment. The stealer would have exfiltrated browser credentials, session cookies, and OAuth tokens.

Context.ai is an enterprise AI platform that builds agents on top of a customer’s institutional knowledge. To function, it integrates with Google Workspace and requests deployment-level OAuth scopes. As reported in detail by The Hacker News, once Context.ai’s credentials were in the hands of an attacker, that OAuth integration became a privileged foothold into any organization using the platform. Vercel’s investigation noted that the Context.ai OAuth app compromise potentially affected hundreds of users across many organizations, which makes the Vercel intrusion one downstream consequence of a broader supply-chain incident rather than a self-contained breach.

The attacker used the compromised integration to take over a Vercel employee’s Google Workspace account. From there, they pivoted into Vercel’s environment and began enumerating environment variables. Vercel offers customers the option to mark environment variables as sensitive, which encrypts them at rest and blocks them from appearing in the dashboard UI. Variables not marked sensitive were readable, and the attacker used that enumeration to extend access further.

Who Was Affected and What Was Accessed

Confirmed impact is narrower than the headlines suggest. Vercel has stated that customer environment variables marked as sensitive remain encrypted at rest and show no evidence of access.
The attacker did read environment variables not marked sensitive, and used those values for further escalation. Secondary reporting indicates that Vercel’s Linear and GitHub integrations bore the brunt of the attack. The attacker demonstrated detailed knowledge of Vercel’s internal systems and moved with high operational velocity, behavior that led Vercel to classify them as highly sophisticated. Whether any customer-owned repositories were accessed through these integrations has not been publicly established.

Separately, a threat actor using the ShinyHunters moniker listed what they described as Vercel internal data on BreachForums for USD 2 million, claiming to offer employee accounts, deployment access, source code, database content, GitHub tokens, and npm tokens. The same actor separately communicated a USD 2 million ransom demand via Telegram. Vercel has not confirmed any of these specifics, and Rauch’s public rebuttal focused on the claim that Next.js and related OSS release paths were compromised, which Vercel says they are not. Adding a further layer of doubt, members of the actual ShinyHunters group denied involvement when contacted by BleepingComputer, suggesting the listing may be a copycat or lone-actor operation trading on the group’s reputation.

Important: Treat the ShinyHunters listing as plausible but unverified. Plan your remediation against the confirmed scope, which is already broad enough to justify rotating Vercel-connected secrets, but do not quote forum claims to regulators, customers, or auditors as established fact.

Indicators of Compromise

Vercel published an OAuth application identifier tied to the Context.ai integration that Google Workspace administrators should search for in their own tenant:

110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

If that client ID appears in your Google Workspace OAuth app inventory, a Context.ai integration exists or existed within your environment.
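If your tenant's OAuth grant data has been exported for review, the search for that client ID reduces to a simple filter. The sketch below is illustrative, not an official tool: the record shape (a `client_id` field alongside `user` and `scopes`) is an assumption, so adapt the field names to whatever format your OAuth token audit export actually uses.

```python
# Sketch: triage exported Google Workspace OAuth grant records for the
# Context.ai client ID that Vercel published as an indicator of compromise.
# The dict shape below (user / client_id / scopes) is an assumed export
# format -- map it to your tenant's actual audit data before relying on it.

CONTEXT_AI_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def find_context_grants(grants):
    """Return only the grant records tied to the Context.ai OAuth app."""
    return [g for g in grants if g.get("client_id") == CONTEXT_AI_CLIENT_ID]

if __name__ == "__main__":
    # Hypothetical sample data for illustration only.
    sample = [
        {"user": "alice@example.com",
         "client_id": "some-other-app.apps.googleusercontent.com",
         "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
        {"user": "bob@example.com",
         "client_id": CONTEXT_AI_CLIENT_ID,
         "scopes": ["https://www.googleapis.com/auth/calendar"]},
    ]
    for hit in find_context_grants(sample):
        print(f"Triage user {hit['user']}: Context.ai grant, scopes {hit['scopes']}")
```

Any hit here puts the user into the closer-triage population described below; an empty result only means the export you searched contained no Context.ai grant.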
The presence of the integration is not proof your tenant was accessed, but it moves you into the population that needs closer triage. Review the OAuth grant scopes, any activity from the associated service account, and the audit logs for any user who authorized the application. Vercel has also contacted affected customers individually. If you have not received direct outreach, Vercel’s public position is that there is no present evidence your Vercel credentials were compromised.

What Vercel Customers Should Do Now

Rotate all non-sensitive environment variables across every Vercel project. Anything that is a secret — API keys, database credentials, signing keys, webhook secrets, third-party tokens — should be stored using the sensitive environment variable feature going forward. Rotate any such value that was stored as non-sensitive before April 19, 2026, on the assumption it may have been read.

Audit your Vercel activity logs for the period of April 17 through 19, 2026. Unexpected logins, environment variable reads, integration authorizations, or administrative actions during

A new version of the world’s most widely adopted quality management standard is on the way. The Draft International Standard (ISO/DIS 9001) was released on 27 August 2025, and ISO member bodies voted to approve it in December 2025. Final publication is targeted for September 2026, with a three-year transition window expected to follow. Over 1.3 million organizations worldwide currently hold ISO 9001 certification. For every one of them, understanding what is changing, and what is not, matters.

This guide covers the confirmed changes in the DIS, the full revision timeline, what the update means for currently certified organizations, and how to plan your transition. Whether you are managing an existing Quality Management System (QMS) or considering certification for the first time, this is what you need to know.

What Is ISO 9001:2026?

ISO 9001 is the international standard that defines requirements for a Quality Management System. Published by the International Organization for Standardization (ISO), it provides a framework organizations can use to consistently deliver products and services that meet customer and regulatory requirements, and to drive continual improvement. Certification to ISO 9001 is recognized in virtually every industry and country worldwide.

ISO 9001:2026 is the sixth edition of the standard. It succeeds ISO 9001:2015 and is being developed by ISO/TC 176/SC 2, the technical subcommittee responsible for quality management system standards. The revision is being drafted by Working Group 29 (WG 29), a body of international experts convened specifically for this purpose.

Why Is ISO 9001:2015 Being Revised?

ISO standards undergo a formal review cycle every five years. Member bodies assess whether a standard remains relevant, needs updating, or should be discontinued. After a 2020 user survey led the committee to confirm ISO 9001:2015 without revision, a 2023 re-evaluation by a new task force reversed that decision.
The conclusion: the world had changed enough since 2015 to warrant an update. Three broad forces are driving the revision.

The first is sustainability and climate change. ISO formally amended ISO 9001:2015 in February 2024, requiring organizations to consider climate change as part of their context analysis. That amendment is now being embedded directly into the body of the 2026 standard. The second is digital transformation. Since 2015, AI, IoT, cloud computing, and remote auditing have moved from emerging technologies to standard business practice. The standard needs to reflect that reality. The third is stakeholder expectations. Customers, employees, suppliers, and communities now expect organizations to operate transparently and ethically, not just efficiently.

The revision also reflects feedback from quality practitioners globally, who found certain parts of the 2015 standard, particularly the treatment of risks and opportunities, unclear in practice.

Current Status of the ISO 9001:2026 Revision

Draft International Standard (DIS)

The DIS was published on 27 August 2025, marking the first time the revised text was available to ISO member bodies for formal review and ballot. The voting period closed on 4 December 2025, with member countries approving the proposal. That approval is a significant milestone: it confirms the standard will be published and locks in the broad direction of the changes, though minor editorial refinements are still possible before final publication.
The DIS itself is not freely available, but its content has been widely discussed by national body experts, certification bodies such as DNV and Intertek, and quality management organizations globally. The picture of what is changing is now clear.

Final Draft International Standard (FDIS)

Following DIS approval, the working group addresses submitted comments before preparing the Final Draft International Standard (FDIS), expected in early 2026. This is typically a near-final text, with only minor adjustments possible at this stage. Once the FDIS is approved, the standard moves directly to publication.

ISO 9001:2026 Publication and Transition Timeline

Publication is targeted for September 2026. Following publication, the International Accreditation Forum (IAF) will establish the official transition timeline and accreditation requirements for certification bodies.

Important: The IAF has not yet formally confirmed the transition period. Based on precedent with previous major revisions, a three-year window is expected. Do not finalize your planning around any specific deadline until the IAF publishes its official transition rules after the standard is published.

Key Changes in ISO 9001:2026

The DIS confirms that ISO 9001:2026 is an evolutionary update, not a rebuild. The core requirements in Clauses 4 through 10 have changed modestly. The most significant additions appear in the non-mandatory Annex A, which has been substantially expanded to provide clearer implementation guidance. For organizations currently certified to ISO 9001:2015, the transition burden is expected to be manageable.

Ethics and Integrity Within Leadership

Clause 5.1.1 now explicitly requires top management to promote and demonstrate a culture of quality and ethical behavior. Previous editions required leadership commitment to the QMS, but the 2026 version makes quality culture and ethical conduct formal leadership responsibilities, not just implied expectations.
Clause 7.3 adds a corresponding requirement at the workforce level: employees must be aware of what quality culture and ethical behavior mean in their context. This pairs leadership obligation with organizational awareness, creating accountability at both ends of the organization.

Enhanced and Restructured Risk Management

Risk-based thinking has been part of ISO 9001 since 2015, but practitioners consistently reported that the standard did not give enough guidance on how to handle risks and opportunities differently. The 2026 revision addresses this directly. Clause 6.1 is restructured into sub-sections: 6.1.2 for actions to address risks, and 6.1.3 for actions to address opportunities. This is not just editorial. The separation forces organizations to treat opportunity management as a distinct planning activity, not simply the positive counterpart to risk. Many organizations with mature QMS processes had already made this distinction informally; the standard now makes it explicit.

Greater Emphasis on Stakeholder Engagement

Axipro has appointed Ikponke Godwin, CISM, as Principal Advisor. She joins from EY and brings a profile that is genuinely rare in this space: deep technical security experience and mature governance expertise, built in parallel rather than one after the other.

Ikponke spent over a year at EY as Senior Cybersecurity Consultant for West Africa, advising enterprise clients across security operations, GRC, and digital transformation. Before that, she spent nearly two years at PwC Nigeria as both Associate Cybersecurity Specialist and Penetration Tester, where she worked across vulnerability assessment, framework implementation, and technical risk analysis. Most recently, she completed a secondment at Flutterwave as GRC Analyst, where she led an enterprise gap assessment using NIST CSF 2.0, co-developed a risk-based remediation roadmap, built out risk taxonomies, and used Drata to automate compliance workflows across one of Africa’s most closely watched fintech platforms.

That combination of offensive security work and governance programme delivery is what makes her appointment significant. Many practitioners develop in one direction or the other. Ikponke has done both, and the combination shapes how she approaches client problems: starting with what could actually go wrong, not just what the framework says should be documented.

She holds the Certified Information Security Manager (CISM) designation and has hands-on experience across penetration testing, SIEM operations, third-party risk management, and security policy development. She has also worked as a Cybersecurity Instructor, which speaks to how clearly she can communicate complex topics to teams who are not security specialists.

Ikponke on joining Axipro: “What drew me here is the combination of ambition and precision. The firm is growing fast but has not traded rigour for speed.
I want to build on that.”

As Principal Advisor, Ikponke will work with enterprise and mid-market clients on complex compliance engagements, technical security assessments, and GRC transformation programmes. She is based in Lagos, Nigeria. We’re glad to have her.

Most organisations that fail their first ISO 27001 certification audit don’t fail because their security is lacking. They fail because they cannot demonstrate a systematic approach to managing it. ISO 27001:2022 is not a technology exercise. It is a governance framework, and getting certified requires your entire organisation to demonstrate that it manages information security systematically, continuously, and with documented intent.

This guide provides a practical, phase-by-phase roadmap to ISO 27001 implementation, covering everything from initial scoping to certification audit preparation. Whether you are building an ISMS from scratch or modernizing a legacy system, the structure below reflects how implementation actually works in practice.

The ISO 27001 Implementation Roadmap at a Glance

An ISO 27001 implementation roadmap is a structured project plan that takes an organization from its current security posture to certified compliance with ISO/IEC 27001:2022. The roadmap defines phases, deliverables, roles, and timelines, giving your team a clear line of sight from day one through to the certification audit.

The standard itself has two components. Clauses 4 through 10 define the mandatory management system requirements: context, leadership, planning, support, operations, performance evaluation, and improvement. Annex A provides a reference catalogue of 93 security controls, organised into four themes: organisational (37 controls), people (8 controls), physical (14 controls), and technological (34 controls). A well-structured roadmap addresses both components in a logical sequence, with risk driving every decision.

Prerequisites and Planning Before You Start

Define the Scope of Your ISMS

Scope definition is the single most consequential decision in the entire implementation. The scope should reflect the business units, locations, processes, and information assets that are most critical to your organization and most relevant to your customers and stakeholders. A well-defined scope document should identify the boundaries of the ISMS, the interfaces and dependencies with external parties, and any intentional exclusions, with justification for each. Auditors scrutinize scope boundaries carefully. Any exclusion that appears to cherry-pick convenient systems will attract challenge.

Form Your ISO 27001 Implementation Team

Three roles are non-negotiable: an executive sponsor with authority to allocate resources and enforce decisions; a project manager who owns the day-to-day implementation timeline; and an information security lead who understands both the technical controls and the documentation requirements. Larger organisations may also need departmental representatives from IT, HR, legal, and operations.

The most common implementation failure mode is assigning ISO 27001 entirely to the IT team. The standard requires evidence that security is embedded across the organisation. HR owns the people controls. Legal owns the contractual and regulatory requirements. Finance owns the asset valuation. If those functions are not engaged early, you will discover gaps at the worst possible time. If your organisation lacks in-house expertise, working with an experienced ISO 27001 consultant can bridge that gap efficiently.

ISO 27001 Implementation Roadmap: Phase-by-Phase Breakdown

Phase 1 (2 weeks): Foundation and Planning Phase

The first 14 days establish the governance foundation.
Key deliverables include a documented ISMS scope; an approved information security policy signed by top management; a defined organisational context covering internal and external issues, interested parties, and legal requirements; and a completed gap assessment that maps your current state against the standard’s requirements. From this list, the gap assessment is the most important document. It identifies which controls are already in place, which need to be built from scratch, and which exist informally but require documentation. Our gap analysis services are designed specifically for this phase, helping organisations cut through the ambiguity and get a clear remediation picture fast.

Phase 2 (2 weeks): Implementation Phase

The second 14 days focus on risk and documentation. Your team completes the formal risk assessment, identifies and values assets, maps threats and vulnerabilities, and determines risk levels against your defined risk appetite. From this, you produce a Risk Treatment Plan that specifies which risks will be mitigated, accepted, transferred, or avoided, and which Annex A controls address each risk.

The Statement of Applicability (SoA) is produced during this phase. It documents all 93 Annex A controls, the justification for including or excluding each one, and the current implementation status. The SoA is typically the first document an auditor requests. It connects your risk assessment to your control selection and demonstrates that your ISMS is risk-driven rather than checklist-driven.

Phase 3 (1 to 3 weeks): Audit and Approval

The final phase focuses on executing the controls, training staff, and preparing for audit. Technical controls from the risk treatment plan are deployed. Operational procedures are finalised and approved. Security awareness training is delivered to all staff. An ISO 27001 internal audit is conducted to identify nonconformities before the certification body arrives.
A management review is completed to demonstrate leadership engagement.

This 6-week timeline is achievable for most organizations with existing security foundations and dedicated implementation resources. Rushing the process to meet an arbitrary deadline is the leading cause of audit failures and certification theatre, a situation where documented controls exist only on paper and fall apart under auditor questioning. For a detailed breakdown of where implementations go wrong, see our guide on common pitfalls in ISO 27001.

6-Week Detailed Implementation Timeline

Week 1: Project Initiation

Secure executive sponsorship in writing. Establish the project team and define roles. Brief key stakeholders on the standard’s requirements and business case. Set up project governance, including a steering committee and regular status reporting.

Week 2: Define ISMS Scope and Context and Conduct Gap Assessment

Document the organisational context using Clause 4 requirements. Identify interested parties and their requirements. Define and document the ISMS scope boundary. Obtain approval from top management. Assess current security controls

Employer of Record (EOR) firms are often leaders in data security compliance, out of necessity. As the responsible party for payroll and HR data, the burden of SOC 2 compliance is greater for them than for most other companies. But SOC 2 compliance doesn’t have to be complicated. In this article, we’ll guide EOR firms through the process with an easy, step-by-step approach.

What Is SOC 2 Compliance and Why Does It Matter for EOR Providers?

Understanding SOC 2 and Its Role in Employer of Record Services

An Employer of Record processes payroll data, national identification numbers, bank account details, tax filings, and employment records for workers across dozens of countries. In a single month, a mid-sized EOR platform may handle more sensitive personal data than many healthcare organisations. That concentration of risk is precisely why SOC 2 compliance has moved from a nice-to-have to a procurement prerequisite for clients who take data security seriously.

SOC 2 is a security auditing framework developed by the American Institute of Certified Public Accountants (AICPA). It evaluates service organisations against a set of Trust Services Criteria covering security, availability, processing integrity, confidentiality, and privacy. Unlike prescriptive frameworks such as PCI DSS, SOC 2 does not mandate a specific list of controls. Instead, it requires organisations to demonstrate that the controls they have designed and implemented actually work.

For EOR providers, this flexibility is both useful and demanding. Useful because it allows controls to be tailored to the specific realities of multi-country payroll operations. Demanding because evidence of effective control operation must be documented and sustained continuously — not assembled in the weeks before an audit.

Why EOR Providers Are High-Value Targets for Data Security Risks

EOR platforms sit at a uniquely dangerous intersection of data sensitivity, operational scale, and third-party dependency.
They act as the legal employer in multiple jurisdictions, which means they hold the kind of data that attracts two distinct threats: financially motivated attackers looking for payroll and banking credentials, and regulatory enforcement bodies scrutinising how personal data crosses borders.

The attack surface is broad. EOR providers connect client company HR systems to local payroll engines, tax authorities, benefits administrators, and banking rails. Each integration is a potential entry point. A misconfigured API between an EOR platform and a client HRIS can expose employee records without any external attacker involved at all.

The regulatory exposure compounds the security risk. Under the GDPR alone, penalties for serious data breaches can reach €20 million or 4% of global annual turnover, whichever is higher. For an EOR operating in Europe, Southeast Asia, and Latin America simultaneously, the regulatory surface is enormous.

The Business Case for SOC 2 Compliance in the EOR Industry

Enterprise clients and their procurement teams increasingly require SOC 2 Type II certification before signing EOR contracts. A successful audit signals that an EOR provider has implemented and sustained effective security controls over time — not just designed them on paper. That distinction matters enormously in a market where a single data breach can destroy client relationships overnight.

SOC 2 compliance also de-risks the EOR provider itself. Organisations that have gone through the audit process typically discover and remediate control gaps they did not know existed. The internal discipline required to sustain a Type II audit programme produces a more operationally mature organisation, regardless of what any individual client requires.

Pro Tip: Type I vs Type II

In the EOR market, SOC 2 Type II has become the de facto security signal that enterprise procurement teams look for when vetting providers. Type I is no longer sufficient for most Fortune 1000 clients.
If an EOR is starting the compliance journey today, the goal should be Type II from the outset.

Which Trust Services Criteria Apply to EOR Providers?

Security (Common Criteria)

Security is the only mandatory Trust Services Criterion in a SOC 2 audit. It covers nine areas of control (CC1 through CC9) grounded in the COSO framework, spanning governance, risk management, access controls, system operations, change management, and incident response. For EOR providers, the security criterion is the foundation on which everything else sits.

Access control is particularly critical. EOR platforms grant dozens or hundreds of internal staff access to employee PII and payroll data, often differentiated by country and client. Multi-factor authentication, role-based access, and rigorous user provisioning and deprovisioning processes are baseline expectations for any SOC 2 auditor.

Availability

Availability assesses whether systems perform as expected and are accessible to users when required. For EOR providers, payroll processing is time-critical. A system outage on a payroll run date does not just affect internal operations — it directly impacts employees’ ability to receive pay on time, which creates legal exposure in many jurisdictions.

Availability controls for EOR providers should address capacity planning, disaster recovery, and system resilience. Demonstrable recovery time objectives and tested business continuity plans are the evidence auditors will want to see.

Confidentiality

Confidentiality applies to any information designated as confidential within the system, including client business information, employment contracts, salary benchmarking data, and any other data the EOR has committed to protect beyond basic legal requirements. It requires both clear data classification processes and active controls to prevent unauthorised disclosure.

EOR providers often hold confidential commercial information on behalf of multiple clients who may be competitors of one another.
Logical segregation of client data is therefore not only a security best practice but a direct requirement under the confidentiality criterion.

Processing Integrity

Processing integrity evaluates whether systems process data completely, accurately, in a timely fashion, and without unauthorised modification. This criterion is particularly relevant to payroll operations, where a calculation error can result in incorrect tax remittances, underpaid employees, or regulatory violations.

Input validation controls, reconciliation procedures, and audit trails that confirm payroll data moved accurately from source to payment are the core of a processing integrity programme for EOR platforms.

Privacy

Privacy goes beyond confidentiality to address how personal data is collected, stored, used, retained, and disclosed in line with the AICPA’s Generally Accepted Privacy Principles. It applies when an organisation collects

ISO 27001 does not use the words “penetration test” anywhere. And yet, auditors conducting Stage 2 assessments routinely expect to see one. Understanding why that gap exists, and how to close it, is what separates organizations that sail through ISO 27001 certification from those that get caught off-guard.

This guide covers what the standard actually says about security testing, which controls drive the expectation for penetration testing, what types of testing are relevant, and how to build a testing programme that genuinely supports your ISMS rather than simply ticking a compliance box.

What Is Penetration Testing in the Context of ISO 27001?

ISO 27001 penetration testing refers to structured, simulated attacks conducted against an organization’s systems, networks, and applications in order to identify exploitable vulnerabilities before real attackers do. In the context of ISO 27001, it serves a specific purpose: providing evidence that the technical controls underpinning your Information Security Management System (ISMS) actually work under real-world conditions.

The distinction matters. A vulnerability scan tells you what weaknesses exist, whilst a penetration test tells you whether those weaknesses are exploitable, to what degree, and with what consequence. That difference is exactly what auditors are looking for when they ask for testing evidence.

Penetration testing is not an isolated activity in an ISO 27001 programme. Its findings feed directly into three of the most scrutinised documents in your ISMS: the risk register, the risk treatment plan, and the Statement of Applicability (SoA). A risk listed in your register as “medium” looks very different once a tester has demonstrated they can chain it into a full domain compromise.

Is Penetration Testing a Requirement for ISO 27001?

No, it is not explicitly required. The standard does not mandate it by name.
What ISO 27001 does require is that organisations establish and maintain a functioning ISMS, perform systematic risk assessments (Clause 6.1.2), implement appropriate controls (Clause 8), evaluate the performance and effectiveness of those controls (Clause 9), and pursue continual improvement (Clause 10). Vulnerability assessment and penetration testing supports every one of those activities with hard evidence.

Two Annex A controls make it practically impossible to demonstrate compliance without some form of penetration testing: A.8.8 (Management of Technical Vulnerabilities) and A.8.29 (Security Testing in Development and Acceptance). Auditors conducting Stage 2 assessments will expect to see testing evidence mapped to both. Organisations that substitute a vulnerability scan report and call it done regularly receive non-conformances.

The absence of an explicit penetration testing requirement is sometimes misread as permission to skip it. In practice, certified auditors universally expect evidence of testing that goes beyond automated scanning. Relying solely on scan reports is the fastest route to a failed audit.

What ISO 27001:2022 Says About Security Testing

Annex A 8.29: Security Testing in Development and Acceptance

Annex A 8.29 requires organisations to define and implement security testing processes throughout the development lifecycle and before final acceptance of any system. This applies to both in-house development and outsourced or third-party software. The control is preventive in nature. Its purpose is to ensure that no application, database, or system goes into production with known, unmitigated vulnerabilities.

For in-house development, the standard specifically references conducting code reviews, performing vulnerability scans, and carrying out penetration tests to identify weak coding and design.
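That pre-acceptance expectation can be sketched as a simple evidence gate. This is an illustrative sketch, not wording from the standard; the `Release` structure and the evidence labels are assumptions, standing in for whatever release checklist your ISMS actually uses:

```python
from dataclasses import dataclass, field

# Evidence types the control references for in-house development:
# code review, vulnerability scan, penetration test.
REQUIRED_EVIDENCE = {"code_review", "vulnerability_scan", "penetration_test"}

@dataclass
class Release:
    name: str
    evidence: set[str] = field(default_factory=set)  # evidence collected so far

def acceptance_gate(release: Release) -> list[str]:
    """Return the security-testing evidence still missing before acceptance."""
    return sorted(REQUIRED_EVIDENCE - release.evidence)

r = Release("payments-v2", {"code_review", "vulnerability_scan"})
print(acceptance_gate(r))  # → ['penetration_test']
```

An empty result for a major release is the state an auditor would expect to see documented; anything else is a gap to close before go-live.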
For outsourced environments, organisations must set contractual requirements that ensure suppliers meet equivalent security testing standards; accepting a supplier’s assurance without evidence is not sufficient.

Annex A 8.29 does not prescribe specific tools or techniques. What it demands is that testing is risk-based, documented, and proportionate to the sensitivity and exposure of the system. A low-risk internal tool used by five people warrants a different level of scrutiny than a customer-facing payment platform. Security testing should scale with risk, and it should happen throughout development, not only at the end.

Worth knowing: Annex A 8.29 consolidates two controls from ISO 27001:2013, specifically A.14.2.8 (System security testing) and A.14.2.9 (System acceptance testing), into a single, clearer requirement. The 2022 version makes the expectation of penetration testing more explicit, particularly for major releases and architectural changes.

Auditors will ask to see signed penetration test reports or independent security audit summaries for recent major system updates. If such evidence does not exist, they have grounds to mark the control as non-compliant.

Annex A 8.8: Management of Technical Vulnerabilities

Annex A 8.8 is the vulnerability management control. It requires organisations to identify, assess, and address technical vulnerabilities in a timely manner, taking a proactive and risk-based approach rather than reacting only when something breaks.

Crucially, the control explicitly lists periodic, documented penetration tests, conducted either by internal staff or by a qualified third party, as a method for identifying vulnerabilities. Automated scanners have their place, but penetration tests are recognised here as the mechanism for discovering high-risk weaknesses that scanners routinely miss: logic flaws, chained vulnerabilities, privilege escalation paths, and misconfigurations that only become dangerous in combination.
Annex A 8.8 replaces two controls from ISO 27001:2013: A.12.6.1 (Technical vulnerability management) and A.18.2.3 (Technical compliance review). The 2022 version introduces a broader, more holistic approach, including the organisation’s public responsibilities, the role of cloud providers, and the expectation that vulnerability management is integrated with change management rather than treated as a separate activity.

The Role of Penetration Testing in ISO 27001 Compliance

Risk Assessment and Treatment

ISO 27001’s risk-based model sits at the core of everything. Penetration testing feeds that model with real-world evidence rather than hypothetical assumptions. When a tester demonstrates that an attacker can move laterally from a compromised workstation to a production database in four steps, that finding transforms what was previously a theoretical risk into a documented, evidenced vulnerability with a severity rating, an exploitability score, and a required remediation action.

This evidence directly informs how risks are treated. ISO 27001 requires organisations to choose one of four treatment options for each risk: mitigate, accept, avoid, or transfer. Without penetration test data, those decisions rest on estimation. With it, they rest on proof. If you haven’t yet mapped

In March 2026, a regional conflict in the Middle East did something that stress tests and tabletop exercises rarely manage to do: it took down cloud infrastructure across multiple availability zones at the same time, in the same region, without warning. AWS data centers in the UAE and Bahrain were impacted. Banking apps went offline. Payments failed. Delivery platforms stopped. And a significant portion of the affected organizations had done everything “right” by conventional standards — multi-AZ deployments, redundancy within the region, documented continuity plans. It wasn’t enough.

This article breaks down what happened, what it revealed about how most organizations think about availability, and what a more resilient architecture actually looks like. If your systems run on cloud infrastructure — in any region — this case is worth understanding closely.

What Happened: The March 2026 Incident

Regional conflict in the Middle East caused physical and infrastructural disruption to AWS facilities across the UAE and Bahrain. Based on publicly reported information, the incident involved power outages affecting data center operations, physical damage to infrastructure facilities, connectivity loss across affected environments, and service degradation spanning multiple availability zones within the same region — simultaneously.

That last point is the one that matters most. AWS designs its availability zones to be isolated from one another — separate power, cooling, and networking — so that a failure in one zone doesn’t cascade into another. Under normal failure conditions, that isolation holds. But this wasn’t a normal failure condition. It was a regional-scale disruption. The “rooms” were fine. The “building” was the problem.

“Availability zones are designed to handle localized failures, not regional ones.
This incident sits firmly in the second category.”

The result was that organizations with multi-AZ architectures — which many rightly considered robust — still went down. There was no in-region fallback left to use.

Business Impact: What Actually Went Offline

The impact was not subtle. Banking platforms experienced downtime that prevented customers from accessing accounts or completing transactions. Payment processors were unable to process transactions. Mobility and delivery platforms halted operations entirely. Customer-facing applications became unavailable across the board.

This wasn’t degraded performance or slower load times. It was a full loss of availability for any system that lived entirely within the affected region. The AWS Well-Architected Framework acknowledges that regional failures, while rare, are a defined risk category — and designing for them requires a fundamentally different approach than designing for AZ failures.

Organizations with multi-region architectures kept operating. Everything else stopped. That single architectural decision — single-region versus multi-region — was the difference between availability and a complete outage.

What Risks Actually Materialised

This incident didn’t create new risks. It exposed ones that were already there, quietly embedded in architectural choices and compliance assumptions that had never been stress-tested at this scale.

Regional Single Point of Failure

The most common pattern among affected organizations: applications, databases, and backups all deployed within a single region. When that region became unavailable, there was no secondary environment to take over. No warm standby, no traffic rerouting, no automated failover. Just downtime. This is the architectural equivalent of backing up your data to a drive sitting next to your laptop. It works until it doesn’t.
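That pattern is detectable from a basic asset inventory. A minimal sketch, assuming you maintain a mapping of critical services to deployment regions; the service names and regions below are illustrative, not taken from the incident:

```python
# Illustrative inventory: each critical service mapped to its deployment regions.
deployments: dict[str, set[str]] = {
    "payments-api":  {"me-central-1"},
    "core-database": {"me-central-1"},
    "backups":       {"me-central-1", "eu-west-1"},
}

def regional_single_points_of_failure(deployments: dict[str, set[str]]) -> list[str]:
    """Flag services whose every component lives in exactly one region."""
    return sorted(svc for svc, regions in deployments.items() if len(regions) == 1)

print(regional_single_points_of_failure(deployments))
# → ['core-database', 'payments-api']
```

Anything this check flags is a candidate for a second-region standby, or at minimum an explicit, signed-off risk acceptance.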
The Limits of Availability Zone Redundancy

Availability zones are a powerful tool — but they’re a tool designed for a specific class of failure, and understanding that class matters. Think of an availability zone as a separate floor in a building. If one floor has a problem, you move to another floor. But if the entire building loses power — or becomes inaccessible — floor redundancy doesn’t help. You needed another building entirely. That’s what a region is. And this incident took down the building.

Pro tip: When mapping your architecture against a business continuity plan, explicitly define your regional failure scenario. “What happens if this entire region becomes inaccessible for 24 hours?” is a question that exposes gaps that AZ-level planning will never catch.

Infrastructure-Level Disruption Is Not Solvable at the Application Layer

Power outages. Connectivity loss. Physical damage. These are not conditions that clever application architecture can work around if your infrastructure is entirely contained within the affected geography. No amount of microservices design, caching strategy, or auto-scaling helps when there’s no power reaching the data center. This is an important framing shift for engineering teams who own availability: some failure modes require infrastructure-layer responses, not code-layer ones.

The Compliance Gap: Controls on Paper vs. Controls in Practice

Perhaps the most uncomfortable implication of this incident. In many environments — particularly those undergoing ISO/IEC 27001:2022 certification or SOC 2 audits — availability controls are documented but don’t reflect the actual system architecture. Redundancy is listed as a control. It’s just redundancy within a single region, which, as this event demonstrated, is insufficient for regional-scale disruptions. The control passes an audit. It fails a real incident. This is the exact gap that compliance frameworks are designed to close — and that audit processes sometimes fail to catch.
Cloud Hosting and SOC 2 Compliance Requirements

Choosing AWS or Azure doesn’t hand you SOC 2 compliance. It hands you a shared responsibility model, which means your provider secures the physical infrastructure and you secure everything running on top of it — including whether your architecture can actually deliver on your availability commitments. Auditors know this distinction well. When they evaluate your Availability criteria, they’re looking at your controls, not your provider’s SOC 2 report.

What that means in practice: your recovery objectives need to be real numbers tied to a real architecture, not placeholders in a policy document. Your failover plan needs test records behind it. And your cloud provider should appear in your vendor risk register with an annual review of their own audit reports. A single-region deployment with no tested failover isn’t compliant in any meaningful sense. It’s a documentation exercise waiting to be disproved.

The March 2026 incident made this concrete. Organizations that had documented availability controls but confined their entire infrastructure to

Around 2019, the DoD identified a problem. Contractors were self-attesting to NIST SP 800-171 compliance, signing off on security postures that, in many cases, existed only on paper. Sensitive defense information was leaving the supply chain through vulnerabilities that everyone had technically promised to close. That failure gave rise to CMMC, and understanding how these two frameworks relate, where they overlap, and where they diverge is now a contractual necessity for every organization in the Defense Industrial Base. This guide cuts through the confusion and provides a precise, current account of how CMMC 2.0 and NIST SP 800-171 compare and coexist.

What Is NIST SP 800-171?

NIST Special Publication 800-171 is a set of cybersecurity requirements developed by the National Institute of Standards and Technology for the protection of Controlled Unclassified Information (CUI) in non-federal information systems and organizations. It was first published in 2015 and most recently updated with Revision 3 in May 2024.

The framework covers 14 families of security requirements in its current Revision 2 form, spanning access control, audit and accountability, incident response, configuration management, identification and authentication, and more. Revision 3 restructures this into 17 families, reducing the number of top-level requirements from 110 to 97 while introducing three new domains: Planning, System and Services Acquisition, and Supply Chain Risk Management. Do not let the lower requirement count mislead you. According to NIST, Revision 3 increases the number of determination statements, the specific verification actions required during an assessment, by 32 percent.

NIST 800-171 is not a certification. It is a compliance standard built on a self-assessment model. Organizations determine their own score, document it in their System Security Plan (SSP), and report it to the DoD’s Supplier Performance Risk System (SPRS).
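The scoring mechanics behind that SPRS number are simple arithmetic: under the DoD Assessment Methodology, a score starts at 110 and each unimplemented requirement deducts a weighted value of 1, 3, or 5 points. A sketch of that calculation; the gap list and the per-requirement weights shown are illustrative, not an authoritative mapping:

```python
MAX_SCORE = 110  # one scored item per NIST SP 800-171 Rev. 2 requirement

def sprs_score(gaps: dict[str, int]) -> int:
    """Start at 110 and deduct each unimplemented requirement's weight (1, 3, or 5)."""
    assert all(weight in (1, 3, 5) for weight in gaps.values()), "invalid weight"
    return MAX_SCORE - sum(gaps.values())

# Illustrative gaps: 800-171 requirement IDs with example deduction weights.
print(sprs_score({"3.5.3": 5, "3.1.1": 5, "3.13.11": 1}))  # → 99
```

A fully implemented programme scores 110; because high-impact gaps deduct 5 points each, a handful of missing controls can drag an organization far below it.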
That self-reporting architecture is precisely what CMMC was designed to fix.

Worth Knowing: NIST SP 800-171 applies broadly across federal contracting, not just the DoD. Any non-federal organization handling CUI in support of a federal agency, including NASA, GSA, and others, may be required to comply. CMMC, by contrast, is exclusively a DoD program.

What Is CMMC 2.0?

The Cybersecurity Maturity Model Certification is the Department of Defense’s formal certification program for cybersecurity compliance across the Defense Industrial Base. CMMC 2.0 was finalized in October 2024 and became effective December 16, 2024, with enforcement rolling out in phases through 2028.

Where NIST 800-171 describes what security controls an organization should implement, CMMC adds a verification layer: it requires that compliance be independently confirmed before a contract is awarded. CMMC uses a three-level maturity model, with each level corresponding to the sensitivity of the data handled and the rigor of the required assessment.

CMMC is enforced through DFARS clause 252.204-7021. Phase 1 of the rollout began November 10, 2025, and the DoD estimates that approximately 65 percent of the Defense Industrial Base will be affected. Major primes including Lockheed Martin and Boeing have already issued directives requiring CMMC documentation from their supply chains, in some cases ahead of official DoD deadlines.

How CMMC and NIST 800-171 Connect

CMMC 2.0 does not replace NIST 800-171. It is built on top of it. CMMC Level 2, the level most defense contractors will encounter, directly mirrors the 110 requirements in NIST SP 800-171 Revision 2. CMMC Level 3 extends that baseline by adding 24 enhanced requirements drawn from NIST SP 800-172.

Think of it this way: NIST 800-171 is the technical standard, and CMMC is the auditing and enforcement mechanism. Implementing 800-171 is a prerequisite for CMMC Level 2 certification.
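That layered relationship reduces to a small lookup. A sketch restating the mapping in data form; the phrasing of each entry is ours, not official CMMC terminology:

```python
# Which baseline each CMMC 2.0 level draws on, and who performs the assessment.
CMMC_LEVELS = {
    1: ("FCI safeguarding practices", "annual self-assessment"),
    2: ("NIST SP 800-171 Rev. 2 (110 requirements)",
        "C3PAO (self-assessment for some contracts)"),
    3: ("Level 2 baseline + 24 enhanced requirements from NIST SP 800-172",
        "DIBCAC (government-led)"),
}

def assessment_route(level: int) -> str:
    """Summarise the baseline and assessor for a given CMMC level."""
    baseline, assessor = CMMC_LEVELS[level]
    return f"Level {level}: {baseline}; assessed via {assessor}"

print(assessment_route(2))
```

The point the table-as-code makes: the standard (column one) never changes between levels 2 and 3 except by addition, while the assessor (column two) is what actually escalates.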
The critical difference is that 800-171 compliance is self-declared, while CMMC compliance is independently verified. Both frameworks require a System Security Plan and a Plan of Action and Milestones (POA&M) for identified gaps. Assessment results from third-party or government-led CMMC assessments are recorded in eMASS, the DoD’s Enterprise Mission Assurance Support Service, while self-assessment results continue to be recorded in SPRS.

Key Differences Between CMMC and NIST 800-171

| Attribute | NIST SP 800-171 | CMMC 2.0 |
| --- | --- | --- |
| Purpose | Technical standard for CUI protection | Certification program verifying CUI protection |
| Who It Applies To | Any non-federal entity handling CUI | DoD contractors and subcontractors handling FCI or CUI |
| Maturity Levels | None, flat set of 110 requirements | Three levels (Foundational, Advanced, Expert) |
| Assessment Model | Self-assessment and self-attestation | Self-assessment (L1), C3PAO (L2), DIBCAC (L3) |
| Where Results Are Recorded | SPRS | SPRS (self-assessments), eMASS (C3PAO/DIBCAC) |
| POA&M Restrictions | No closure deadline or item limit | Limited open items; must close within 180 days |
| Contract Consequence | Contractually required; limited enforcement mechanism | Required for contract award; False Claims Act exposure |
| Current Revision in Use | Rev. 2 (CMMC use); Rev. 3 published May 2024 | Aligned to Rev. 2 for Level 2 assessments |
| Cloud Requirements | FedRAMP Moderate equivalent minimum | FedRAMP Moderate (L2); FedRAMP High (L3) |
| Applies to Non-DoD Agencies? | Yes | No, DoD only |

Is Compliance Mandatory?

Both frameworks are contractually required for DoD contractors handling CUI through the DFARS 252.204-7012 clause. The critical difference is consequence. NIST 800-171 compliance has been contractually required for years, but the self-attestation model created minimal accountability. CMMC adds teeth: without the required certification level, organizations cannot be awarded or retain DoD contracts.
Under the False Claims Act, falsely certifying CMMC compliance can expose both the organization and signing individuals to treble damages.

Does It Use a Maturity Model?

NIST SP 800-171 does not use a maturity model. It presents a flat set of requirements that either are or are not implemented. CMMC structures compliance into three ascending levels, with each level carrying specific assessment requirements and targeting a different category of sensitive information.

Does It Require a Third-Party Assessor?

NIST 800-171 is self-assessed. CMMC Level 1 is also self-assessed annually. For CMMC Level 2, the picture is more complex: some contracts allow self-assessment, but most high-priority contracts require assessment by a Certified Third-Party Assessment Organization (C3PAO). CMMC Level 3 requires a direct audit by the Defense Industrial Base Cybersecurity Assessment Center (DIBCAC), a government body.

Scope: What Data Does Each Framework Protect?

Both frameworks center on CUI protection, but there is an