THE PLAYBOOK · CHAPTER 1
SAP S/4HANA Audit Modernization
Real-time detection using the capabilities your ERP already provides. Fiori configuration, BIS deployment, and a monitoring workflow designed to run without constant analyst intervention.
Key takeaways -- read this first
- S/4HANA's in-memory architecture enables real-time monitoring that was not technically feasible on older ERP platforms. The question is not whether the capability exists -- it is whether audit has activated it.
- Fiori Custom Queries require no additional license and no IT development. They are the fastest path to population-level data for any audit team that has not yet established direct data access.
- BIS is frequently licensed and dormant. Activating it requires IT configuration effort, not procurement. Starting with three to five rules in high-risk areas produces immediate return without the complexity of a full deployment.
- The automation principle applies to all monitoring: automation changes what audit judgment is applied to, not whether it is applied. Every exception that surfaces from a monitoring routine requires human judgment to evaluate, prioritize, and resolve.
What Changes With S/4HANA
Three dimensions of change that directly affect what audit can do and what it needs to do differently.
The Monitoring Workflow
Four steps that define how continuous monitoring operates -- from data access through exception closure.
Step 1 -- Data access
- Identify the Fiori apps relevant to the target process area -- AP monitoring starts with F0718 (Manage Journal Entries) and the Supplier Invoice List app.
- Confirm read-only authorization: F-KO (display vendor documents), F-LF (display accounts payable), and BSEG/BKPF display access at minimum.
- Validate completeness: run a population query for the last completed period and reconcile the count to the GL account balance. If counts do not reconcile, escalate before building monitoring logic on top of incomplete data.
Step 2 -- Threshold design
- Set thresholds based on the prior period distribution of the underlying metric -- not on an arbitrary dollar amount or percentage. A threshold that works for a $50M AP ledger will not work for a $500M one.
- Test the threshold against historical data before going live. Count how many exceptions the rule would have generated in the last six months. If that count is more than the team can review in a week, tighten the threshold or add a secondary filter.
- Document the threshold and the rationale in the monitoring program file. Thresholds that are not documented will be forgotten and cannot be defended.
Step 3 -- Review workflow
- Assign a primary reviewer and a backup for each monitoring routine. If only one person can review exceptions, the routine stops when that person is unavailable.
- Define the escalation trigger explicitly: what makes an exception a finding rather than a false positive? Document this threshold in the monitoring program file so any reviewer can apply it consistently.
- Set a maximum review cycle length -- typically one to two weeks for most routines. Exceptions that age beyond the review cycle without disposition are a workflow failure, not a finding volume problem.
Step 4 -- Exception disposition
- Use a consistent format for exception disposition: exception ID, detection date, reviewer, disposition category, disposition date, and notes. A spreadsheet works at early stages. A ticketing system is better as volume grows.
- Review the false positive rate at the end of each quarter. A false positive rate above 40 percent means the threshold needs to be tightened. A rate below 5 percent may mean the rule is drawn so tightly that it is missing real exceptions along with the noise.
- Document findings separately from false positives and use the finding record to feed the annual audit risk assessment. Monitoring findings are evidence of where control gaps actually exist -- not just where they might exist.
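The threshold-design steps above can be sketched in code. This is an illustrative outline, not a prescribed method -- the 99th percentile, the 26-week backtest window, and the capacity figure are all assumptions to replace with your own:

```python
def calibrate_threshold(prior_period_amounts, percentile=99):
    """Set the exception threshold at a high percentile of the prior-period
    distribution rather than at an arbitrary dollar amount."""
    ranked = sorted(prior_period_amounts)
    idx = min(len(ranked) - 1, int(len(ranked) * percentile / 100))
    return ranked[idx]

def backtest(historical_amounts, threshold, weekly_review_capacity):
    """Count the exceptions the rule would have generated over roughly six
    months (26 weeks) and check the volume against review capacity."""
    hits = [a for a in historical_amounts if a > threshold]
    return len(hits), len(hits) / 26 <= weekly_review_capacity

# Illustrative prior-period invoice amounts
prior = [60, 90, 100, 150, 250, 300, 400, 700, 800, 1200, 5000, 20000]
threshold = calibrate_threshold(prior)
hit_count, fits_capacity = backtest(prior, threshold, weekly_review_capacity=10)
```

A rule that backtests to zero hits is a signal to loosen the percentile: the point of the backtest is to land somewhere between zero and the team's weekly review capacity before the rule goes live.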
Fiori for Audit
The Fiori apps relevant to audit work, what they access, and how to configure a monitoring workspace without IT development.
Fiori is the browser-based interface layer for S/4HANA. For audit, the relevant apps are the ones that provide read access to transaction data -- the AP, GL, purchasing, and inventory apps that surface the population-level data underlying each use case in the audit universe. The key distinction is between analytical apps (which display aggregated data) and fact sheet and list apps (which display transaction-level records).
Audit-relevant Fiori apps do not require custom development. They require role configuration -- specifically, the addition of appropriate read-only business roles to the audit user's profile. This is an IT configuration task, not a development task. The conversation with IT is about role assignment, not about building anything new. Most of the apps described below are standard Fiori apps delivered with S/4HANA and activated as part of the standard implementation.
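Standard list apps sit on top of OData services, and the same read roles generally govern both, so a population-level pull can be scripted once access is granted. As a sketch, the following builds an OData v2 query URL in Python -- the host, service name, entity, and field names are placeholders to confirm with IT, not actual values for any given system:

```python
from urllib.parse import urlencode, quote

# Placeholder host and service path -- the real service name for a given app
# and release must be confirmed with IT (app-to-service mappings vary).
BASE = "https://s4-host.example.com/sap/opu/odata/sap/JOURNAL_ENTRY_SRV"

def journal_entry_query(company_code, date_from, date_to):
    """Build an OData v2 query URL for a population-level journal entry pull."""
    filt = (f"CompanyCode eq '{company_code}' "
            f"and PostingDate ge datetime'{date_from}' "
            f"and PostingDate le datetime'{date_to}'")
    params = urlencode({
        "$filter": filt,
        "$select": "AccountingDocument,PostingDate,AmountInCompanyCodeCurrency",
        "$inlinecount": "allpages",  # total row count, for the completeness check
        "$format": "json",
    }, quote_via=quote)
    return f"{BASE}/JournalEntryItems?{params}"

url = journal_entry_query("1000", "2024-01-01T00:00:00", "2024-01-31T23:59:59")
# In practice: fetch with the audit user's read-only credentials,
# e.g. requests.get(url, auth=...).
```

The $inlinecount total supports the completeness step in the workflow above: reconcile the returned count to the GL balance before building any monitoring logic on the extract.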
BIS in Practice
Business Integrity Screening -- what it does, how rules are configured, what the exception workflow looks like, and why starting with three to five rules is the right approach.
BIS is SAP's native anomaly detection module. It runs continuously against transaction data using configurable detection rules and machine learning models. Unlike Fiori, which requires an analyst to run a query, BIS runs on a schedule and delivers exceptions to a review queue automatically. The analyst reviews what BIS surfaces rather than running queries to surface it.
BIS is frequently licensed as part of S/4HANA and not activated. Activation requires IT to configure the BIS system connection, define the data scope (which company codes, which transaction types), and set up the alert queue. This is a configuration task measured in days, not a development project measured in months. The question to ask IT is whether BIS is licensed -- not whether it can be configured.
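BIS rules themselves are configured inside the system, not written in application code. To make the shape of a detection rule concrete, here is a Python sketch of the logic a typical duplicate-invoice rule encodes -- the field names and the five-day window are illustrative assumptions, not BIS syntax:

```python
from datetime import date
from itertools import combinations

def duplicate_invoice_exceptions(invoices, window_days=5):
    """Flag invoice pairs with the same vendor and amount posted within
    window_days of each other -- a classic duplicate-payment pattern."""
    flagged = []
    for a, b in combinations(invoices, 2):
        if (a["vendor"] == b["vendor"]
                and a["amount"] == b["amount"]
                and abs((a["date"] - b["date"]).days) <= window_days):
            flagged.append((a["doc"], b["doc"]))
    return flagged

# Illustrative invoice records
invoices = [
    {"doc": "5100001", "vendor": "V-100", "amount": 12500.00, "date": date(2024, 3, 1)},
    {"doc": "5100044", "vendor": "V-100", "amount": 12500.00, "date": date(2024, 3, 4)},
    {"doc": "5100078", "vendor": "V-200", "amount": 980.00, "date": date(2024, 3, 2)},
]
exceptions = duplicate_invoice_exceptions(invoices)  # one flagged pair
```

A rule of this shape would be one of the three to five starting rules; the difference under BIS is that it runs on a schedule and delivers its hits to the review queue rather than waiting for an analyst to run it.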
The Automation Principle
Continuous monitoring does not replace audit judgment. It changes what audit judgment is applied to. In a manual testing environment, audit judgment is applied at every step: which vendors to test, which invoices to pull, which transactions to examine. Most of that judgment is spent on selection rather than on evaluation.
In a monitoring environment, the selection problem is addressed by the detection rule. BIS identifies which transactions meet the anomaly criteria. The auditor applies judgment to the exception list: is this a finding? Is it a known condition that has been documented? Is it a false positive that suggests the rule threshold needs adjustment? The judgment is the same quality -- it is applied to a much better-curated set of transactions.
This distinction matters for two reasons. First, it addresses the concern that monitoring is auditing by algorithm -- that it removes professional judgment from the process. It does not. It compresses the time spent on selection and expands the time available for evaluation. Second, it defines what a monitoring program needs to produce: not a list of anomalies, but a list of anomalies that has been reviewed by a person who applied a documented judgment standard to each one.
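One way to make the documented judgment standard operational is to require a disposition record for every exception before it leaves the queue. A minimal sketch, using the disposition fields described in the workflow section (the category names are illustrative):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative disposition categories: "finding" feeds the risk assessment,
# "false_positive" feeds threshold tuning.
CATEGORIES = {"finding", "known_condition", "false_positive"}

@dataclass
class ExceptionRecord:
    exception_id: str
    detection_date: date
    reviewer: Optional[str] = None
    category: Optional[str] = None
    disposition_date: Optional[date] = None
    notes: str = ""

def quarterly_review(queue):
    """Split exceptions into dispositioned and still-aging, and compute the
    false positive rate that drives threshold tuning."""
    done = [e for e in queue if e.category in CATEGORIES]
    aging = [e for e in queue if e.category is None]
    fp_rate = (sum(e.category == "false_positive" for e in done) / len(done)
               if done else 0.0)
    return done, aging, fp_rate

queue = [
    ExceptionRecord("X1", date(2024, 1, 5), "reviewer_a", "false_positive", date(2024, 1, 9)),
    ExceptionRecord("X2", date(2024, 1, 7), "reviewer_a", "finding", date(2024, 1, 12)),
    ExceptionRecord("X3", date(2024, 2, 1)),  # no disposition yet -- still aging
]
done, aging, fp_rate = quarterly_review(queue)
```

The findings feed the annual risk assessment, the false positive rate is compared against the quarterly bands described in the workflow section, and anything on the aging list is a workflow failure to fix -- each output maps to a human decision, not an automated one.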