FOR PRACTITIONERS · STEP 3 OF 5

AI Toolkit for Audit Practitioners

Practical tools for using AI in real audit work -- prompts, patterns, and judgment frameworks you can use today.

Key takeaways -- read this first
  • A prompt library is most useful when it is structured around audit workflow stages rather than abstract AI categories. These prompts are organized by when in an engagement you would use them.
  • The prompts here are starting points. The most valuable version of any prompt is the one you have modified to reflect your specific system environment, audit area, and evidence standard.
  • AI-assisted work requires the same evidence trail as any other audit work. The toolkit section on documentation standards covers the specific habits that keep AI-assisted output defensible.
  • The judgment calls -- significance, materiality, root cause, and finding severity -- remain yours. AI accelerates the work around those calls. It does not make them.
Prompts · 20 · Filterable by workflow stage
Copy to clipboard · One click · Use directly in any AI tool
Workflow stages · 4 · Planning · Fieldwork · Docs · Comms
Reading time · 15 min

The Prompt Library

Twenty prompts organized by audit workflow stage. Use the filter to find what you need. Modify each prompt to match your specific system environment and audit area.

Risk & Planning
Risk Identification
Use when starting an engagement in an unfamiliar area or when you want a structured risk list to pressure-test your own thinking.
I am planning an internal audit of [describe the business process or area]. The organization is a [describe industry/type, e.g., large electrical distribution company] using [describe the primary system, e.g., SAP S/4HANA]. Identify the top 8 audit risks for this area, organized by likelihood and potential impact. For each risk, briefly describe what a control failure looks like and what data or evidence would indicate the risk has materialized.
Risk & Planning
Engagement Scope Design
Use when you have a risk area identified and need to translate it into a structured engagement scope.
I need to design the scope for an internal audit engagement covering [describe the risk area]. The audit team has [describe data access level, e.g., direct read access to SAP transaction tables]. Draft an engagement scope statement that includes: the objective, the population and time period, the primary test approaches, the data sources required, and the key deliverables. Keep it concise enough to fit in a planning memo.
Risk & Planning
Prior Finding Pattern Analysis
Use before planning a new engagement in an area with prior audit history to identify systemic themes rather than treating each finding as isolated.
Here is a list of prior audit findings related to [describe the process area]: [paste finding titles or brief descriptions]. Analyze these findings for systemic patterns. Identify: (1) whether findings cluster around specific process steps, control types, or organizational units; (2) whether any findings suggest a root cause that was not fully addressed; and (3) what a new engagement in this area should prioritize to move beyond surface-level symptoms.
Risk & Planning
Interview Question Generator
Use when preparing for an audit interview -- especially in a new process area or with a technical stakeholder.
I am preparing to interview [describe the role, e.g., the accounts payable manager] about [describe the process, e.g., the invoice approval and payment process]. The audit objective is to understand [describe what you are evaluating, e.g., whether three-way match exceptions are being reviewed and resolved appropriately]. Generate 10 targeted interview questions that: (1) build understanding of how the process actually works; (2) surface where exceptions occur and how they are handled; and (3) identify whether controls that exist on paper are operating in practice. Avoid yes/no questions.
Fieldwork
Exception Investigation Framework
Use when you have identified an anomaly in data and need a structured framework for investigating it before drawing conclusions.
I have identified the following anomaly in [describe the data source and process area]: [describe the specific anomaly]. Before concluding this is a control issue, I want to consider all plausible explanations. Generate a structured list of: (1) benign explanations that would make this anomaly acceptable; (2) control failure explanations that would make it a finding; and (3) the specific evidence I would need to rule each explanation in or out.
Fieldwork
Population-Level Test Design
Use when you want to convert a risk hypothesis into a data test design with specific data requirements.
I want to design a population-level data test for the following audit risk: [describe the risk, e.g., sales orders billed at a price lower than the applicable condition record with no authorized exception on file]. The system is [describe the ERP, e.g., SAP S/4HANA]. Describe: (1) the specific data fields and tables I would need; (2) the logic for the test -- what comparison or filter would identify exceptions; (3) what a true positive looks like versus a false positive I should filter out; and (4) what the result set should contain to support a finding.
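To make that concrete, here is a minimal pandas sketch of the shape such test logic can take once the prompt has helped you pin down the fields -- assuming a flat extract where each billing line is already joined to its applicable condition record. Every column name here is an illustrative stand-in, not a real field.

    import pandas as pd

    # Hypothetical flat extract: one billing line per row, already joined
    # to the applicable condition record. All column names are illustrative.
    lines = pd.read_csv("billing_lines_with_conditions.csv")

    # Exception logic: billed below the condition-record price with no
    # authorized exception on file.
    exceptions = lines[
        (lines["billed_unit_price"] < lines["condition_unit_price"])
        & (lines["exception_approval_id"].isna())
    ].copy()

    # Quantify the exposure so the result set can support a finding.
    exceptions["underbilling"] = (
        exceptions["condition_unit_price"] - exceptions["billed_unit_price"]
    ) * exceptions["quantity"]

    print(len(exceptions), "potential exceptions,",
          round(exceptions["underbilling"].sum(), 2), "total underbilling")

The true-positive versus false-positive distinction from the prompt shows up in that second filter: an exception with an approval on file is exactly the kind of row you filter out before counting.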
Fieldwork
SAP Table and Field Explainer
Use when you encounter an SAP table or field name you do not recognize and need to understand what it represents before querying it.
I am working in SAP S/4HANA and encountered the following table or field: [paste the table name, field name, or both -- e.g., VBRK-NETWR or table BSEG]. Explain in plain language: (1) what this table or field stores; (2) which business process it belongs to; (3) how it connects to other key tables in that process; and (4) what audit-relevant information it typically contains in an electrical distribution business context.
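As a concrete illustration of how these tables connect: VBRK holds billing document headers and VBRP the line items, linked by the billing document number VBELN. A minimal pandas sketch of that join, assuming both tables have already been extracted to flat files with SAP field names preserved:

    import pandas as pd

    # Assumes VBRK (billing headers) and VBRP (billing line items) were
    # extracted to flat files with SAP field names preserved.
    vbrk = pd.read_csv("VBRK.csv")  # one row per billing document
    vbrp = pd.read_csv("VBRP.csv")  # one row per billing line item

    # VBELN (billing document number) links items to their header.
    billing = vbrp.merge(
        vbrk[["VBELN", "FKDAT", "NETWR", "WAERK"]],  # date, net value, currency
        on="VBELN", how="left", suffixes=("_item", "_header"),
    )

    # Sanity check before testing: line-item net values should roll up
    # to the header net value for each document.
    rollup = billing.groupby("VBELN").agg(
        item_total=("NETWR_item", "sum"),
        header_total=("NETWR_header", "first"),
    )
    mismatched = rollup[(rollup["item_total"] - rollup["header_total"]).abs() > 0.01]
    print(len(mismatched), "documents where items do not reconcile to the header")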
Fieldwork
Legacy Procedure Redesign
Use when you have an existing sample-based audit procedure that you want to convert into a population-level analytics test.
Here is an existing audit procedure from a prior workpaper: [paste the procedure description]. This procedure currently uses statistical sampling. Redesign it as a population-level analytics test. Describe: (1) the data source and fields required; (2) the extraction and filtering logic; (3) the threshold or criteria that defines an exception; (4) what the output dataset should look like; and (5) how results would be documented as evidence. Assume access to [describe the available tool, e.g., Alteryx and direct SAP data access].
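For a sense of where such a redesign lands, here is a minimal pandas sketch of one full-population test -- reusing the retroactive purchase-order approval condition from the root cause prompt below. Column names are illustrative stand-ins, not real system fields.

    import pandas as pd

    # Hypothetical full-population extract with approval and goods
    # receipt timestamps. Column names are illustrative.
    pos = pd.read_csv(
        "purchase_orders.csv",
        parse_dates=["approved_at", "goods_receipt_at"],
    )

    # Exception criterion: approval recorded after the goods receipt date.
    exceptions = pos[pos["approved_at"] > pos["goods_receipt_at"]]
    rate = len(exceptions) / len(pos)

    # Output dataset: every exception, with the fields a reviewer needs
    # to trace each row back to the source record.
    exceptions[["po_number", "approver_id", "approved_at", "goods_receipt_at"]] \
        .to_csv("retroactive_approvals.csv", index=False)

    print(f"{len(exceptions)} of {len(pos)} POs ({rate:.1%}) approved after goods receipt")

Note the difference from the sampled version: the exception rate is a population fact, not an estimate, and the output file is itself the evidence set.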
Fieldwork
Root Cause Hypothesis Builder
Use after identifying a finding condition to develop a structured root cause analysis before presenting to management.
I have identified the following audit finding condition: [describe what was observed, e.g., 23% of purchase orders in the sample were approved after the goods receipt date, with the approval recorded retroactively]. Before concluding on root cause, I want to consider the full range of explanations. Generate a structured root cause analysis that considers: (1) process design failures -- was the control designed to prevent this?; (2) system configuration issues -- does the system allow or enforce the control?; (3) behavioral factors -- is this a training, incentive, or oversight issue?; (4) data integrity factors -- could the timing reflect a recording error rather than a control bypass? For each category, identify what evidence would confirm or rule it out.
Documentation
Audit Finding Drafter
Use after completing fieldwork when you need to document a finding in the standard four-part format.
Draft an audit finding in the standard condition / criteria / cause / effect format based on the following: [describe what you observed, the standard it should have met, and the evidence you have]. The finding should be appropriate for inclusion in a formal audit report. Use specific, factual language. Avoid conclusions that go beyond the evidence described. The audience is [describe, e.g., process management and the CAE]. Keep the total length under 250 words.
Documentation
Workpaper Summary
Use after completing a test to generate a structured workpaper summary from your notes.
Draft a workpaper summary for the following audit test. Format it with these sections: Objective, Population and Period, Procedure, Results, and Conclusion. Source information: [describe the test you performed, the data you used, the exceptions you found, and your overall conclusion]. The summary should be written so that an independent reviewer could understand the work performed and the basis for the conclusion without asking follow-up questions.
Documentation
Control Narrative Drafter
Use when documenting a business process for the first time or when updating documentation after a process change.
Draft a control narrative for the following business process: [describe the process, including the key steps, the systems involved, and the people or roles responsible at each step]. The narrative should: (1) describe the process flow from initiation to completion; (2) identify the key controls at each significant step; (3) note where automated controls exist versus manual controls; and (4) identify where the primary risks of error or fraud are located in the process. Format it as a professional audit document.
Documentation
Field Notes to Observation
Use after a fieldwork session to convert rough notes into a structured, professional observation record.
Convert the following field notes into a clean, structured audit observation: [paste your raw notes]. The observation should: (1) describe what was observed factually and without editorial language; (2) note any open questions or items requiring follow-up; (3) identify what evidence was collected and what is still needed; and (4) flag any preliminary indicators of control issues without stating conclusions. Keep the tone professional and objective.
Documentation
Technical Finding Translation
Use when you need to translate a technically detailed finding into language appropriate for an audit committee or senior executive audience.
Here is an audit finding written for a technical audience: [paste the finding]. Rewrite it for an audit committee or senior executive audience. The rewritten version should: (1) explain the issue in plain business terms without losing accuracy; (2) state the business risk or exposure clearly -- what could happen as a result of this condition; (3) avoid technical jargon unless essential, and define it if used; and (4) end with a clear statement of what remediation would look like. Target length: 150 words or fewer.
Communication
Management Response Evaluator
Use when reviewing a management response to an audit finding to assess whether it actually addresses the root cause.
Here is an audit finding: [paste the finding]. Here is management's proposed response: [paste the response]. Evaluate whether this response: (1) addresses the root cause identified in the finding or only the symptom; (2) proposes a specific, verifiable remediation action or uses vague language that cannot be tested; (3) includes a realistic and committed completion date; and (4) assigns clear ownership. Identify any gaps in the response and suggest specific language that would make it more defensible and testable.
Communication
Audit Report Section Drafter
Use when drafting the opening or summary section of an audit report, or when you need a consistent narrative across multiple findings.
Draft the [describe the section -- e.g., executive summary or background] section of an audit report for the following engagement: [describe the audit area, the scope, the key findings, and the overall conclusion]. The section should: (1) describe the purpose and scope of the engagement; (2) summarize the key findings at a level appropriate for the described audience; (3) state the overall audit conclusion clearly; and (4) be written in a professional, direct tone without hedging language. Length: [specify, e.g., one page or 300 words].
Communication
Stakeholder Briefing Prep
Use when preparing to present findings to a process owner or business leader who may be defensive or unfamiliar with audit methodology.
I need to brief [describe the stakeholder -- role, level, and likely reaction] on the following audit finding: [describe the finding and evidence]. Help me prepare for this conversation by: (1) drafting the key points I should lead with; (2) anticipating the three most likely objections or pushback responses and suggesting how to address each; (3) identifying what I should NOT say to avoid escalating defensiveness; and (4) recommending how to frame the remediation discussion so it feels collaborative rather than punitive.
Communication
Follow-Up Inquiry Draft
Use when you need to follow up on an open request for information or documentation from an auditee.
Draft a professional follow-up email to [describe the recipient and their role] requesting [describe what is needed and why -- e.g., the original purchase order documentation for five transactions flagged in testing]. Context: [describe how long the request has been outstanding and any prior communication]. The email should: (1) be direct and professional without being accusatory; (2) state clearly what is needed, by when, and why it is necessary; and (3) note the impact on the audit timeline if the request is not fulfilled. Keep it under 150 words.
Learning
Unfamiliar System Explainer
Use when you are assigned to audit a system or platform you have not worked with before and need to build context quickly.
I am an internal auditor preparing to audit [describe the system or platform -- e.g., a warehouse management system, a logistics platform, or a pricing engine]. I have limited familiarity with this system. Provide an overview that covers: (1) what the system does and what business process it supports; (2) the key data it generates and what audit-relevant information it contains; (3) the most common control risks associated with this type of system; (4) the questions I should ask IT or the system owner to understand the configuration; and (5) the data I should request access to at the start of fieldwork.
Learning
Distribution Business Context
Use when you encounter a business process or term specific to electrical distribution that you need to understand before testing it.
I am auditing [describe the process or area -- e.g., special pricing agreement reconciliation or branch inventory adjustment practices] at a large electrical distribution company. Explain: (1) how this process typically works in an electrical distribution business context; (2) what can go wrong -- the most common control failures or risk scenarios; (3) what data would typically be available to test this; and (4) what prior audit findings in this area typically look like. Assume I understand audit methodology but am newer to the distribution industry.

Keeping AI-Assisted Work Defensible

The evidence standard for AI-assisted audit work is identical to the evidence standard for any other audit work. What changes is the set of habits required to meet it.

Verify, don't just review
AI output used in a workpaper must be verified against a primary source -- not just read and accepted. Verification means you checked the specific claim against the underlying data, document, or system record. "It looks right" is not verification.
Note AI assistance simply
One line in the workpaper is sufficient: "Initial draft generated with AI assistance; verified and edited by [name] on [date]." Consistent, brief, and defensible. The evidence trail is what holds the work up, not the disclosure note.
The judgment calls stay yours
AI can draft the finding. It cannot determine whether the condition is material, what the root cause priority is, or how to frame the recommendation given organizational context. Document your judgment separately from the AI-generated draft.
The evidence anchor still applies
AI-assisted findings require the same evidence trail as any other finding. The AI helped write it -- the evidence has to hold it up. No AI output substitutes for documented, traceable evidence tied to a specific transaction, record, or source document.
Confidentiality before convenience
Before pasting any data into an AI tool, ask: does this data identify a specific employee, customer, transaction, or organization? If yes, describe the pattern instead of pasting the record. Your organization's AI use policy governs what can enter external AI systems →
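For example (a hypothetical record): instead of pasting "Invoice 900123, vendor Acme Electrical, approved by J. Smith, $48,000", describe the pattern -- "an invoice above the approver's delegated limit was released without secondary approval." The AI can reason about the second version just as well, and nothing identifying leaves your environment.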
The auditability test
Ask: if an independent reviewer examined this workpaper without knowing AI was involved, would the evidence trail still be complete, traceable, and sufficient to support the conclusion? If the answer is no, the documentation is not ready.

Where AI Assistance Degrades the Work

AI assistance improves audit work in well-defined, documented scenarios. It degrades it in others. Knowing the difference is a practitioner judgment, not a policy question.

When the question requires current data
LLMs have knowledge cutoffs -- they do not know what happened after their training data ends. Questions about current regulatory status, current organizational facts, or current market conditions require a live source. AI can help you frame the question; it cannot answer it reliably.
When you need to build your own hypothesis first
Using AI to interpret an anomaly before forming your own hypothesis creates a blind spot. You may anchor to the AI's framing rather than following the evidence. Build your own hypothesis first, then use AI to pressure-test it or identify what you missed.
When the finding involves organizational sensitivity
AI-drafted language on sensitive findings tends toward the generic. The tone, framing, and calibration of a finding that will create organizational tension require judgment about context, relationships, and consequences that AI does not have. Draft sensitive findings yourself.
When confidential data would enter the prompt
Specific, non-anonymized audit evidence -- personnel records, customer data, transaction details identifying individuals -- should not be entered into consumer AI products. Describe the pattern. Do not paste the records. See Auditing AI for your organization's policy obligations →
When you could not defend the output without citing the AI
If you could not explain how a conclusion was reached without referencing the AI tool, it is not ready for the workpaper. You should be able to defend every conclusion in your work independently of how you got there.
When the answer is genuinely uncertain
AI produces confident-sounding output even when the underlying question is uncertain. If you are not sure what you are looking for, AI will give you something that looks like an answer. That is the most dangerous moment to use it -- when the output substitutes for the clarity you have not yet achieved.

Beyond the Prompt Library

Custom instructions
Most AI tools allow you to set persistent instructions that apply to every conversation. For audit work, a few lines of context -- your role, your industry, your ERP environment, and your evidence standard -- dramatically improve output quality without requiring you to repeat them in every prompt. Set these once and revisit them quarterly as your work context changes.
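One possible starting set, in the same spirit as the prompts above -- adapt every line to your own environment: "I am an internal auditor at a large electrical distribution company running SAP S/4HANA, with direct read access to SAP data and Alteryx for analytics. Write in a professional audit register. Use the condition / criteria / cause / effect structure for findings. Do not state facts I have not provided, and flag any claim that would require verification against a primary source."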
Your personal prompt log
The most valuable prompt library is the one you build yourself. Keep a running note of the prompts you have modified from a starting template to something that works for your specific context. The delta between the starting prompt and your version is where your judgment and domain knowledge live. That is not something a shared library can hold for you.
The 30-minute tool evaluation
When a new AI tool appears in your field of view, do not read the coverage -- evaluate the tool directly. Spend 30 minutes on four questions: what does it do that existing tools do not? What is its failure mode? What data would it touch in audit work? Does your organization have a position on it? Those four questions produce a more useful answer than any review article.
Team sharing
When you use a prompt that works unusually well, bring it back to the team. One useful prompt shared per engagement compounds across the whole function. A team that shares working patterns builds collective capability faster than any training program.

A Sustainable Practice for Staying Current

The AI tooling and governance landscape is moving faster than any credential or curriculum can track. Staying current requires a practice, not a reading list.

One primary source
Follow IIA AI publications directly
Not commentary about them -- the publications themselves. The IIA's practitioner-level guidance on AI use in audit work is the most directly applicable layer for what you do day to day.
One practitioner community
ISACA and IIA chapter communities
ISACA's AI-focused working groups and IIA chapter AI communities surface practical intelligence about what audit functions are encountering in fieldwork -- patterns in findings, tool evaluations, methodology questions. That practitioner layer is more actionable than most published guidance.
Apply the 30-minute rule
Evaluate new tools directly
Every time a new AI tool gets attention in your professional network, spend 30 minutes evaluating it directly rather than reading about it. You will learn more and form a more calibrated opinion than any article provides.
Bring one thing back
Reading that changes behavior
Every time you read something about AI and audit, write down one specific thing you will do differently. Not a general takeaway. A specific change. Reading that does not produce a behavioral change is noise accumulation.
Step 4 of 5 -- Practitioner Path: Auditing AI →

The conceptual depth behind the tools -- how AI systems work, the governance landscape, and what practitioners need to produce credible assurance work