Glossary

AI System Documentation

Structured record of an AI system's purpose, architecture, training data, known limitations and risks, required under the EU AI Act for high-risk systems.

A

AI system documentation is a comprehensive record that describes how your AI system works, what data it uses, and what its limitations are. Under the EU AI Act, providers of high-risk AI systems must maintain technical documentation before placing their system on the market. Even for lower-risk systems, good documentation protects you during audits, investor due diligence and customer procurement processes.

What to document:

  • Purpose and intended use: What problem does your AI solve? Who are the intended users? What decisions does it support or automate?
  • Architecture: Which models do you use (e.g. GPT-4, Claude, open-source)? How are they integrated? What is the data flow from input to output?
  • Training data: What data was used to train or fine-tune? Where did it come from? How was it cleaned and labelled?
  • Known limitations: Where does your system perform poorly? What biases have you identified? What inputs produce unreliable outputs?
  • Risk assessment: What could go wrong? What is the impact on affected people? What mitigations are in place?

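The five areas above can be kept as structured data from day one, which makes the record easy to version-control and export for auditors. A minimal sketch in Python, assuming a simple internal record (all field names and example values are illustrative, not an official EU AI Act schema):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """Illustrative documentation record mirroring the five areas above."""
    purpose: str                      # problem solved and decisions supported
    intended_users: list[str]         # who uses the system
    architecture: str                 # models used and data flow, in brief
    training_data: str                # sources, cleaning, labelling
    known_limitations: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)  # impact + mitigations

# Example entry for a hypothetical support-ticket summariser
record = AISystemRecord(
    purpose="Summarise customer support tickets for triage",
    intended_users=["support agents"],
    architecture="Hosted LLM accessed via an internal API; no fine-tuning",
    training_data="No training performed; prompt-only integration",
    known_limitations=["Unreliable on non-English tickets"],
    risks=["Incorrect summary leads to wrong escalation; human review required"],
)

# Serialise to JSON so the record can be stored, diffed and shared
print(json.dumps(asdict(record), indent=2))
```

Starting from plain structured fields like these also makes it straightforward to move the same content into whatever template or workflow tool you adopt later.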
Start with a simple internal document and iterate. The EU AI Act compliance checker helps you identify what applies to your system. Tidal Control provides templates and workflows to build and maintain this documentation as part of your AI management system.
