# AI Labelling in the Maersk Application Catalogue (MAC)
The AI Labelling feature in the MAC enables users to categorize applications that use or are associated with Artificial Intelligence. This helps ensure regulatory compliance (e.g., EU AI Act), enhances transparency, and improves risk management.
This guide outlines how to use the AI labelling feature effectively, including step-by-step instructions, best practices, and real-world use cases.
## User Guidelines
### Accessing the AI Labelling Feature
- Log in to the MAC.
- Navigate to the application details page for the desired application.
- Click "Assign Label" and then add your AI label (a scripted alternative is sketched below).
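If you need to label many applications, the same assignment can be expressed as a script. The sketch below assumes a hypothetical MAC REST endpoint (`/api/applications/{id}/labels`) and bearer-token authentication; the base URL, route, and payload fields are illustrative, not confirmed details of the MAC API.

```python
import requests  # third-party HTTP client (pip install requests)

MAC_BASE_URL = "https://mac.example.maersk.com"  # hypothetical base URL


def assign_ai_label(app_id: str, label: str, classification: str, token: str) -> None:
    """Assign an AI label and classification to an application (assumed endpoint)."""
    response = requests.post(
        f"{MAC_BASE_URL}/api/applications/{app_id}/labels",  # assumed route
        headers={"Authorization": f"Bearer {token}"},
        json={"label": label, "classification": classification},
        timeout=10,
    )
    response.raise_for_status()  # surface any 4xx/5xx error


# Example: label a chatbot application (hypothetical identifiers)
# assign_ai_label("app-1234", "Utilises AI", "Limited Risk", token="<your-token>")
```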
### Applying AI Labels
Currently, two AI-related labels are available. You can apply one or both of the following:
- AI System
- Utilises AI
Each label must be paired with a classification to reflect its regulatory risk level; a minimal sketch of this pairing follows the table below.
### Available Classifications
| Classification | Description |
|---|---|
| Prohibited | AI systems that pose unacceptable risk (e.g., manipulation, exploitation, social scoring). These are banned under the EU AI Act. |
| High-Risk | AI systems used in critical areas such as infrastructure, education, employment, or law enforcement. Strictly regulated. |
| Limited Risk | AI systems requiring transparency (e.g., chatbots, deepfakes). Lightly regulated. |
| Minimal Risk | AI systems with minimal or no risk (e.g., spam filters, AI in video games). Mostly unregulated. |
| To be assessed | AI systems not yet evaluated. Must be reviewed by the Cyber Security team for risk classification. |
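To make the pairing rule concrete, the following minimal Python sketch models the two labels and five classifications as enumerations; the string values mirror the table above, but the types themselves are illustrative and not part of MAC.

```python
from dataclasses import dataclass
from enum import Enum


class AILabel(Enum):
    AI_SYSTEM = "AI System"
    UTILISES_AI = "Utilises AI"


class RiskClassification(Enum):
    PROHIBITED = "Prohibited"
    HIGH_RISK = "High-Risk"
    LIMITED_RISK = "Limited Risk"
    MINIMAL_RISK = "Minimal Risk"
    TO_BE_ASSESSED = "To be assessed"


@dataclass(frozen=True)
class AILabelAssignment:
    """A label is only complete together with its classification."""
    label: AILabel
    classification: RiskClassification
```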
### Example
Let's say you have an application that uses a chatbot. You would label it as:
- Label: Utilises AI
- Classification: Limited Risk
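Expressed with the illustrative types from the sketch above, the chatbot example becomes:

```python
chatbot = AILabelAssignment(
    label=AILabel.UTILISES_AI,
    classification=RiskClassification.LIMITED_RISK,
)
```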
## Best Practices
### Do
- Apply both label and classification accurately.
- Use the "To be assessed" option when uncertain; the Cyber Security team will evaluate it.
- Keep labels up to date as features evolve.
### Avoid
- Selecting a label without its corresponding classification (a validation sketch follows this list).
- Misclassifying high-risk systems as minimal-risk.
- Ignoring transparency obligations for Limited Risk AI systems.
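A small helper can catch the first two mistakes before submission. This sketch continues the illustrative types defined earlier; the reminder for "To be assessed" reflects the review process described in this guide, not a built-in MAC check.

```python
def check_assignment(
    label: AILabel | None,
    classification: RiskClassification | None,
) -> list[str]:
    """Return a list of warnings; an empty list means nothing obvious is wrong."""
    warnings: list[str] = []
    if label is not None and classification is None:
        warnings.append("A label was selected without its classification.")
    if classification is RiskClassification.TO_BE_ASSESSED:
        # Process step from this guide: the Cyber Security team must review it.
        warnings.append("Notify the Cyber Security team for risk assessment.")
    return warnings
```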
## Use Cases
### Use Case 1: Prohibited AI System
Scenario: An internal tool that ranks employees based on personal behavior analytics.
Action:
- Label: AI System
- Classification: Prohibited
### Use Case 2: High-Risk AI in Hiring
Scenario: An AI model used to screen resumes and shortlist candidates.
Action:
- Label: Utilises AI
- Classification: High-Risk
### Use Case 3: Limited Risk Chatbot
Scenario: A chatbot on a customer service portal.
Action:
- Label: Utilises AI
- Classification: Limited Risk
### Use Case 4: Minimal Risk Gaming AI
Scenario: An application that uses AI in a simulation game.
Action:
- Label: AI System
- Classification: Minimal Risk
### Use Case 5: New AI Tool
Scenario: A newly introduced tool with AI features, not yet reviewed.
Action:
- Label: AI System
- Classification: To be assessed
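For reference, the five use cases above can be captured as data using the illustrative types from the earlier sketches; the application names are hypothetical.

```python
use_case_assignments = {
    "employee-ranking-tool": AILabelAssignment(AILabel.AI_SYSTEM, RiskClassification.PROHIBITED),
    "resume-screener": AILabelAssignment(AILabel.UTILISES_AI, RiskClassification.HIGH_RISK),
    "support-chatbot": AILabelAssignment(AILabel.UTILISES_AI, RiskClassification.LIMITED_RISK),
    "simulation-game": AILabelAssignment(AILabel.AI_SYSTEM, RiskClassification.MINIMAL_RISK),
    "new-ai-tool": AILabelAssignment(AILabel.AI_SYSTEM, RiskClassification.TO_BE_ASSESSED),
}
```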
## Troubleshooting
| Issue | Resolution |
|---|---|
| Classification dropdown not showing | Ensure the "AI System" or "Utilises AI" label is selected first. |
| Can't decide the correct classification | Use "To be assessed" and notify the Cyber Security team. |
| Duplicate label error | Check whether the label already exists for the application. |
## FAQs
Q1: Can an application have both the AI System and Utilises AI labels?
A: Yes, if both are applicable. However, ensure that they are not marked as mutually exclusive within the system.
Q2: Who performs the risk assessment for "To be assessed"?
A: The Cyber Security team is responsible.
Q3: How often should labels be reviewed?
A: During each major update or as part of the quarterly governance cycle.