AI Labelling in the Maersk Application Catalogue (MAC)

The AI Labelling feature in the MAC enables users to categorise applications that use or are associated with Artificial Intelligence. This helps ensure regulatory compliance (e.g., with the EU AI Act), enhances transparency, and improves risk management.

This guide outlines how to use the AI labelling feature effectively, including step-by-step instructions, best practices, and real-world use cases.


📌 User Guidelines

Accessing the AI Labelling Feature

  1. Log in to the MAC.
  2. Navigate to the application details page for the desired application.
  3. Click on "Assign Label" and then add your AI label.
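
If you need to label many applications, the same action can in principle be scripted. MAC's API is not documented in this guide, so the endpoint, route, payload fields, and authentication shown below are hypothetical placeholders; treat this as a sketch of the request shape, not a supported integration.

```python
# Hypothetical sketch only: the base URL, route, payload fields, and
# bearer-token auth below are assumptions, not a documented MAC API.
import requests

MAC_BASE_URL = "https://mac.example.maersk.com/api"  # placeholder URL

def assign_ai_label(app_id: str, label: str, classification: str, token: str) -> None:
    """Assign one AI label, with its mandatory classification, to an application."""
    response = requests.post(
        f"{MAC_BASE_URL}/applications/{app_id}/labels",  # assumed route
        json={"label": label, "classification": classification},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()  # surfaces e.g. a duplicate-label error

# Example call, matching the chatbot example later in this guide:
# assign_ai_label("app-1234", "Utilises AI", "Limited Risk", token="<token>")
```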

Applying AI Labels

Two AI-related labels are currently available. You can apply one or both of the following:

  • AI System
  • Utilises AI

Each label must be paired with a classification to reflect its regulatory risk level.

Available Classifications

  • Prohibited – AI systems that pose unacceptable risk (e.g., manipulation, exploitation, social scoring). These are banned under the EU AI Act.
  • High-Risk – AI systems used in critical areas such as infrastructure, education, employment, or law enforcement. Strictly regulated.
  • Limited Risk – AI systems requiring transparency (e.g., chatbots, deepfakes). Lightly regulated.
  • Minimal Risk – AI systems with minimal or no risk (e.g., spam filters, AI in video games). Mostly unregulated.
  • To be assessed – AI systems not yet evaluated. Must be reviewed by the Cyber Security team for risk classification.
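
The pairing rule can be made concrete in code. The sketch below is an illustrative data model, not MAC's internal schema: it simply encodes that the labels and classifications are fixed vocabularies and that a label is never stored without a classification.

```python
# Illustrative data model, not MAC's internal schema: fixed vocabularies
# for labels and classifications, plus a pairing that requires both.
from dataclasses import dataclass
from enum import Enum

class AILabel(Enum):
    AI_SYSTEM = "AI System"
    UTILISES_AI = "Utilises AI"

class Classification(Enum):
    PROHIBITED = "Prohibited"
    HIGH_RISK = "High-Risk"
    LIMITED_RISK = "Limited Risk"
    MINIMAL_RISK = "Minimal Risk"
    TO_BE_ASSESSED = "To be assessed"

@dataclass(frozen=True)
class AILabelAssignment:
    label: AILabel
    classification: Classification  # mandatory: a label is never stored alone

# The chatbot example from the next section, expressed in this model:
chatbot = AILabelAssignment(AILabel.UTILISES_AI, Classification.LIMITED_RISK)
```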

Example

Let's say you have an application that uses a chatbot.
You would label it as:

  • Utilises AI – Limited Risk

✅ Best Practices

โœ”๏ธ Doโ€‹

  • Apply both label and classification accurately.
  • Use the “To be assessed” option when uncertain; the Cyber Security team will evaluate it.
  • Keep labels up to date as features evolve.

โŒ Avoidโ€‹

  • Selecting a label without its corresponding classification.
  • Misclassifying High-Risk systems as Minimal Risk.
  • Ignoring transparency obligations for Limited Risk AI systems.
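
These Do/Avoid rules lend themselves to a simple pre-submission check. The helper below is a self-contained, illustrative sketch using plain strings whose valid values mirror the tables in this guide; it is not part of MAC itself.

```python
# Sketch of a pre-submission check for the Do/Avoid rules above;
# illustrative only, not part of MAC. Values mirror this guide's tables.
VALID_LABELS = {"AI System", "Utilises AI"}
VALID_CLASSIFICATIONS = {"Prohibited", "High-Risk", "Limited Risk",
                         "Minimal Risk", "To be assessed"}

def check_assignment(label: str, classification: str | None) -> list[str]:
    """Return a list of issues; an empty list means the pairing can be saved."""
    issues: list[str] = []
    if label not in VALID_LABELS:
        issues.append(f"Unknown label: {label!r}")
    if classification is None:
        issues.append("Every label must be paired with a classification.")
    elif classification not in VALID_CLASSIFICATIONS:
        issues.append(f"Unknown classification: {classification!r}")
    elif classification == "To be assessed":
        issues.append("Notify the Cyber Security team for risk assessment.")
    return issues

print(check_assignment("Utilises AI", None))
# ['Every label must be paired with a classification.']
```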

📖 Use Cases

๐Ÿ” Use Case 1: Prohibited AI Systemโ€‹

Scenario: An internal tool that ranks employees based on personal behaviour analytics.
Action:

  • Label: AI System
  • Classification: Prohibited

🚨 Use Case 2: High-Risk AI in Hiring

Scenario: An AI model used to screen resumes and shortlist candidates.
Action:

  • Label: Utilises AI
  • Classification: High-Risk

💬 Use Case 3: Limited Risk Chatbot

Scenario: A chatbot on a customer service portal.
Action:

  • Label: Utilises AI
  • Classification: Limited Risk

🎮 Use Case 4: Minimal Risk Gaming AI

Scenario: An application that uses AI in a simulation game.
Action:

  • Label: AI System
  • Classification: Minimal Risk

โ“ Use Case 5: New AI Toolโ€‹

Scenario: A newly introduced tool with AI features, not yet reviewed.
Action:

  • Label: AI System
  • Classification: To be assessed
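
For reference, the five use cases above can also be summarised as plain data, e.g. as input to a bulk-labelling script. The application names below are illustrative placeholders; the label and classification pairings come straight from this guide.

```python
# The five use cases above as (application, label, classification) rows;
# application names are illustrative placeholders, not real MAC entries.
USE_CASES = [
    ("Employee behaviour ranking tool", "AI System",   "Prohibited"),
    ("Resume screening model",          "Utilises AI", "High-Risk"),
    ("Customer service chatbot",        "Utilises AI", "Limited Risk"),
    ("Simulation game AI",              "AI System",   "Minimal Risk"),
    ("New unreviewed AI tool",          "AI System",   "To be assessed"),
]

for app, label, classification in USE_CASES:
    print(f"{app}: {label} – {classification}")
```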

๐Ÿ› ๏ธ Troubleshootingโ€‹

  • Classification dropdown not showing – Ensure the "AI System" or "Utilises AI" label is selected first.
  • Can't decide the correct classification – Use “To be assessed” and notify the Cyber Security team.
  • Duplicate label error – Check if the label already exists for the application.

📌 FAQs

Q1: Can an application have both AI System and Utilises AI labels?
A: Yes, if both are applicable; just confirm that the system has not marked them as mutually exclusive before assigning both.

Q2: Who performs the risk assessment for โ€œTo be assessedโ€?
A: The Cyber Security team is responsible.

Q3: How often should labels be reviewed?
A: During each major update or as part of the quarterly governance cycle.