Contributors: (ordered alphabetically) Arthit Suriyawongkul (ADAPT Centre, Trinity College Dublin),
Beatriz Esteves (IDLab, IMEC, Ghent University),
Delaram Golpayegani (ADAPT Centre, Trinity College Dublin),
Georg P. Krog (Signatu AS),
Harshvardhan J. Pandit (AI Accountability Lab (AIAL), Trinity College Dublin).
NOTE: The affiliations are informative, do not represent formal endorsements, and may be outdated as this list is generated automatically from existing data.
The EU-AIAct extension extends the [[[DPV]]] to provide concepts such as systems, purposes, risks, roles, documentation, and assessments based on the [[[AIAct]]]. The canonical URL for the EU-AIAct extension is https://w3id.org/dpv/legal/eu/aiact, the namespace for terms is https://w3id.org/dpv/legal/eu/aiact#, the suggested prefix is eu-aiact, and this document along with its source and releases is available at https://github.com/w3c/dpv.
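EXAMPLE: Declaring the suggested prefix and using a concept from this extension; a minimal Turtle sketch in which the ex: namespace is a placeholder for a use-case's own vocabulary.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex: <https://example.com/ns#> .

# Concepts from this extension are used alongside core DPV concepts
ex:CreditScoringTool a eu-aiact:AISystem .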
DPV Specifications: The [[DPV]] is the core specification that is extended by specific extensions. A [[PRIMER]] introduces the concepts and modelling of DPV specifications, and [[GUIDES]] describe application of DPV for specific applications and use-cases. The Search Index page provides a searchable hierarchy of all concepts. The Data Privacy Vocabularies and Controls Community Group (DPVCG) develops and manages these specifications through GitHub. For meetings, see the DPVCG calendar.
Contributing: The DPVCG welcomes participation to improve the DPV and associated resources, including expansion or refinement of concepts, requesting information and applications, and addressing open issues. See the contributing guide for further information.
Introduction
The [[[AIAct]]] is a landmark regulation which regulates the use of Artificial Intelligence (AI) technologies. To support its implementation, the [[EU-AIAct]] extension provides relevant concepts which can be used to describe the facts of how AI is being developed and used within use-cases, and to produce the required documentation necessary for establishing or verifying compliance with the [[AIAct]]. Concepts in this extension extend relevant concepts defined in [[[DPV]]], as well as those defined in [[[AI]]] and [[[TECH]]] extensions.
At the moment, the [[EU-AIAct]] extension provides the following concepts and taxonomies as defined or interpreted from the [[AIAct]]:
AI systems, such as general purpose AI systems or those using biometrics
Capabilities, such as to produce deep fakes or perform biometric identification
Risk categorisations, such as defining serious incidents or systemic risks at EU level
The concept [=AISystem=] represents AI systems as defined in the AI Act.
eu-aiact:AISystem: A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
go to full definition
eu-aiact:BiometricCategorisationSystem: An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons
go to full definition
eu-aiact:EmotionRecognitionSystem: An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
go to full definition
eu-aiact:GPAISystem: An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems
go to full definition
eu-aiact:HighRiskAISystem: High-risk AI system according to the AI Act
go to full definition
eu-aiact:ProhibitedAISystem: Prohibited AI system according to Article 5, AI Act
go to full definition
eu-aiact:RemoteBiometricIdentificationSystem: An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.
go to full definition
eu-aiact:PostRemoteBiometricIdentificationSystem: A remote biometric identification system other than a real-time remote biometric identification system
go to full definition
eu-aiact:RealtimeRemoteBiometricIdentificationSystem: A remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention
go to full definition
eu-aiact:GPAIModel: An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market
go to full definition
eu-aiact:SafetyComponent: A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property
go to full definition
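EXAMPLE: Typing hypothetical systems with the concepts above; a non-normative sketch where all ex: names are placeholders.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# A general purpose AI system and a real-time remote biometric
# identification system, typed using concepts from this extension
ex:ChatAssistant a eu-aiact:GPAISystem .
ex:EntryGate a eu-aiact:RealtimeRemoteBiometricIdentificationSystem .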
Purposes & Capabilities
These concepts represent the purposes and capabilities defined within the AI Act. The use of 'capability' here refers to the capability of the technology to produce, perform, or achieve something, which is expressed in the [[TECH]] extension as tech:Capability and extended in the [[AI]] extension as ai:Capability.
eu-aiact:BiometricCategorisation: Capability where assigning natural persons to specific categories happens on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons
go to full definition
eu-aiact:BiometricIdentification: Capability where the automated recognition of physical, physiological, behavioural, and psychological human features is used for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database
go to full definition
eu-aiact:BiometricVerification: The automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data
go to full definition
eu-aiact:DeepFakeGeneration: Capability to generate or manipulate image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful
go to full definition
eu-aiact:EmotionRecognition: Capability of a technology to identify or infer emotions or intentions of natural persons (on the basis of their biometric data)
go to full definition
eu-aiact:HighImpactCapabilityGPAIModel: ‘High-impact Capabilities’ in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models
go to full definition
eu-aiact:SubliminalCapability: Capability that operates beyond (below or above) the threshold of conscious awareness
go to full definition
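EXAMPLE: Associating a capability with a technology; a sketch that assumes tech:hasCapability from the [[TECH]] extension as the linking property (verify the normative property name in that extension).

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix tech: <https://w3id.org/dpv/tech#> .
@prefix ex: <https://example.com/ns#> .

# tech:hasCapability is assumed here as the linking property;
# ex:VideoEditor is a placeholder name
ex:VideoEditor tech:hasCapability eu-aiact:DeepFakeGeneration .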
Risk
The AI Act has a specific definition of risk as "combination of the probability of an occurrence of harm and the severity of that harm", which is represented using the concept [=Risk=] within this extension. This definition is a more specific form of dpv:Risk, and hence the existing relations can be used to describe its severity and likelihood.
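EXAMPLE: Describing an AI Act risk with its severity and likelihood through existing DPV relations; a sketch in which the severity and likelihood values are illustrative (verify the normative taxonomy in DPV and its RISK extension).

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex: <https://example.com/ns#> .

ex:MisidentificationRisk a eu-aiact:Risk ;
    dpv:hasSeverity dpv:HighSeverity ;      # illustrative value
    dpv:hasLikelihood dpv:LowLikelihood .   # illustrative value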
eu-aiact:DeployerHumanOversightMeasure: Human oversight measures identified by the provider before placing the AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.
go to full definition
eu-aiact:ProviderHumanOversightMeasure: Human oversight measures built into the AI system by the provider before it is placed on the market or put into service.
go to full definition
eu-aiact:QualityManagementSystem: A quality management system in place in relation to an AI system that ensures compliance with the AI Act
go to full definition
eu-aiact:ReasonablyForeseeableMisuse: The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems
go to full definition
eu-aiact:Risk: The combination of the probability of an occurrence of harm and the severity of that harm
go to full definition
eu-aiact:SystemicRisk: A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain
go to full definition
eu-aiact:RiskManagementSystem: A risk management system is a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating
go to full definition
eu-aiact:SeriousIncident: An incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person’s health;
(b) a serious and irreversible disruption of the management or operation of critical infrastructure;
(c) the infringement of obligations under Union law intended to protect fundamental rights;
(d) serious harm to property or the environment;
go to full definition
Risk Levels
The interpretation of the AI Act uses terminology referring to Risk Levels which determine the level of obligations under the Act. To represent these, the concept [=RiskLevel=] is defined and instantiated to represent the commonly used concepts for representing prohibited, high-risk, transparency required, and minimal risk levels. The taxonomy also contains two additional concepts to represent not-prohibited and not-high-risk to support documenting outcomes of assessments that test prohibited and high-risk categorisations respectively. To associate the EU AI Act specific risk level, the property [=hasRiskLevel=] has been extended from dpv:hasRiskLevel.
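EXAMPLE: Associating an AI Act specific risk level with a system using [=hasRiskLevel=]; a sketch with placeholder ex: names.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# eu-aiact:hasRiskLevel extends dpv:hasRiskLevel
ex:ResumeScreeningTool a eu-aiact:AISystem ;
    eu-aiact:hasRiskLevel eu-aiact:RiskLevelHigh .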
eu-aiact:RiskLevelPermitted: Risk Level representing that "AI practices" are permitted, i.e. not prohibited under the AI Act
go to full definition
eu-aiact:RiskLevelHigh: Risk Level categorising an AI system as "high-risk" as defined in the AI Act Article 6
go to full definition
eu-aiact:RiskLevelHigh-A6-1: Risk Level categorising an AI system as "high-risk" as defined in the AI Act Article 6(1), which specifies the sectorial laws mentioned in Annex I
go to full definition
eu-aiact:RiskLevelHigh-A6-2: Risk Level categorising an AI system as "high-risk" as defined in the AI Act Article 6(2), which specifies applications of AI in Annex III
go to full definition
eu-aiact:RiskLevelNotHigh: Risk Level categorising an AI system as not being "high-risk" as defined in the AI Act
go to full definition
eu-aiact:RiskLevelMinimal: Risk Level categorising an AI system as not being prohibited or high-risk or transparency required as defined in the AI Act
go to full definition
eu-aiact:RiskLevelTransparencyRequired: Risk Level categorising an AI system as not high-risk but where transparency is required as defined in the AI Act
go to full definition
eu-aiact:RiskLevelProhibited: Risk Level prohibiting "AI practices" as defined in the AI Act Article 5
go to full definition
Data
The AI Act defines different categories of data as used to develop and deploy AI systems, such as [=TrainingData=] and [=TestingData=]. It also uses [=BiometricData=] which is a special category of personal data under [[GDPR]].
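EXAMPLE: Typing datasets with the data categories below; a sketch with placeholder ex: names, where marking biometric data as special category personal data depends on the use-case and [[GDPR]].

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex: <https://example.com/ns#> .

ex:TrainingCorpus a eu-aiact:TrainingData .
ex:HoldoutSet a eu-aiact:TestingData .
# Biometric data is a special category of personal data under GDPR
ex:FaceImages a eu-aiact:BiometricData, dpv:SpecialCategoryPersonalData .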
eu-aiact:BiometricData: Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data
go to full definition
eu-aiact:TestingData: Data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service
go to full definition
eu-aiact:TrainingData: Data used for training an AI system through fitting its learnable parameters
go to full definition
eu-aiact:ValidationData: Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting
go to full definition
Entity Roles
AI Roles describe the categorisation of entities based on the role they take in developing, providing, deploying, or using an AI system as defined within the [[AIAct]].
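EXAMPLE: Declaring the roles that entities take for an AI system; a sketch with placeholder ex: names.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# The same entity can hold several roles, e.g. a provider that also
# deploys its own system
ex:ACME a eu-aiact:AIProvider .
ex:CityCouncil a eu-aiact:AIDeployer .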
eu-aiact:AIOffice: The Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in the AI Act to the AI Office shall be construed as references to the Commission
go to full definition
eu-aiact:AIOperator: A provider, product manufacturer, deployer, authorised representative, importer or distributor
go to full definition
eu-aiact:AIDeployer: A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
go to full definition
eu-aiact:AIDistributor: A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market
go to full definition
eu-aiact:AIImporter: A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country
go to full definition
eu-aiact:AIProductManufacturer: Entity that manufactures an AI Product
go to full definition
eu-aiact:AIProvider: A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge
go to full definition
eu-aiact:DownstreamAIProvider: A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
go to full definition
eu-aiact:AuthorisedRepresentative: A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by the AI Act
go to full definition
eu-aiact:ConformityAssessmentBody: A body that performs third-party conformity assessment activities, including testing, certification and inspection
go to full definition
eu-aiact:NotifiedBody: A conformity assessment body notified in accordance with the AI Act and other relevant Union harmonisation legislation
go to full definition
eu-aiact:HumanOverseer: The natural person to whom human oversight is assigned
go to full definition
eu-aiact:NationalCompetentAuthority: A notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in the AI Act shall be construed as references to the European Data Protection Supervisor
go to full definition
eu-aiact:MarketSurveillanceAuthority: The national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020
go to full definition
eu-aiact:NotifyingAuthority: The national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring
go to full definition
Documentation
The documentation associated with AI, AI systems, and other processes defined within the AI Act. These are described using tech:Documentation from the [[TECH]] extension.
eu-aiact:CommonSpecification: A set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under the AI Act
go to full definition
eu-aiact:EUDeclarationOfConformity: Document providing the EU declaration of conformity, as required by the AI Act, Article 47
go to full definition
eu-aiact:InstructionForUse: The information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use
go to full definition
eu-aiact:PostMarketMonitoringPlan: Description of the post-market monitoring plan, as required by the AI Act
go to full definition
eu-aiact:PostMarketMonitoringSystemDocumentation: Documentation of the post-market monitoring system in place, as required by the AI Act
go to full definition
eu-aiact:RealWorldTestingPlan: A document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions.
go to full definition
eu-aiact:SandboxPlan: A document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox
go to full definition
eu-aiact:SystemArchitectureDescription: Documentation specifying the software architecture
go to full definition
eu-aiact:TechnicalDocumentation: Technical documentation as specified in Annex IV of the AI Act
go to full definition
eu-aiact:TestReport: A document that includes a summary of test activities and their results
go to full definition
Statuses
Different statuses are described or implied within the [[AIAct]], such as [=MarketAvailabilityStatus=] regarding whether the AI system is available on the market, and [=ServiceSupplyStatus=] regarding whether the AI system has been supplied. These statuses reflect the requirement to describe the state of the AI system and its use, which has implications in terms of requirements and obligations.
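EXAMPLE: Recording market and service statuses for a system; a sketch that assumes dpv:hasStatus as the relation linking a system to its statuses (verify against [[DPV]]).

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex: <https://example.com/ns#> .

# dpv:hasStatus is assumed as the linking relation
ex:ChatAssistant a eu-aiact:AISystem ;
    dpv:hasStatus eu-aiact:PlacedOnEUMarket, eu-aiact:PutIntoServiceInEU .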
eu-aiact:ChangeCategory: A categorisation of the change associated with the AI system
go to full definition
eu-aiact:NonpredeterminedChange: Change associated with the AI system that has not been pre-determined
go to full definition
eu-aiact:PredeterminedChange: Change associated with the AI system that has been pre-determined
go to full definition
eu-aiact:ChangeDescription: A description of the change associated with the AI system
go to full definition
eu-aiact:MarketAvailabilityStatus: Status associated with whether the AI system or GPAI Model is available for distribution or use on the market in the course of a commercial activity
go to full definition
eu-aiact:AvailableOnMarket: Status associated with supply of an AI system or a general purpose AI model for distribution or use on the market in the course of a commercial activity, whether in return for payment or free of charge
go to full definition
eu-aiact:AvailableOnEUMarket: Status associated with supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge
go to full definition
eu-aiact:AvailableOnNonEUMarket: Status associated with supply of an AI system or a general purpose AI model for distribution or use on any market except the Union market in the course of a commercial activity, whether in return for payment or free of charge
go to full definition
eu-aiact:NotAvailableOnMarket: Status indicating the AI system or GPAI model is not available on any market, whether in the EU or elsewhere
go to full definition
eu-aiact:MarketPlacementStatus: Status associated with whether the AI system or GPAI Model has been 'first' made available on the market
go to full definition
eu-aiact:PlacedOnMarket: Status indicating the first making available of an AI system or a general purpose AI model on the market
go to full definition
eu-aiact:PlacedOnEUMarket: Status indicating the first making available of an AI system or a general purpose AI model on the Union market
go to full definition
eu-aiact:ServiceSupplyStatus: Status associated with whether the AI system has been supplied for use or put into service or been recalled or withdrawn
go to full definition
eu-aiact:NotPutIntoService: Status associated with an AI system that has not been put into service
go to full definition
eu-aiact:PutIntoService: Status associated with the supply of an AI system for first use directly to the deployer or for own use for its intended purpose
go to full definition
eu-aiact:PutIntoServiceInEU: Status associated with the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose
go to full definition
eu-aiact:RecalledFromService: Status associated with an AI system that is returned to the provider or taken out of service or use
go to full definition
eu-aiact:WithdrawnFromService: Status associated with an AI system that is prevented from being made available in the supply chain
go to full definition
Assessments
Compliance with requirements of the [[AIAct]], e.g. affixing [=CEMarking=] or implementing a [=PostMarketMonitoringSystem=], requires various types of assessments. These are described using [[DPV]] concepts such as dpv:CertificationSeal and dpv:ImpactAssessment.
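EXAMPLE: Linking an assessment to a system; a sketch that assumes assessments can be attached via dpv:hasOrganisationalMeasure (verify the appropriate relation in [[DPV]]), with placeholder ex: names.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex: <https://example.com/ns#> .

# A fundamental rights impact assessment recorded for a system
ex:ResumeScreeningTool dpv:hasOrganisationalMeasure ex:FRIA-2025 .
ex:FRIA-2025 a eu-aiact:FRIA .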
eu-aiact:AIRegulatorySandbox: A controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision
go to full definition
eu-aiact:AISystemPerformance: The ability of an AI system to achieve its intended purpose
go to full definition
eu-aiact:CEMarking: A marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing
go to full definition
eu-aiact:ConformityAssessment: The process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled
go to full definition
eu-aiact:FRIA: An assessment undertaken to evaluate how the use of an AI system might impact fundamental rights
go to full definition
eu-aiact:HighRiskAIAssessment: An assessment undertaken to determine whether the AI system is classified as high-risk under the AI Act
go to full definition
eu-aiact:InformedConsent: Informed consent means a (real world test) subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate
go to full definition
eu-aiact:PostMarketMonitoringSystem: All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions
go to full definition
eu-aiact:RealWorldTestSubject: A natural person who participates in testing in real world conditions
go to full definition
eu-aiact:TestingRealWorldConditions: The temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of the AI Act and it does not qualify as placing the AI system on the market or putting it into service within the meaning of the AI Act, provided that all the conditions laid down in Article 57 or 60 are fulfilled
go to full definition
Compliance
The concepts in this section reflect the status of processing operations being in compliance with the AI Act, by extending DPV's ComplianceStatus concept for the AI Act. They do not define the requirements for compliance itself. To indicate these statuses, the relation dpv:hasLawfulness can be used.
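EXAMPLE: Indicating the compliance status of a process using dpv:hasLawfulness, as described above; a sketch with placeholder ex: names.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix dpv: <https://w3id.org/dpv#> .
@prefix ex: <https://example.com/ns#> .

ex:HiringProcess a dpv:Process ;
    dpv:hasLawfulness eu-aiact:AIActCompliant .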
eu-aiact:AIActLawfulness: Status or state associated with being lawful or legally compliant regarding AI Act
go to full definition
eu-aiact:AIActComplianceUnknown: State where lawfulness or compliance with AI Act is unknown
go to full definition
eu-aiact:AIActCompliant: State of being lawful or legally compliant for AI Act
go to full definition
eu-aiact:AIActNonCompliant: State of being unlawful or legally non-compliant for AI Act
go to full definition
Sectors
The [[AIAct]] implicitly refers to various sectors, such as when stating the high-risk uses of AI systems in Annex III. To enable referring to such implied sectors, this extension provides an ad-hoc taxonomy. This taxonomy may change in the future, as sectors are also an important concept in other laws, such as [[EU-NIS2]].
eu-aiact:BorderControlSector: Sector related to border control management
go to full definition
eu-aiact:CriticalInfrastructureSector: Sector related to provision and maintenance of critical services
go to full definition
eu-aiact:DemocraticProcessSector: Sector related to administration of democratic processes
go to full definition
eu-aiact:EducationSector: Sector related to education and vocational training at any level and for any profession
go to full definition
eu-aiact:EmploymentSector: Sector related to employment including workforce management and self-employment
go to full definition
eu-aiact:JusticeSector: Sector related to administration of justice
go to full definition
eu-aiact:LawEnforcementSector: Sector related to law enforcement
go to full definition
eu-aiact:MigrationSector: Sector related to migration management
go to full definition
eu-aiact:PrivateSector: Sector comprising entities that are privately owned
go to full definition
eu-aiact:PublicSector: Sector comprising entities that are publicly owned, e.g. by the government
go to full definition
Misc. Concepts
These concepts currently do not fit within the other stated categories, and are pooled together under a 'misc.' label while the vocabulary is being further developed.
eu-aiact:AILiteracy: Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause
go to full definition
eu-aiact:IntendedPurpose: The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation
go to full definition
eu-aiact:PubliclyAccessibleSpace: Any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions
go to full definition
eu-aiact:SubstantialModification: A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed
go to full definition
High-Risk & Prohibited AI Systems
High-Risk AI Systems
The AI Act describes high-risk AI systems in Article 6, with two categories: Article 6(1) which refers to Annex I containing sectorial laws, and Article 6(2) referring to the specific AI applications in Annex III. The risk levels section earlier in the document described the corresponding risk levels that reflect this structure. To represent [=AISystem=] that are thus categorised as high-risk, [=HighRiskAISystem=] is provided. To specifically refer to AI Systems that are high-risk due to sectorial laws Annex I, the concept [=HighRiskAISystem-A6-1=] is defined. Similarly, to represent AI systems that are high-risk due to applications in Annex III, the concept [=HighRiskAISystem-A6-2=] is defined.
Annex I defines several high-risk systems. These are represented as instances of [=HighRiskAISystem=] and are more specific forms of [=HighRiskAISystem-A6-1=] (i.e. using `skos:broader`). Similarly, Annex III contains several applications, which are represented as more specific forms of [=HighRiskAISystem-A6-2=]. Adopters can declare that their system is a high-risk implementation of one of these specific clauses by using the appropriate SKOS relations, as shown below.
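EXAMPLE: Declaring that a system is a high-risk implementation of a specific Annex III clause; a sketch with placeholder ex: names.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex: <https://example.com/ns#> .

# Recruitment screening falls under Annex III, point 4(a)
ex:ResumeScreeningTool a eu-aiact:HighRiskAISystem ;
    skos:broader eu-aiact:HighRiskAISystem-AnnexIII-4-a .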
In addition to representing each clause of the specific annexes as an instance of [=HighRiskAISystem=], future work involves modelling the specifics of the criteria to assist in detecting when an AI system is high-risk. For this, each clause, for example in Annex III, is expressed as an [=AISystem=] with additional metadata about its purpose, data involved, and so on. This represents a 'pattern' that can be checked, for example using [[[SHACL]]], to identify which AI systems are high-risk and under which clause in Annex III; a hypothetical sketch of such a check follows. After the example, a list of relevant concepts identified for modelling each high-risk AI system defined in Annex III is provided.
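EXAMPLE: A hypothetical sketch of such a pattern check using SHACL-SPARQL; the shape and the purpose concept ex:RecruitmentOrSelection are illustrative placeholders rather than normative terms, and a reported result here signals that the high-risk pattern matched.

@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# Flags AI systems whose declared purpose matches the recruitment
# pattern of Annex III, point 4(a); full IRIs are used in the query
# to avoid SHACL prefix declarations
ex:AnnexIII-4a-PatternShape a sh:NodeShape ;
    sh:targetClass eu-aiact:AISystem ;
    sh:sparql [
        a sh:SPARQLConstraint ;
        sh:message "System matches the Annex III point 4(a) high-risk pattern" ;
        sh:select """
            SELECT $this WHERE {
                $this <https://w3id.org/dpv#hasPurpose> <https://example.com/ns#RecruitmentOrSelection> .
            }
        """ ;
    ] .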
eu-aiact:HighRiskAISystem-A6-1: High-risk AI system according to Article 6(1) of the AI Act which refers to sectorial laws in Annex I
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-1: High-risk AI system according to Annex I, point 1: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-10: High-risk AI system according to Annex I, point 10: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-11: High-risk AI system according to Annex I, point 11: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-12: High-risk AI system according to Annex I, point 12: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-13: High-risk AI system according to Annex I, point 13: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-14: High-risk AI system according to Annex I, point 14: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-15: High-risk AI system according to Annex I, point 15: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-16: High-risk AI system according to Annex I, point 16: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-17: High-risk AI system according to Annex I, point 17: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-18: High-risk AI system according to Annex I, point 18: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-19: High-risk AI system according to Annex I, point 19: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-2: High-risk AI system according to Annex I, point 2: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-20: High-risk AI system according to Annex I, point 20: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in Article 2(1), points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned.
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-3: High-risk AI system according to Annex I, point 3: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-4: High-risk AI system according to Annex I, point 4: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-5: High-risk AI system according to Annex I, point 5: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-6: High-risk AI system according to Annex I, point 6: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-7: High-risk AI system according to Annex I, point 7: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-8: High-risk AI system according to Annex I, point 8: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1)
go to full definition
eu-aiact:HighRiskAISystem-AnnexI-9: High-risk AI system according to Annex I, point 9: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51)
go to full definition
eu-aiact:HighRiskAISystem-A6-2: High-risk AI system according to Article 6(2) of the AI Act which refers to specific applications in Annex III
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-1-a: High-risk AI system according to Annex III, Point 1a: Biometrics, remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-1-b: High-risk AI system according to Annex III, Point 1b: Biometrics, AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-1-c: High-risk AI system according to Annex III, Point 1c: Biometrics, AI systems intended to be used for emotion recognition.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-2: High-risk AI system according to Annex III, Point 2: Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-3-a: High-risk AI system according to Annex III, Point 3a: AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-3-b: High-risk AI system according to Annex III, Point 3b: Education and vocational training: AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-3-c: High-risk AI system according to Annex III, Point 3c: Education and vocational training: AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-3-d: High-risk AI system according to Annex III, Point 3d: Education and vocational training: AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-4-a: High-risk AI system according to Annex III, Point 4a: Employment, workers management and access to self-employment: AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-4-b: High-risk AI system according to Annex III, Point 4b: Employment, workers management and access to self-employment: AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-5-a: High-risk AI system according to Annex III, Point 5a: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-5-b: High-risk AI system according to Annex III, Point 5b: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-5-c: High-risk AI system according to Annex III, Point 5c: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-5-d: High-risk AI system according to Annex III, Point 5d: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-6-a: High-risk AI system according to Annex III, Point 6a: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-6-b: High-risk AI system according to Annex III, Point 6b: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-6-c: High-risk AI system according to Annex III, Point 6c: AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-6-d: High-risk AI system according to Annex III, Point 6d: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-6-e: High-risk AI system according to Annex III, Point 6e: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-7-a: High-risk AI system according to Annex III, Point 7a: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-7-b: High-risk AI system according to Annex III, Point 7b: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-7-c: High-risk AI system according to Annex III, Point 7c: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-7-d: High-risk AI system according to Annex III, Point 7d: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-8-a: High-risk AI system according to Annex III, Point 8a: Administration of justice and democratic processes: AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution.
go to full definition
eu-aiact:HighRiskAISystem-AnnexIII-8-b: High-risk AI system according to Annex III, Point 8b: Administration of justice and democratic processes: AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.
go to full definition
Prohibited AI Systems
The concept [=ProhibitedAISystem=] represents [=AISystem=] that are prohibited under the AI Act's Article 5. Instances of these concepts are provided for each of the clauses in Article 5, with the relation [=hasRiskLevel=] [=RiskLevelProhibited=] to also indicate their risk level as prohibited.
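EXAMPLE: Documenting the outcome of a prohibition assessment; a sketch with placeholder ex: names.

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex: <https://example.com/ns#> .

# The system matches Article 5(1f) (emotion inference in the
# workplace) and therefore carries the prohibited risk level
ex:WorkplaceEmotionMonitor a eu-aiact:ProhibitedAISystem ;
    skos:broader eu-aiact:ProhibitedAISystem-A5-1-f ;
    eu-aiact:hasRiskLevel eu-aiact:RiskLevelProhibited .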
eu-aiact:ProhibitedAISystem-A5-1-a: Prohibited AI system according to Article 5(1a) of the AI Act, indicating an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-b: Prohibited AI system according to Article 5(1b) of the AI Act, indicating an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-c: Prohibited AI system according to Article 5(1c) of the AI Act, indicating AI systems that evaluate or classify natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity.
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-d: Prohibited AI system according to Article 5(1d) of the AI Act, indicating AI system used for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-e: Prohibited AI system according to Article 5(1e) of the AI Act, indicating AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-f: Prohibited AI system according to Article 5(1f) of the AI Act, indicating AI systems that infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-g: Prohibited AI system according to Article 5(1g) of the AI Act, indicating biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement.
go to full definition
eu-aiact:ProhibitedAISystem-A5-1-h: Prohibited AI system according to Article 5(1h) of the AI Act, indicating the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
go to full definition
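As a minimal sketch of how these prohibited categories can be applied to a concrete use-case (the ex: namespace and system name below are illustrative and not part of this extension), a system that scrapes facial images to build a recognition database could be recorded as falling under Article 5(1e):

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# Hypothetical system declared as prohibited under Article 5(1e)
ex:FaceDatabaseBuilder a eu-aiact:ProhibitedAISystem-A5-1-e .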
A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country
Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause
Usage Note
AI Literacy should take into account the subject's level of technical knowledge, experience, education and training; these can be represented through the corresponding concepts in the PD extension.
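As an illustrative sketch of this usage note (the ex: terms are hypothetical placeholders, and the exact PD concepts should be verified against the PD extension), an AI literacy record could reference personal data categories such as a subject's profession:

@prefix pd: <https://w3id.org/dpv/pd#> .
@prefix ex: <https://example.com/ns#> .

# Hypothetical literacy programme; pd:Profession is assumed to be a
# PD extension concept, and ex:considersFactor is an illustrative property
ex:DeployerLiteracyProgramme ex:considersFactor pd:Profession .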
The Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in the Commission Decision of 24 January 2024; references in the AI Act to the AI Office shall be construed as references to the Commission.
A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge
A controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision
A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Usage Note
All references to the AI Act refer to the provisional agreement published in February 2024.
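For example (with ex: as an illustrative namespace), a use-case can declare a system under this definition and refine it with one of the more specific system types defined in this extension:

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# Hypothetical chatbot declared as an AI system and, more specifically,
# as a general-purpose AI system
ex:ChatAssistant a eu-aiact:AISystem, eu-aiact:GPAISystem .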
A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by the AI Act
Status associated with supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge
Status associated with supply of an AI system or a general purpose AI model for distribution or use on the market in the course of a commercial activity, whether in return for payment or free of charge
Status associated with supply of an AI system or a general purpose AI model for distribution or use on any market except the Union market in the course of a commercial activity, whether in return for payment or free of charge
Capability where assigning natural persons to specific categories happens on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons
An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons
Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data
Capability where the automated recognition of physical, physiological, behavioural, and psychological human features is used for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database
The automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data
A marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing
A set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under the AI Act
Capability to generate or manipulate image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.
Human oversight measures identified by the provider before placing the AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.
A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market
An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems
‘High-impact capabilities’ in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models
High-risk AI system according to Annex I, point 1: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24)
High-risk AI system according to Annex I, point 10: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99)
High-risk AI system according to Annex I, point 11: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1)
High-risk AI system according to Annex I, point 12: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176)
High-risk AI system according to Annex I, point 13: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72)
High-risk AI system according to Annex I, point 14: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52)
High-risk AI system according to Annex I, point 15: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1)
High-risk AI system according to Annex I, point 16: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146)
High-risk AI system according to Annex I, point 17: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44)
High-risk AI system according to Annex I, point 18: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1)
High-risk AI system according to Annex I, point 19: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1)
High-risk AI system according to Annex I, point 2: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1)
High-risk AI system according to Annex I, point 20: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in Article 2(1), points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned.
High-risk AI system according to Annex I, point 3: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);
High-risk AI system according to Annex I, point 4: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251)
High-risk AI system according to Annex I, point 5: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);
High-risk AI system according to Annex I, point 6: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62)
High-risk AI system according to Annex I, point 7: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164)
High-risk AI system according to Annex I, point 8: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1)
High-risk AI system according to Annex I, point 9: AI system that is intended to be used as a safety component of a product, or the AI system is itself a product, covered by Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51)
High-risk AI system according to Annex III, Point 1a: Biometrics: remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be.
High-risk AI system according to Annex III, Point 1b: Biometrics: AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics.
High-risk AI system according to Annex III, Point 2: Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity
High-risk AI system according to Annex III, Point 3a: Education and vocational training: AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels
High-risk AI system according to Annex III, Point 3b: Education and vocational training: AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels
High-risk AI system according to Annex III, Point 3c: Education and vocational training: AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels
High-risk AI system according to Annex III, Point 3d: Education and vocational training: AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels
High-risk AI system according to Annex III, Point 4a: Employment, workers management and access to self-employment: AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates
High-risk AI system according to Annex III, Point 4b: Employment, workers management and access to self-employment: AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships
High-risk AI system according to Annex III, Point 5a: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services.
High-risk AI system according to Annex III, Point 5b: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.
High-risk AI system according to Annex III, Point 5c: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
High-risk AI system according to Annex III, Point 5d: Access to and enjoyment of essential private services and essential public services and benefits: AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
High-risk AI system according to Annex III, Point 6a: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences.
High-risk AI system according to Annex III, Point 6b: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools.
High-risk AI system according to Annex III, Point 6c: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences.
High-risk AI system according to Annex III, Point 6d: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups.
High-risk AI system according to Annex III, Point 6e: Law enforcement, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
High-risk AI system according to Annex III, Point 7a: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools.
High-risk AI system according to Annex III, Point 7b: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State.
High-risk AI system according to Annex III, Point 7c: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence.
High-risk AI system according to Annex III, Point 7d: Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law: AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.
High-risk AI system according to Annex III, Point 8a: Administration of justice and democratic processes: AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution.
High-risk AI system according to Annex III, Point 8b: Administration of justice and democratic processes: AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.
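As a hedged sketch of recording one of these Annex III categorisations (the concept IRI below follows an assumed naming pattern and must be verified against the vocabulary; ex: is an illustrative namespace), a credit-scoring system could be declared as high-risk under Point 5b:

@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# Hypothetical credit-scoring system; the eu-aiact IRI is an assumed
# naming pattern, so check the vocabulary for the exact term
ex:CreditScorer a eu-aiact:HighRiskAISystem-AnnexIII-5b .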
Informed consent means a (real world test) subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate
The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation
A notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in the AI Act shall be construed as references to the European Data Protection Supervisor
Change associated with the AI system that has not been pre-determined
Usage Note
What is considered "pre-determined" is not defined for this concept or within the AI Act; therefore, if there are specific interpretations of "pre-determined", such as within a jurisdiction, sector, or for a particular technology, then this concept should be extended accordingly
The national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring
All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions
Change associated with the AI system that has been pre-determined
Usage Note
What is considered "pre-determined" is not defined for this concept or within the AI Act; therefore, if there are specific interpretations of "pre-determined", such as within a jurisdiction, sector, or for a particular technology, then this concept should be extended accordingly, as sketched below
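Following this usage note, a jurisdiction- or sector-specific interpretation of "pre-determined" can be modelled by subclassing the relevant concept. In the sketch below, eu-aiact:PreDeterminedChange is an assumed IRI for the concept above and ex: is an illustrative namespace; both should be checked against the vocabulary:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/ns#> .

# Sector-specific refinement of the pre-determined change concept;
# eu-aiact:PreDeterminedChange is an assumed IRI here
ex:MedicalDevicePreDeterminedChange rdfs:subClassOf eu-aiact:PreDeterminedChange .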
Prohibited AI system according to Article 5(1a) of the AI Act, indicating an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.
Prohibited AI system according to Article 5(1b) of the AI Act, indicating an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.
Prohibited AI system according to Article 5(1c) of the AI Act, indicating AI systems that evaluate or classify natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity.
Prohibited AI system according to Article 5(1d) of the AI Act, indicating AI systems used for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.
Prohibited AI system according to Article 5(1e) of the AI Act, indicating AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
Prohibited AI system according to Article 5(1f) of the AI Act, indicating AI systems that infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.
Prohibited AI system / risk level according to Article 5(1g) of the AI Act, indicating biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorising of biometric data in the area of law enforcement.
Prohibited AI system according to Article 5(1h) of the AI Act, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
Any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions
A remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention
A document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions.
A natural person who participates in testing in real world conditions
Usage Note
Note: The term "Real World Test Subject" is defined as just "Subject" in the AI Act. We emphasise its distinction from other subjects, e.g. AI Subject, with this phrasing of the term.
The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems
Usage Note
Misuse should be provided through the Risk extension, along with a taxonomy, e.g. Misuse -> Foreseeable -> Reasonably and NonReasonably Foreseeable; and Misuse -> Unforeseeable Misuse, as sketched after this entry.
Source
AIA (Art. 3(13))
Date Created
2024-04-10
Date Modified
2025-06-19
Contributors
Arthit Suriyawongkul, Delaram Golpayegani, Georg P. Krog, Harshvardhan J. Pandit
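The taxonomy suggested in the usage note above could be sketched as follows, assuming risk:Misuse is the RISK extension's misuse concept and with the ex: terms as illustrative placeholders:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix risk: <https://w3id.org/dpv/risk#> .
@prefix ex: <https://example.com/ns#> .

# Illustrative misuse taxonomy per the usage note; risk:Misuse is an
# assumed concept from the RISK extension
ex:ForeseeableMisuse rdfs:subClassOf risk:Misuse .
ex:ReasonablyForeseeableMisuse rdfs:subClassOf ex:ForeseeableMisuse .
ex:NonReasonablyForeseeableMisuse rdfs:subClassOf ex:ForeseeableMisuse .
ex:UnforeseeableMisuse rdfs:subClassOf risk:Misuse .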
An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.
Risk Level categorising an AI system as not being prohibited, high-risk, or subject to transparency requirements as defined in the AI Act
Usage Note
The minimal risk level does not indicate that there is no risk, but instead that the risk is not sufficient for the system to be regulated under the categorisations of the AI Act
Date Created
2025-04-03
Contributors
Arthit Suriyawongkul, Delaram Golpayegani, Georg P. Krog, Harshvardhan J. Pandit
A risk management system is a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating
A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property
A document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox
An incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person’s health;
(b) a serious and irreversible disruption of the management or operation of critical infrastructure;
(c) the infringement of obligations under Union law intended to protect fundamental rights;
(d) serious harm to property or the environment.
A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed
A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain
Data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service
The temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of the AI Act and it does not qualify as placing the AI system on the market or putting it into service within the meaning of the AI Act, provided that all the conditions laid down in Article 57 or 60 are fulfilled
Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting
DPV uses the following terms from [[RDF]] and [[RDFS]] with their defined meanings:
rdf:type to denote a concept is an instance of another concept
rdfs:Class to denote a concept is a Class or a category
rdfs:subClassOf to specify the concept is a subclass (subtype, sub-category, subset) of another concept
rdf:Property to denote a concept is a property or a relation
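To illustrate these terms in use (with ex: as an illustrative placeholder namespace):

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex: <https://example.com/ns#> .

ex:SystemCategory rdf:type rdfs:Class .                 # a class/category
ex:SpecialCategory rdfs:subClassOf ex:SystemCategory .  # a subclass of another concept
ex:hasCategory rdf:type rdf:Property .                  # a property/relation
ex:MySystem rdf:type ex:SpecialCategory .               # an instance of a concept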
The following external concepts are re-used within DPV:
External
Future Work
Funding Acknowledgements
Funding Sponsors
The DPVCG was established as part of the SPECIAL H2020 Project, which received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 731601 from 2017 to 2019. Continued developments have been funded under the RECITALS Project, funded under the EU's Horizon programme with grant agreement No. 101168490.
Harshvardhan J. Pandit was funded to work on DPV from 2020 to 2022 by the Irish Research Council's Government of Ireland Postdoctoral Fellowship Grant#GOIPD/2020/790.
The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant#13/RC/2106 (2018 to 2020) and Grant#13/RC/2106_P2 (2021 onwards).
Funding Acknowledgements for Contributors
The contributions of Delaram Golpayegani have received funding through the PROTECT ITN Project from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497, in particular through the development of the AI Risk Ontology (AIRO) and the Vocabulary of AI Risks (VAIR), which have been integrated into this extension.
The contributions of Harshvardhan J. Pandit have been made with the financial support of Science Foundation Ireland under Grant Agreement No. 13/RC/2106_P2 at the ADAPT SFI Research Centre; and the AI Accountability Lab (AIAL) which is supported by grants from following groups: the AI Collaborative, an Initiative of the Omidyar Group; Luminate; the Bestseller Foundation; and the John D. and Catherine T. MacArthur Foundation.