The EU-AIAct extension extends the [[[DPV]]] to provide concepts such as systems, purposes, risks, roles, documentation, and assessments based on the [[[AIAct]]]. The canonical URL for the EU-AIAct extension is https://w3id.org/dpv/legal/eu/aiact, the namespace for terms is https://w3id.org/dpv/legal/eu/aiact#, the suggested prefix is eu-aiact, and this document along with its source and releases are available at https://github.com/w3c/dpv.
Contributing: The DPVCG welcomes participation to improve the DPV and associated resources, including expansion or refinement of concepts, requests for information and applications, and addressing open issues. See the contributing guide for further information.
DPV and Related Resources
[[[DPV]]]: is the base/core specification for the 'Data Privacy Vocabulary', which is extended for Personal Data [[PD]], Locations [[LOC]], Risk Management [[RISK]], Technology [[TECH]], and [[AI]]. Specific [[LEGAL]] extensions are also provided which model jurisdiction-specific regulations and concepts. To support understanding and applications of [[DPV]], various guides and resources [[GUIDES]] are provided, including a [[PRIMER]]. A Search Index of all concepts from DPV and extensions is available.
[[DPV]] and related resources are published on GitHub. For a general overview of the Data Protection Vocabularies and Controls Community Group [[DPVCG]], its history, deliverables, and activities, refer to the DPVCG website. For meetings, see the DPVCG calendar.
The peer-reviewed article “Creating A Vocabulary for Data Privacy” presents a historical overview of the DPVCG, and describes the methodology and structure of the DPV along with its creation. Open-access versions of the article are available. The article “Data Privacy Vocabulary (DPV) - Version 2”, accepted for presentation at the 23rd International Semantic Web Conference (ISWC 2024), describes the changes made in DPV v2.
Introduction
The [[[AIAct]]] is a landmark regulation which regulates the use of Artificial Intelligence (AI) technologies. To support its implementation, the [[EU-AIAct]] extension provides relevant concepts which can be used to describe the facts of how AI is being developed and used within use-cases, and to produce the required documentation necessary for establishing or verifying compliance with the [[AIAct]]. Concepts in this extension extend relevant concepts defined in [[[DPV]]], as well as those defined in [[[AI]]] and [[[TECH]]] extensions.
At the moment, the [[EU-AIAct]] extension provides the following concepts and taxonomies as defined or interpreted from the [[AIAct]]:
AI systems, such as general purpose AI systems or those using biometrics
Capabilities, such as to produce deep fakes or perform biometric identification
Risk categorisations, such as defining serious incidents or systemic risks at EU level
Statuses, such as market availability and supply of services
Assessments, such as conformity assessment and fundamental rights impact assessment
Other concepts, such as substantial modification and reasonably foreseeable misuse
Concepts for AI Act
AI System
The concept [=AISystem=] represents AI systems as defined in the AI Act.
eu-aiact:AISystem: A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
eu-aiact:BiometricCategorisationSystem: An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons
eu-aiact:EmotionRecognitionSystem: An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
eu-aiact:GPAISystem: An AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems
eu-aiact:RemoteBiometricIdentificationSystem: An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.
eu-aiact:PostRemoteBiometriIdentificationSystem: A remote biometric identification system other than a ‘real-time’ remote biometric identification system
eu-aiact:RealtimeRemoteBiometriIdentificationSystem: A remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.
eu-aiact:GPAIModel: An AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities
eu-aiact:SafetyComponent: A component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property
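As an illustrative sketch of how these concepts can be used in RDF (not part of the normative vocabulary — the `ex:` namespace and resource names are hypothetical), an AI system and the model it builds on can be declared in Turtle:

```turtle
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

# A customer-support chatbot built on a general-purpose AI model
ex:SupportChatbot a eu-aiact:GPAISystem .
ex:FoundationModel a eu-aiact:GPAIModel .
```

How the model is linked to the system as a component would be expressed using relations from the [[TECH]] and [[AI]] extensions.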
Purposes & Capabilities
These concepts represent the purposes and capabilities defined within the AI-Act. The use of 'capability' here refers to the capability of the technology to produce, perform, or achieve something, which is expressed in the [[TECH]] extension as tech:Capability and extended in the [[AI]] extension as ai:Capability.
eu-aiact:BiometricCategorisation: Capability where assigning natural persons to specific categories happens on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons
eu-aiact:BiometricIdentification: Capability where the automated recognition of physical, physiological, behavioural, and psychological human features are used for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database
eu-aiact:BiometricIdentityVerification: Purpose where AI is used for automated verification of the identity of natural persons by comparing biometric data of an individual to previously provided biometric data (one-to-one verification, including authentication)
eu-aiact:DeepFakeGeneration: Capability to generate or manipulate image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful
eu-aiact:EmotionRecognition: Capability of a technology to identify or infer emotions or intentions of natural persons (on the basis of their biometric data)
eu-aiact:HighImpactCapabilityGPAIModel: ‘High-impact Capabilities’ in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models
Risks
The AI Act has a specific definition of risk as "combination of the probability of an occurrence of harm and the severity of that harm", which is represented using the concept [=Risk=] within this extension. This definition is a more specific form of dpv:Risk, and hence the existing relations can be used to describe its severity and likelihood.
eu-aiact:Risk: The combination of the probability of an occurrence of harm and the severity of that harm
eu-aiact:SystemicRiskForEU: A systemic risk at Union level where it is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain
eu-aiact:RiskManagementSystem: A risk management system is a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating
eu-aiact:SeriousIncident: Any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person or serious damage to a person’s health; (b) a serious and irreversible disruption of the management and operation of critical infrastructure; (ba) breach of obligations under Union law intended to protect fundamental rights; (bb) serious damage to property or the environment.
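Since [=Risk=] specialises dpv:Risk, the existing DPV relations for severity and likelihood apply directly. A minimal sketch follows, with `ex:` as a hypothetical namespace and placeholder severity/likelihood values (the [[RISK]] extension provides ready-made levels):

```turtle
@prefix dpv: <https://w3id.org/dpv#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

# A risk described using the existing DPV severity/likelihood relations
ex:MisidentificationRisk a eu-aiact:Risk ;
    dpv:hasSeverity ex:HighSeverity ;      # placeholder level
    dpv:hasLikelihood ex:LowLikelihood .   # placeholder level
```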
Data
The AI Act defines different categories of data as used to develop and deploy AI systems, such as [=TrainingData=] and [=TestingData=]. It also uses [=Biometric=], which is a special category of personal data under [[GDPR]].
eu-aiact:Biometric: Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data
eu-aiact:TestingData: Data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service.
eu-aiact:TrainingData: Data used for training an AI system through fitting its learnable parameters
eu-aiact:ValidationData: Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset is a separate dataset or part of the training dataset, either as a fixed or variable split.
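As a sketch of how these data categories can be applied to the datasets of a use case (the `ex:` dataset names are hypothetical):

```turtle
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

# Datasets across the AI lifecycle, typed with the Act's data categories
ex:Corpus2023 a eu-aiact:TrainingData .
ex:HeldOutSplit a eu-aiact:ValidationData .
ex:IndependentBenchmark a eu-aiact:TestingData .
```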
Entity Roles
AI roles describe the categorisation of entities based on the role they take in developing, providing or deploying, and using an AI system as defined within the [[AIAct]].
eu-aiact:AIOffice: Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and AI governance. References in this Regulation to the Artificial Intelligence office shall be understood as references to the Commission.
eu-aiact:AIOperator: The provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor.
eu-aiact:AIDeployer: Any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
eu-aiact:AIDistributor: Any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market
eu-aiact:AIImporter: Any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union
eu-aiact:AIProvider: Natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge
eu-aiact:DownstreamAIProvider: A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
eu-aiact:AuthorisedRepresentative: Any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation
eu-aiact:AIProductManufacturer: Entity that manufactures an AI Product
eu-aiact:ConformityAssessmentBody: A body that performs third-party conformity assessment activities, including testing, certification and inspection
eu-aiact:NotifiedBody: A conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation
eu-aiact:NationalCompetentAuthority: The notifying authority and the market surveillance authority. As regards AI systems put into service or used by EU institutions, agencies, offices and bodies, any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor.
eu-aiact:MarketSurveillanceAuthority: The national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020
eu-aiact:NotifyingAuthority: The national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring
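An illustrative sketch of assigning these roles to the entities of a use case (the `ex:` entities are hypothetical):

```turtle
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

# Entities typed with their roles in the AI value chain
ex:AcmeAI a eu-aiact:AIProvider .         # develops and markets the system
ex:CityCouncil a eu-aiact:AIDeployer .    # uses the system under its authority
ex:EUReseller a eu-aiact:AIDistributor .  # makes it available on the Union market
```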
Documentation
The documentation associated with AI, AI Systems, and other processes defined within the AI Act. These are described using tech:Documentation from the [[TECH]] extension.
eu-aiact:CommonSpecification: A set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under the AI Act
eu-aiact:EUDeclarationOfConformity: Document providing the EU declaration of conformity, as required by the AI Act, Article 47
eu-aiact:InstructionForUse: The information provided by the provider to inform the user of, in particular, an AI system’s intended purpose and proper use
eu-aiact:PostMarketMonitoringPlan: Description of the post-market monitoring plan, as required by the AI Act
eu-aiact:PostMarketMonitoringSystemDocumentation: Documentation of the post-market monitoring system in place, as required by the AI Act
eu-aiact:RealWorldTestingPlan: A document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions.
eu-aiact:SandboxPlan: A document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox
eu-aiact:SystemArchitectureDescription: Documentation specifying the software architecture
eu-aiact:TechnicalDocumentation: Technical documentation as specified in Annex IV of the AI Act
eu-aiact:TestReport: A document that includes a summary of test activities and their results
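A sketch of attaching such documentation to a system: the `ex:` resources are hypothetical, and the linking relation is assumed to be a documentation property from the [[TECH]] extension (shown here as tech:hasDocumentation — verify the exact property name in that extension):

```turtle
@prefix tech: <https://w3id.org/dpv/tech#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

ex:SupportChatbot a eu-aiact:AISystem .
ex:AnnexIVDossier a eu-aiact:TechnicalDocumentation .
ex:UserManual a eu-aiact:InstructionForUse .

# Assumed linking relation from the TECH extension
ex:SupportChatbot tech:hasDocumentation ex:AnnexIVDossier , ex:UserManual .
```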
Statuses
Different statuses are described or implied within the [[AIAct]], such as [=MarketAvailabilityStatus=] regarding whether the AI system is available on the market, and [=ServiceSupplyStatus=] regarding whether the AI system has been supplied. These statuses reflect the requirement to describe the state of the AI system and its use, which has implications in terms of requirements and obligations.
eu-aiact:MarketAvailabilityStatus: Status associated with whether the AI system or GPAI Model is available for distribution or use on the market in the course of a commercial activity
eu-aiact:AvailableOnMarket: Status associated with supply of an AI system or a general purpose AI model for distribution or use on the market in the course of a commercial activity, whether in return for payment or free of charge
eu-aiact:AvailableOnEUMarket: Status associated with supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge
eu-aiact:AvailableOnNonEUMarket: Status associated with supply of an AI system or a general purpose AI model for distribution or use on any market except the Union market in the course of a commercial activity, whether in return for payment or free of charge
eu-aiact:NotAvailableOnMarket: Status indicating the AI system or GPAI model is not available on any market, in the EU or otherwise
eu-aiact:MarketPlacementStatus: Status associated with whether the AI system or GPAI Model has been 'first' made available on the market
eu-aiact:PlacedOnMarket: Status indicating the first making available of an AI system or a general purpose AI model on the market
eu-aiact:PlacedOnEUMarket: Status indicating the first making available of an AI system or a general purpose AI model on the Union market
eu-aiact:ServiceSupplyStatus: Status associated with whether the AI system has been supplied for use or put into service or been recalled or withdrawn
eu-aiact:NotPutIntoService: Status associated with an AI system that has not been put into service
eu-aiact:PutIntoService: Status associated with the supply of an AI system for first use directly to the deployer or for own use for its intended purpose
eu-aiact:PutIntoServiceInEU: Status associated with the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose
eu-aiact:RecalledFromService: An AI System that is returned to the provider or taken out of service or use
eu-aiact:WithdrawnFromService: An AI System that is prevented from being made available in the supply chain
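These statuses can be attached to a system using the DPV status relation (assumed here to be dpv:hasStatus, as for other status taxonomies in DPV); a sketch with a hypothetical `ex:` system:

```turtle
@prefix dpv: <https://w3id.org/dpv#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

# An AI system that is on the EU market and has been put into service there
ex:SupportChatbot a eu-aiact:AISystem ;
    dpv:hasStatus eu-aiact:AvailableOnEUMarket , eu-aiact:PutIntoServiceInEU .
```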
Assessments
The [[AIAct]] requires various assessments, such as [=CEMarking=] and [=PostMarketMonitoringSystem=] as part of its obligations. These are described using [[DPV]] concepts such as dpv:CertificationSeal and dpv:ImpactAssessment.
eu-aiact:AIRegulatorySandbox: A concrete and controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.
eu-aiact:AISystemPerformance: The ability of an AI system to achieve its intended purpose
eu-aiact:CEMarking: A marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing
eu-aiact:ConformityAssessment: Assessment of the AI system performance with regard to achieving its intended purpose
eu-aiact:FRIA: An assessment undertaken to evaluate how the system might impact fundamental rights
eu-aiact:HighRiskAIAssessment: An assessment undertaken to determine whether the AI system is classified as high-risk
eu-aiact:InformedConsent: Informed consent means a (real world test) subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate;
eu-aiact:PostMarketMonitoringSystem: All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions
eu-aiact:RealWorldTestSubject: A natural person who participates in testing in real world conditions
eu-aiact:TestingRealWorldConditions: The temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled.
Compliance
The concepts in this section reflect the status of processing operations being in compliance with the AI Act, by extending the ComplianceStatus concept from DPV for the AI Act. This section does not define the requirements for compliance itself. To indicate these statuses, the relation dpv:hasLawfulness can be used.
eu-aiact:AIActLawfulness: Status or state associated with being lawful or legally compliant regarding AI Act
eu-aiact:AIActComplianceUnknown: State where lawfulness or compliance with AI Act is unknown
eu-aiact:AIActCompliant: State of being lawful or legally compliant for AI Act
eu-aiact:AIActNonCompliant: State of being unlawful or legally non-compliant for AI Act
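As the text in this section notes, dpv:hasLawfulness is the relation for indicating these states; a minimal sketch with a hypothetical `ex:` system:

```turtle
@prefix dpv: <https://w3id.org/dpv#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex: <https://example.com/> .  # hypothetical example namespace

# Recording the (claimed) compliance state of an AI system
ex:SupportChatbot a eu-aiact:AISystem ;
    dpv:hasLawfulness eu-aiact:AIActCompliant .
```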
Sectors
The [[AIAct]] implicitly refers to various sectors, such as when stating the high-risk uses of AI systems in Annex III. To enable referring to such implied sectors, this extension provides an ad-hoc taxonomy. This taxonomy may change in the future as sectors are also an important concept in other laws, such as [[EU-NIS2]].
eu-aiact:BorderControlSector: Sector related to border control management
eu-aiact:CriticalInfrastructureSector: Sector related to provision and maintenance of critical services
eu-aiact:DemocraticProcessSector: Sector related to administration of democratic processes
eu-aiact:EducationSector: Sector related to education and vocational training at any level and for any profession
eu-aiact:EmploymentSector: Sector related to employment including workforce management and self-employment
eu-aiact:JusticeSector: Sector related to administration of justice
eu-aiact:LawEnforcementSector: Sector related to law enforcement
eu-aiact:MigrationSector: Sector related to migration management
eu-aiact:PrivateSector: Sector comprising entities that are privately owned
eu-aiact:PublicSector: Sector comprising entities that are publicly owned, e.g. by the government
Misc. Concepts
These concepts currently do not fit within the other stated categories, and are pooled together under a 'misc.' label while the vocabulary is being further developed.
eu-aiact:AILiteracy: Skills, knowledge and understanding that allows providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
eu-aiact:IntendedPurpose: The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation
eu-aiact:PubliclyAccessibleSpace: Any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.
eu-aiact:ReasonablyForeseeableMisuse: The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems
eu-aiact:SubstantialModification: A change to the AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected or results in a modification to the intended purpose for which the AI system has been assessed
High-Risk & Prohibited AI Systems
The AI Act defines specific uses of AI systems as being 'prohibited' (Article 5) or as being 'high-risk' (Article 6). To support the expression of these high-risk AI systems using DPV concepts, the following concepts are being discussed for inclusion in future versions of this vocabulary:
ProhibitedAISystem as a subtype of [=AISystem=] to represent AI systems that have been prohibited as per Article 5.
HighRiskAISystem as a subtype of [=AISystem=] to represent AI systems that have been designated as high-risk as per Article 6.
SectorialHighRiskAISystem as a subtype of HighRiskAISystem to represent AI systems that have been classified as high-risk as per Article 6 and Annex I (sector-specific regulated uses of AI).
HighRiskApplicationAISystem as a subtype of HighRiskAISystem to represent AI systems that have been classified as high-risk as per Article 6 and Annex III (specific applications).
NonHighRiskAISystem as a subtype of [=AISystem=] to represent AI systems that have not been prohibited as per Article 5, and are not designated as high-risk as per Article 6.
High-Risk AI Systems
Below is a list mapping the clauses in Annex III to specific purposes in the DPV vocabularies.
Usage Note
Note: All references to the AI Act refer to the provisional agreement published in February 2024.
Any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation
Status associated with supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge
Status associated with supply of an AI system or a general purpose AI model for distribution or use on the market in the course of a commercial activity, whether in return for payment or free of charge
Status associated with supply of an AI system or a general purpose AI model for distribution or use on any market except the Union market in the course of a commercial activity, whether in return for payment or free of charge
Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data
Capability where assigning natural persons to specific categories happens on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons
An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data unless ancillary to another commercial service and strictly necessary for objective technical reasons
Capability where the automated recognition of physical, physiological, behavioural, and psychological human features are used for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database
Purpose where AI is used for automated verification of the identity of natural persons by comparing biometric data of an individual to previously provided biometric data (one-to-one verification, including authentication)
A marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing
A set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under the AI Act
Capability to generate or manipulate image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful
A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
An AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities
An AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems
‘High-impact Capabilities’ in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models
Informed consent means a (real world test) subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate
The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation
The notifying authority and the market surveillance authority. As regards AI systems put into service or used by EU institutions, agencies, offices and bodies, any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor.
The national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring
All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions
Any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.
A remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.
A document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions.
A natural person who participates in testing in real world conditions
Usage Note
Note: The term "Real World Test Subject" is defined as just "Subject" in the AI Act. It is phrased here as "Real World Test Subject" to distinguish it from other subjects, e.g. AI Subject.
The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems
Usage Note
Misuse should be provided through the Risk extension, along with a taxonomy, e.g. Misuse -> Foreseeable Misuse -> Reasonably Foreseeable and Non-Reasonably Foreseeable Misuse; and Misuse -> Unforeseeable Misuse.
An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.
A risk management system is a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating
A component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property
A document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox
Any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person or serious damage to a person’s health; (b) a serious and irreversible disruption of the management and operation of critical infrastructure; (ba) breach of obligations under Union law intended to protect fundamental rights; (bb) serious damage to property or the environment.
A change to the AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected or results in a modification to the intended purpose for which the AI system has been assessed
A systemic risk at Union level where it is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain
Data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service.
The temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled
Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset is a separate dataset or part of the training dataset, either as a fixed or variable split.
DPV uses the following terms from [[RDF]] and [[RDFS]] with their defined meanings:
rdf:type to denote a concept is an instance of another concept
rdfs:Class to denote a concept is a Class or a category
rdfs:subClassOf to specify the concept is a subclass (subtype, sub-category, subset) of another concept
rdf:Property to denote a concept is a property or a relation
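As an illustration of how these terms are applied, the sketch below (in Turtle) shows a hypothetical use-case declaring its own resources against the extension's namespace. The concept name eu-aiact:GPAISystem and all ex: resources are illustrative assumptions for this example, not normative statements from the specification.

```turtle
@prefix rdf:      <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:     <http://www.w3.org/2000/01/rdf-schema#> .
@prefix eu-aiact: <https://w3id.org/dpv/legal/eu/aiact#> .
@prefix ex:       <https://example.com/ns#> .

# rdf:type asserts that a use-case resource is an instance of a concept
# (eu-aiact:GPAISystem is used here as an illustrative concept name)
ex:OurChatbot rdf:type eu-aiact:GPAISystem .

# rdfs:subClassOf declares a use-case-specific subtype of a concept
ex:MedicalTriageSystem rdf:type rdfs:Class ;
    rdfs:subClassOf eu-aiact:GPAISystem .
```

Under RDFS semantics, the subclass assertion entails that any instance of ex:MedicalTriageSystem is also an instance of the extension concept it specialises.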
The following external concepts are re-used within DPV:
External
Contributors
The following people have contributed to this vocabulary. The names are ordered alphabetically. The affiliations are informative and do not represent formal endorsements. Affiliations may be outdated. The list is generated automatically from the contributors listed for defined concepts.
Beatriz Esteves (IDLab, IMEC, Ghent University)
Delaram Golpayegani (ADAPT Centre, Trinity College Dublin)
Harshvardhan J. Pandit (ADAPT Centre, Dublin City University)
Funding Acknowledgements
Funding Sponsors
The DPVCG was established as part of the SPECIAL H2020 Project, which received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 731601 from 2017 to 2019.
Harshvardhan J. Pandit was funded to work on DPV from 2020 to 2022 by the Irish Research Council's Government of Ireland Postdoctoral Fellowship Grant#GOIPD/2020/790.
The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant#13/RC/2106 (2018 to 2020) and Grant#13/RC/2106_P2 (2021 onwards).
Funding Acknowledgements for Contributors
The contributions of Delaram Golpayegani have received funding through the PROTECT ITN Project from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497.
The contributions of Harshvardhan J. Pandit have been made with the financial support of Science Foundation Ireland under Grant Agreement No. 13/RC/2106_P2 at the ADAPT SFI Research Centre.