It is a source of pride for us to read not only that the AI Act has recently been approved and is to be finalized by the Council (https://www.consilium.europa.eu/it/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/ ), but also that it includes the foundation of an AI Office.
As part of DKTS's policy for innovation, we recall a contribution presented in 2021 to the European Parliament by Gianna Martinengo as a member of INAB, the International Advisory Board for AI: https://www.europarl.europa.eu/stoa/en/about/international-advisory-board
STOA International Advisory Board, 24/06/2021, 15:00-16:30 CET, contribution of Gianna Martinengo
In short (two topics to be presented orally during the meeting, max 100 words):
- AI (and software) systems as individual and collective drugs: AI systems are not machines, but human individual and collective (learning) PROSTHESES. They should be legally treated like drugs, simply because the benefits and dangers fall on the human recipients of their services, not on the system's behavior resulting from its anatomy or physiology.
- New software life cycle: Assessment of any software should be delegated to a European Authority, based on regulatory sandboxes and evidence-based experience, as clinical trials are for the EMA. Application developers accepting this centralized approval should be supported by incentives; their certificates should ensure priority in all contexts within the Union.
Explanation, should questions arise during the meeting:
Most of the EP text asks to check the algorithms or the systems: how they are made (anatomy) and what they do (function, physiology). I believe that one must look, above all, at the effects they have on humans, which depend on many parameters, not only on the systems' structure (anatomy) or function (physiology). This is very much the case for drugs: it is not enough to know the formula (structure) or the function (active principle); what matters is the effect on REAL patients during trials, possibly long-lasting ones, which account for age, sex, and other parameters. In short, the context of use, not just the production. Today any software is an interactive system that acquires and produces knowledge in continuous evolution, as a result of the interaction of human agents (people) and artificial agents (systems): collective intelligence.
In extenso (for a text to be sent to the EP):
1. AI and software systems as individual and collective drugs: AI systems (like all computer-based systems) are NOT machines, but human individual and collective (learning) PROSTHESES. Machines traditionally substitute, improve, or integrate human Energy in energy-intensive processes. The classical notion of a machine concerns artifacts that perform precise tasks and have clear and fixed goals, plans, and behavior; their learning skills are very limited or absent. AI systems (and computer-based systems in general) substitute, improve, or integrate human Information-processing power (and consequently also energy), individually and collectively, including learning from experience. They are meta-machines, in particular recent Deep Learning applications. They are very similar to drugs, which substitute, improve, or integrate, mostly at the cellular level, human healing processes, a mixture of Information and Energy, and which also learn from experience. The benefits and dangers of AI systems, as of other computer-based systems, are not to be attributed to their anatomy (the structure of the systems) or their physiology (the functions they are supposed to realize), but to the effects they have, or may have, on the interacting humans who use the systems' services.
Any attempt to regulate these systems will probably be useless, if not simply impossible given their complexity, unless it is grounded in their effects on users rather than in their structure or functionalities.
They should therefore be legally treated like individual and collective drugs, not like machines: with assessment delegated to a technical Agency/Authority at the European level (as is the case for the EMA), based on "clinical trials", i.e. regulatory sandboxes and evidence-based experience. Once the products or services proposed on the market by producers are approved, no other burden should be imposed on users.
The notion of "risk" in AI systems is very similar to the same notion for drugs: it is never a "once and for all" concept but a dynamic one, to be continuously submitted to empirical assessment by expert committees, not stated statically by stakeholders on the basis of a complex set of rules.
2. AI and software regulation has to be delegated to a European Authority: The regulation of AI systems should cover any other interaction-based commercial software, such as recommender systems, whether or not it is based on AI. The acquisition of data from users is likewise an issue that depends entirely on the context of use, not just on the structure or function of the system. The regulation should also apply to military applications (delete Article 2, paragraph 3), because ethics does not exclude military initiatives. Many specific proposed rules, such as "For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons", should, where appropriate, be imposed case by case by the Authority, not required in the abstract without evaluation of the context.
Most if not all of the considerable and valuable work done for this regulatory initiative may be reused as a guideline for a European Agency. Application developers who submit their new products and services for this centralized approval should be strongly supported by incentives, and their certificates of conformity should guarantee priority in all possible contexts of use within the Union.