The Prohibited Artificial Intelligence Practice

dc.contributor.authorBulgakova, Daria
dc.date.accessioned2026-03-02T15:13:02Z
dc.date.issued2023
dc.descriptionThe Artificial Intelligence (AI) Act is ‘a good moment to take stock of what it can do and what as individuals and as a society we want it to do’. According to Article 5 para 1 point (a) of the AI Act, ‘the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm’ is prohibited. Although the AI Act defines the concept of an AI system, it fails to clarify what exactly is prohibited and instead creates a generalised law-making ‘playground’ around four criteria that lack interpretation: (1) ‘subliminal techniques’, (2) ‘beyond a person’s consciousness’, (3) ‘material distortion of a person’s behaviour’, and (4) ‘psychological harm’.
dc.description.abstractThe research strives to provide a comprehensive understanding of the regulation of Artificial Intelligence (AI), known as the AI Act of the European Union, with a specific focus on the regulatory challenges related to the prohibition of AI systems that deploy subliminal techniques. To achieve this, the author proposes the perspectives of the metaverse, used to enhance the user experience, and of biometric psychography, to address eye-tracking-based models of reality. However, the current AI Act is not yet prepared to address biometrics: it merely repeats the GDPR, leaving a free hand to AI market growth. Regardless, the author offers four key contributions. Firstly, the research charts a course on the prohibition of AI systems, contrasting it with the pupillometry market, which strives for the opposite course. Secondly, it clarifies the notion of subliminal techniques beyond a person’s consciousness under Article 5 para 1 point (a), with reference to the ‘vulnerability’ concern under point (b). Thirdly, it brings clarity to the ‘psychological harm’ criterion through an assessment of case law. Finally, it proposes filling the gaps in privacy protection, especially where an AI system initially appears friendly but later turns to tracking. To support this outcome, the manuscript refers to biometric psychography, expanding the concept of biometric data for AI systems.
dc.identifier.citationBulgakova, D., (2023). The Prohibited Artificial Intelligence Practice. Теорія та практика судової експертизи і криміналістики. Вип. 3 (32). С. 89—112. DOI: 10.32353/khrife.3.2023.06.
dc.identifier.orcidhttps://orcid.org/0000-0002-8640-3622
dc.identifier.urihttps://dspace.nncise.org.ua/handle/123456789/349
dc.language.isoen
dc.publisherНаціональний науковий центр «Інститут судових експертиз ім. Засл. проф. М.С. Бокаріуса»
dc.subjectsubliminal techniques
dc.subjectconsciousness
dc.subjectmaterial distortion
dc.subjectpsychological harm
dc.subjectbiometric psychography
dc.titleThe Prohibited Artificial Intelligence Practice
dc.typeArticle

Files

Name: 574-Article Text-1186-1-10-20231126.pdf
Size: 433.37 KB
Format: Adobe Portable Document Format
