The risk is high if there is no AI policy


The incorrect use of artificial intelligence (AI) can be a serious source of danger for companies in terms of data protection, copyright and consumer protection. Among Hungarian organizations, the creation of workplace AI policies is still in its infancy, even though they are greatly needed.

This article was published in the March 13, 2024 issue of ComputerTrends magazine.

Assistive services using artificial intelligence, such as various chatbots, are spreading explosively in the workplace. Answering the professional and ethical questions associated with the use of AI can be a challenge for many companies. An important means of preventing problems is the development of a company policy for the use of AI. In drafting such a policy, many questions arise whose answers are not self-evident, yet handling them correctly is essential for the company's safe business operations. Márk Szűcs, an expert at BDO Legal Jókay Law Firm, assists the organizations concerned by answering some important questions.

ComputerTrends: What should be the first step when a company faces the problem of using artificial intelligence? When is it advisable or essential to act?
Márk Szűcs: The answer depends largely on the activities of the given company, as the services it provides basically determine which AI services its employees use. Take the most common chatbot, ChatGPT, for example. If the employees of an average company communicate with customers and do research work in order to provide the right answers, it makes sense for them to use such AI services. Once a company realizes that this can be beneficial, it is worth addressing the issue from then on. In any case, workers will certainly start using chatbots on their own to make their work easier, so it is important that they receive appropriate guidance from the company on how to use these platforms correctly and safely. The incorrect and unregulated use of artificial intelligence can endanger companies' compliance with data protection, copyright and even consumer protection legislation.

CT: Whose task is it to create the AI policy?
Márk Szűcs: The employer is entitled to issue work-related instructions, so it is the employer's responsibility to ensure that the policy is created. Although the employer knows the workings of its own company and work organization, it is unlikely to have in-depth knowledge of either the technical or the legal aspects of artificial intelligence. To create a truly effective policy, it is therefore advisable to involve consultants.

CT: How can compliance with the rules be checked?
Márk Szűcs: This is not a question specific to the AI policy, but one concerning the supervision of employees' work in general. But let's take ChatGPT as an example. How can the employer check how an employee uses ChatGPT? It can observe, for example, what kind of e-mails the employee sends out and whether the work done with them is of good quality. Just as with potential errors in a Word document, the employee must also check the correctness of the results of translation or research work performed with AI services. The employer must make employees aware of exactly what they are responsible for and how they should handle that responsibility.

CT: Isn’t there a risk that the employer’s activities violate the GDPR during the inspection?
Márk Szűcs: The situation is the same when supervising the use of artificial intelligence as with any other employer monitoring. When observing employees, the employer must act strictly in accordance with data protection regulations; a policy on the use of artificial intelligence cannot by itself override them. It is therefore not permitted to look into private e-mails and telephones unlawfully, or, where applicable, to carry out unlawful camera surveillance. It is clear that the internal rules for the use of artificial intelligence must be aligned with the company's internal data protection regulations.

CT: Nobody can predict exactly how the development and use of artificial intelligence will evolve. Based on our current knowledge, how often will these policies need to be revised?
Márk Szűcs: This is an interesting question, because development in the field of artificial intelligence is rapid. Existing features evolve at a very fast pace, and new ones are constantly being added. This makes things difficult for both employees and employers, as it is very hard to keep up with such a pace of development. It is essential that, in the workplace AI policy, the employer continuously provides employees with up-to-date information on what AI tools and options are available and how they work. Of course, this is difficult precisely because of the rapid development. That is why, as with all internal workplace regulations, it is advisable to monitor and review the AI policy continuously. We recommend a revision at least every six months.

CT: All rules can be broken. What tools are available to the employer if it is convinced that the AI policy has been violated?
Márk Szűcs: In the case of the AI policy, as with all workplace regulations, the employer can resort to the general tools of labor law. Of course, the first step is always to determine the severity of the breach; only then can the appropriate disciplinary tool be selected. It is very important that employees are informed in advance about the possible sanctions.

CT: What about the reverse case, when an employee feels that their rights are violated in connection with the use of artificial intelligence?
Márk Szűcs: Of course, that is not excluded either. This could be the case, for example, if the employer gives the employee instructions that force them to break the rules when using AI platforms. In such cases the employee also has the opportunity to take action against the employer's instructions under the rules of labor law.

CT: Where do Hungarian companies stand in creating policies related to artificial intelligence?
Márk Szűcs: Let me start by saying that in Western Europe and the United States the topic is already being addressed very actively, and companies have started to create AI policies. These not only highlight the potential risks of AI, but also establish a clear and transparent decision-making and responsibility framework for the entire organization. In Hungary, we have not seen many examples of this, nor have we seen employers recognize its importance. Here, everyone is currently focusing on the European Union's AI Act and watching what rules are formulated there. However, the AI Act addresses the higher tier of artificial intelligence platforms: the services and software that affect the higher-risk areas. So-called simpler services such as chatbots are not really dealt with in depth by the AI Act. This may explain why the attention of many market participants slips past the fact that these "ordinary" services should also be regulated. At the same time, it is clearly visible that certain AI tools, such as chatbots, are used very actively by some companies. Employers make a mistake if they lose sight of the fact that it is their task to guide the proper use of AI tools in the workplace.

What should the AI policy contain?

The content of the AI policy, as with other internal workplace regulations, is basically determined by the internal operating mechanisms and structure of the given organization, as well as by the special characteristics and expectations that the company itself formulates regarding its operation. At the same time, it is possible to identify issues and areas that any AI policy must address. These may include:
– general characteristics of AI applications, possible application types and categories;
– legal and company regulations that must be taken into account in connection with the application of AI;
– possible legal risks, liability issues;
– data protection and security measures that the company implements in connection with the use of AI;
– in what cases, in what ways and what types of AI applications may be used at work;
– what are the cases and circumstances when the use of AI is prohibited;
– limitations and risks inherent in the use of AI at work;
– the question of responsibility: employees must be aware that they are responsible for any content or service resulting from their use of AI, and are therefore obliged to check it in every case;
– naming the procedures and tools with which the employer can supervise the employees’ use of AI;
– the disciplinary consequences of violating the regulations and the relevant procedures;
– the schedule of internal trainings promoting employees' AI awareness.
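To make the checklist above concrete, here is a minimal sketch of how a company could encode it as a machine-checkable policy template, so a draft policy can be audited for missing sections before legal review. The section names and the `missing_sections` helper are illustrative assumptions, not part of the article or of any real compliance tool.

```python
# Illustrative sketch: encode the AI-policy checklist as a template and
# report which required sections a draft policy has not yet filled in.
# Section names below are hypothetical labels for the bullet points above.

REQUIRED_SECTIONS = [
    "application_types",          # characteristics and categories of AI apps
    "applicable_regulations",     # legal and company rules to observe
    "legal_risks_and_liability",  # possible legal risks, liability issues
    "data_protection_measures",   # data protection and security measures
    "permitted_uses",             # when and how AI apps may be used
    "prohibited_uses",            # cases where AI use is forbidden
    "limitations_and_risks",      # limits and risks of AI at work
    "employee_responsibility",    # duty to check all AI-generated output
    "supervision_procedures",     # how the employer supervises AI use
    "disciplinary_consequences",  # sanctions and relevant procedures
    "training_schedule",          # internal AI-awareness trainings
]


def missing_sections(policy: dict) -> list:
    """Return the required checklist items absent or empty in a draft."""
    return [s for s in REQUIRED_SECTIONS if not policy.get(s)]


# A draft policy that covers only a few of the required areas.
draft = {
    "application_types": "Chatbots (e.g. ChatGPT), translation tools",
    "permitted_uses": "Drafting customer correspondence, research work",
    "prohibited_uses": "Entering personal or confidential company data",
}

gaps = missing_sections(draft)  # sections the draft still has to address
```

A check like this does not replace legal review, but it gives the employer a simple way to verify that every area named in the checklist is at least addressed before the policy goes to consultants.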



