
CCLab – The AI meteor has exploded


Generative artificial intelligence has hit the business world like a meteor, threatening dinosaur-like companies with extinction. However, rapid adaptation is made difficult by the fact that the technology that will soon decisively influence competitiveness is accompanied by new types of security risks.

After two and a half years of preparatory work, the Presidency of the European Council and the negotiating team of the European Parliament reached a provisional agreement at the beginning of December on the proposal for harmonized rules on artificial intelligence, the so-called AI Act. The purpose of the draft regulation is to ensure the safety of artificial intelligence systems placed on the European market and used in the EU, as well as their application in a way that respects fundamental rights and EU values.

We asked Tamás Ferenc Molnár, the founder and CEO of CCLab, known for its cyber security testing, inspection and certification (TIC) services, about the expected impact of the regulation, which is taking an increasingly decisive form.

Computerworld: According to the legislator, the first law of its kind in the world will also stimulate AI-related investment and innovation in Europe. How do you see this at this stage of the process, through the eyes of a cybersecurity compliance assessor?

Tamás Ferenc Molnár: We welcome the fact that the EU has chosen a risk-based approach for its upcoming AI law, and that placing the highest-risk systems on the market will be subject to strict validation. This is reminiscent of the mature regulations under which we already work when evaluating medical devices or, for example, biometric identification systems used for digital signatures.

Generative artificial intelligence has exploded into the business world with such force that its impact can be compared to that of the asteroid that wiped out the dinosaurs: the technology decisively affects market competitiveness, yet it develops so quickly that organizations have no time for gradual, evolutionary adaptation. It is vital that companies begin searching for and seizing the new opportunities that accompany this overwhelming change without delay, but this is made difficult, among other things, by the new types of risk that artificial intelligence also brings with it.

Fresh regulation is needed to manage these risks, but its impact will be global and, alongside the advantages, it may also create competitive disadvantages. The GDPR, which raised similar dilemmas, ultimately brought positive experiences: after it entered into force, other economic regions of the world introduced similar rules, proving that worldwide problems can be handled through global consensus. However, the impact of AI is greater, more complex and more sudden than that of data protection. The practice of the near future will show what effect AI regulation, which is being prepared in the United States and the United Kingdom in addition to the EU, will achieve.


Of course, there will always be market players who move to markets where the use of the technology is less strictly regulated, as well as cybercriminals who do not care about the law and who already actively use artificial intelligence technologies, including generative AI, in their phishing and other campaigns.

CW: Cyber defense vendors are also building these capabilities into their security solutions. But what about users’ AI skills?

Tamás Ferenc Molnár: Since artificial intelligence learns from data, both external and internal attackers can damage the targeted company by manipulating that data. A characteristic of the new risks accompanying AI is that intent is not strictly necessary: in the absence of sufficient competence or attention, employees themselves can make mistakes when training the models, with similarly serious consequences.

All of this affects detection and prevention: companies must develop a data management environment and practices that help reduce such risks. As part of their defenses, they must train the colleagues who work with AI solutions and, more generally, develop the company’s data culture and security awareness.
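To make the point concrete, here is a minimal, purely illustrative sketch (not CCLab’s methodology, and the function name and threshold are hypothetical) of the kind of automated data-quality control such practices might include: a simple statistical screen that flags anomalous rows in a numeric training set before the data reaches a model. Real defenses against data poisoning are far broader, covering provenance, access control and human review, but even a basic check can surface accidental or malicious outliers.

```python
# Illustrative sketch only: flag training rows whose features deviate
# strongly from the rest of the dataset, so a human can review them
# before the data is used to train or fine-tune a model.
import numpy as np

def flag_outlier_rows(X: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return a boolean mask marking rows where any feature lies more than
    `z_threshold` standard deviations from that feature's mean."""
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12           # avoid division by zero
    z_scores = np.abs((X - mean) / std)   # per-feature z-scores
    return (z_scores > z_threshold).any(axis=1)

# Example: mostly well-behaved data with one manipulated record injected.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X[42] = [50, 50, 50, 50, 50]              # the "poisoned" row

suspect = flag_outlier_rows(X)
print(f"{suspect.sum()} suspicious row(s) flagged for review")
```

A check like this is only one layer; the interviewee’s broader point is that tooling must be paired with staff training and an organization-wide data culture.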

CW: How will the EU AI law affect TIC services?

Tamás Ferenc Molnár: We plan to help our customers prepare AI-enabled products for testing, inspection and certification with training materials based on the implementing regulations and the relevant standards. We intend to release genuine educational material that offers more than basic information; it is expected towards the end of next year or the beginning of 2025, as the legislative and standardization process progresses. We will also introduce new AI capabilities to automate our own services, and our prototypes promise a further significant increase in efficiency in this area.

We are also preparing for the finalization of another piece of legislation, the European cybersecurity certification scheme based on Common Criteria (EUCC), the first scheme developed within the certification framework of the EU cybersecurity legislation, whose adoption has been postponed from this year to 2024 because of the amendment proposals received during its preparation. As members of various professional organizations, the QIMA group and CCLab also take part in this work, so that the framework is prepared to the appropriate quality and promotes the safe use and operation of critical and other categories of digital products and services that shape the lives of both companies and the public.

Mr.Mario
I am a tech enthusiast, cinema lover and news follower, and I love staying up to date with the latest tech trends and developments. With a passion for cybersecurity, I continuously seek new knowledge and enjoy learning new things.
