The Nov. 6 digital edition of Rzeczpospolita Daily featured my article urging people not to set up a public money-burner in the form of the AI Commission and to transfer the authority to oversee high-risk artificial intelligence to the Polish Data Protection Authority (PUODO).
On November 20, a polemical text appeared there, written by attorney Przemyslaw Sotowski, formerly of the Polish Ministry of Digitization, in which Mr. Sotowski advances a number of interesting theses. Theses that I cannot leave unanswered, because they teeter between demagoguery and disinformation.
One difficulty in polemicizing with the author is that, of his dozen or so theses, only the thesis that the Commission could bear a different name corresponds to reality, and none of them comes with a justification. So much so that in one case he falls back on the prevailing argument of "The Obvious." As if the author were more a politician than a lawyer. On page 51 of our bestselling Little Book on Drafting Pleadings, such a style of writing is called "biblical" and described with these words: "We are not superhumanly infallible Polish prosecutors or politicians to indulge in arguments from The Obvious..., unless we have nothing factual to use."
So let the motto of my rejoinder be what Ronald Reagan said on August 12, 1986, "The nine most terrifying words in the English language are 'I'm from the government and I'm here to help'".
The polemic raises the following claims: (1) that it would be better for the AI Commission to be called the AI Supervision Commission rather than the AI Development and Safety Commission; (2) that a supervisory authority that does not punish but leads the market is better (the leadership role of supervision?); (3) that we will find the Artificial Intelligence Development Fund under the Christmas tree; (4) that the new authority is an opportunity for competence (this is a diplomatically expressed opinion about the competence of the PUODO; of course, we are talking about USSR-style diplomacy - "the Embassy of the Soviet Union accepts the blame for the third farting of Her Majesty"); (5) that this new AI Commission will seek out other breakthrough technologies to regulate; (6) that the PUODO protects people's subjective rights while the Commission is supposed to exercise "market surveillance," and that this is supposedly something different; (7) that there are no studies showing that artificial intelligence processes personal data; (8) that a huge part of the data processed by artificial intelligence is not personal data at all, but quite the opposite; (9) that no one wants personal data; (10) that because no one wants personal data, privacy protection techniques are developing; (11) that artificial intelligence mainly processes personal data for makeup advice (yes, that is what the author claims); (12) that it is obvious that handing supervision of artificial intelligence to the PUODO would "transfer the same practices" from personal data to artificial intelligence; (13) that privacy and data protection rights are a small part of the EU Charter of Fundamental Rights as reflected in the EU Artificial Intelligence Act (AIA); and (14) that privacy rights are not better than other rights.
The key statement, however, is the following formulation in the polemic: "This shows, however, that the author [meaning Maciej Gawronski] mistakenly equates market surveillance with the protection of fundamental rights."
Well, I am afraid that it is the author of the polemic who does not understand what the Artificial Intelligence Act is all about.
Yes, protection of fundamental rights. The essence of overseeing the use of artificial intelligence on the EU market is precisely to protect the fundamental rights of individuals from the risks posed to them by the use of these technologies by EU market participants.
The AIA regulates the use of HIGH-RISK artificial intelligence. It is about risk to FUNDAMENTAL RIGHTS. Whose fundamental rights? The fundamental rights of individuals. And that is exactly what GDPR is about. GDPR is supposed to protect the rights and freedoms of individuals and oversee risks to those rights and freedoms. The AIA is supposed to create supervision of AI risks to fundamental rights. What is the difference? From this perspective, none. Supervision is not supposed to focus on which technology provider will make more money on our market, nor on whether competition is fair. Supervision is supposed to protect individuals from the risks to their fundamental rights arising from the use of high-risk artificial intelligence.
Risks to individuals. The misunderstanding of the AIA's objectives is apparently related to a misunderstanding of GDPR itself. To learn about GDPR, I recommend our bestselling "Guide to the GDPR," the most popular book on GDPR in Poland. GDPR is not about privacy risks at all. It is about (all) risks to human rights and freedoms stemming from privacy violations. Those risks may include personal injury, death, catastrophe, loss of a job, manipulation, restriction of civil rights, surveillance, fraud, denial of medical benefits or reimbursement, and so on. And these are the risks that the EU Artificial Intelligence Act is all about. Even a vaguely attentive reader will notice the AIA's invocations of GDPR where it talks about risk and impact assessment.
Yes, risk management is much older than GDPR, just as I am much older than the author of the polemic. What is new about risk management from the perspective of GDPR, however, is that it assesses risk to a third party - individuals - rather than to an organization, and assesses that risk in absolute terms (very vaguely, of course, but that is a different issue).
Compliance system. GDPR is based on the concept of distributed, market-dedicated risk management and mainly ex-post supervision of how those risks are handled. However, GDPR itself already includes elements of the preventive system on which the AIA is based: codes of conduct and certification. PUODO has already made significant strides in both. Of all the areas regulated by the AIA, data protection is the only one (aside from elements of non-standardized critical infrastructure) that still requires the creation of a compliance management system (risk assessment, quality management and certification). The other areas (various machines, toys, radio devices, etc.) already have their compliance management systems. PUODO is setting up such compliance systems right now, based on its prerogatives under GDPR, experience from GDPR application practice and acquired market knowledge. The Edison who came up with the idea of entrusting the creation of alternative artificial intelligence compliance systems in data protection to a body competing with PUODO should receive a Darwin Award.
Yes, a forest is not the same as a forest of crosses (a reference to the popular Polish comedy "Nothing Funny"/"Nic śmiesznego"). However, the point of this analogy cuts the other way. The regulation of artificial intelligence is aimed precisely at preventing the forest from turning into a forest of crosses. And GDPR, to quote Jan Nowak, former President of PUODO, is not an act on personal data but an act on the protection of individuals. Likewise, the AIA is not an act to protect corporations developing or using artificial intelligence, but an act to protect individuals. To see this, however, you have to take off your Scrooge McDuck glasses.
At the same time, it is important to realize that, as the report "We have no science of safe AI" by David Janků, Max Reddel, Roman Yampolskiy and Jason Hausenloy emphatically demonstrates, there are no good safety standards for artificial intelligence. The system created by the AIA is a set of procedures not yet filled with substance, and it is in the area of personal data that this is most dangerous. Why should supervision go to a body with no experience and no understanding that this is precisely about the protection of fundamental rights, rather than to a body that specializes in this very area?
Does AI process much personal data? The author of the polemic argues that there are many AI systems that do not process personal data, and that this is why privacy techniques such as synthetic data, differential privacy and "confidential computing" are developing. I am afraid a contradiction is hidden in these two sentences: the second excludes the first. There would be no need for privacy techniques if these mythical, supposedly non-personal-data-processing systems really did not process personal data.
The lovely "no research" argument. The author of the polemic goes on to claim that "there is no research" showing that AI processes personal data, makeup advice aside. This level of discussion is the sophistry familiar from the movie "Thank You for Smoking." When you add the nonsense that no one wants or likes personal data, the level drops with the bang of a smashed piano.
The author argues that personal data is most often processed where the specifics of the product or service and the customer's needs require it - for example, to match clothes or makeup to a particular person, with their knowledge and consent. Well, if this is the state of awareness and knowledge of the operation and modern applications of artificial intelligence throughout the Polish Ministry of Digitization and the AI Working Group, then I am not surprised that they wanted to amend the AIA and parliamentary acts by decree. The only thing that worries me is the high proportion of lawyers in this group who deal with data protection on a daily basis.
The claim that AI processes personal data mainly for makeup matching is frivolous. Above all, it tells us that those proposing the creation of the AI Commission have not read the EU Artificial Intelligence Act. Contrary to the polemic author's suggestions, AIA does not regulate makeup selection by means of AI.
Netflix makes movies and academics write books about how Silicon Valley is built on collecting all kinds of personal data. Steve Jobs' famous remark, "Silicon Valley is not monolithic. We've always had a very different view of privacy than some of our colleagues in the Valley," only confirms this. Well, but as you can see, there are those who continue to cave on this issue. And it is not, by any means, a Platonic cave.
Well, but let us get back to that research which supposedly does not exist. Start by reviewing the first technology company beginning with "A": you will find mostly examples of AI applied to personal data processing - in marketing, customer experience, banking, people management, data search, entertainment, and so on.
Ask Google for some AI usage examples and, of the 22 listed, only three (agriculture, robotics and astronomy) are not directly related to personal data processing. The rest are: e-commerce, education, lifestyle, navigation, natural speech, image recognition, facial recognition, human resources, health, gaming, automobiles, social media, marketing, chatbots, finance, cyber security, travel and transportation, and entertainment.¹ But, but...
Examination of the Scripture. The spirit of research should prompt the author of the polemic to first examine the Scripture, or in this case the AIA itself. So what does this EU Regulation regulate? The Artificial Intelligence Act regulates AI that may pose a risk to NATURAL PERSONS (NOT CORPORATIONS). The high-risk AI regulated by the AIA comprises:
- First, systems whose use is prohibited because of the threat they pose to the rights and freedoms of individuals (Article 5);
- Second, systems listed in Annex I that pose physical danger TO PERSONS. Annex I of the AIA lists: machinery, toys, recreational watercraft, cranes, protective systems for explosive atmospheres, radio equipment, pressure equipment, cableways, personal protective equipment, appliances burning gaseous fuels, medical devices, civil aviation, vehicles, marine equipment, railway interoperability, motor vehicles.
All of these areas already have compulsory compliance management systems in place;
- Third, systems listed in Annex III that threaten the rights and freedoms of individuals. Annex III lists: biometrics (emotion recognition, remote identification, profiling of individuals); critical infrastructure (digital infrastructure, traffic, utilities); education and vocational training (school admissions, assessment of academic progress, examinations, detection of cheating); employment, management of workers and access to self-employment (recruitment, promotion decisions, dismissal, job evaluation); assessment by the administration of individuals' eligibility for health and other public benefits, as well as requests for reimbursement of such benefits; assessment of creditworthiness; assessment of insurance risks and prices; assessment of the priority of emergency calls; law enforcement (assessment of the risk of victimization, lie detection, assessment of evidence, assessment of the likelihood of recidivism, profiling of criminals); migration (polygraphs, assessment of risks posed by migrants, including health risks, processing of asylum applications, identification of migrants); the judiciary; and influencing the decisions of voters.
Of this several-dozen-item enumeration, even traffic and utility delivery involve significant processing of personal data, and all the rest involve processing of personal data outright.
- Fourth, deep fakes (which can both mislead individuals and use individuals' data);
- Fifth, general-purpose models, which can get up to basically any kind of mischief while still peeping at and profiling their users (i.e., processing their personal data as well as collecting other people's personal data), which of course they do.
So if a member of the AI Working Group at the Polish Ministry of Digitization claims that artificial intelligence processes personal data only to help with makeup, and only with the user's consent, then I am not surprised that the Ministry preferred to set up its own body where such rantings can be gleefully repeated.
What's it all about, what's it all about... If you don't know what it's about, it's about money. From leaked information on the work on the "implementation of the AIA" in Poland, as well as from plain common sense, it is clear that the pushback is all about crippling supervision and turning it into a handout of public money to perpetually hungry tech companies.
The concept of a "supervision chief" who, like Ulrich von Jungingen (the Grand Master of the Teutonic Order who led a charge against the Poles and Lithuanians at the Battle of Grunwald in 1410, the largest medieval battle), will lead an onslaught of "Polish" artificial intelligence around the world, pelting technology companies with billions from the Artificial Intelligence Fund like that sultan pelting his eunuchs with challah and nuts, sounds like something straight out of a comedy. We will have an AI Commission to the best of our ability. One that will put lipstick on the mouth of artificial intelligence for 7 million a year.
Attorney Sotowski raises the argument that public opinion will defend the AI Commission from degeneration. The media and public opinion essentially have the kind of influence captured in this joke: "Are you stealing from the budget?" "Yes." "Why, aren't you afraid that someone will find out?" "Well, you just found out. And what?"
Additionally, according to the author of the polemic, the new AI Commission is supposed to be a modern competence centre "and a starting point for noticing more disruptive technologies." Is this like the Polish Ministry of Labor noticing AI and wanting to protect professions from it? That would start with the lift attendant.
But to justify the rise of the AI Commission on the grounds that it will seek out new areas to regulate? Space elevators, perhaps? Helium mining on the Moon? Genetic recombination? Quantum encryption? The authors of the bill should pitch this patent for seeking out new areas of regulation to Ursula von der Leyen, were she not already a master of this game.
But wait, wait. This is not about small change in the form of thirty million zlotys (EUR 7M); it is about that fabled billion in the Artificial Intelligence Fund which, ladies and gentlemen, the Polish Development Fund is supposed to create together with the Ministry of Digitization, the Ministry of Science, the Ministry of National Defense, the Ministry of Science and Higher Education, the National Research and Development Center, the National Science Center and Bank Gospodarstwa Krajowego on the basis of a letter of intent signed in November. Nota bene, with this billion (which, by the way, is not there) officials can at most throw dust in someone's eyes and hope that it will drip back to them one way or another. No amount of money will offset the financial advantage of Silicon Valley, or of the US in general. And the argument that we have a fictitious supervisor here will at best erode confidence in the Polish state as an institution. Let us leave that to the Baltic states, who have mastered that game anyway. In general, the whole concept of siphoning off public money mainly serves those who distribute it. Stefan Kawalec and Ernest Pytlarczyk write about the uselessness of fiscal transfers (well, except to those who manage them) in their book "The Paradox of the Euro. How to Get Out of the Common Currency Trap," pages 120 ff., concluding: "...fiscal transfers alone do not at all accelerate the economic development of poorer countries and regions."
All the more so because, whatever this Commission tells itself and does, PUODO can and will still issue decisions concerning the use of any technology that processes personal data. So there is a real Machiavelli of departmental Poland behind the concept of producing such a PR body as this AI Commission is supposed to be.
Fart in the elevator. There is an English saying, "fart in the elevator": to say something unpleasant that spoils the atmosphere of mutual back-patting but is, unfortunately, true. And I am here to indulge in just such a thing again. Well, the establishment of the AI Commission raises yet another risk: revolving-door behavior of the kind that has led to the systemic corruption of the entire market surveillance system in the US, with pharmaceutical supervision at the forefront. It is all about the back-and-forth flow of people between the supervisor and the market being supervised. It is a known fact that basically no one in the private sector can afford to become a civil servant and tell their family that the family budget is shrinking tenfold from now on. Therefore, working in the public sector is either a mission or an investment. If a mission, it is for life. But if it is an investment, it has to pay off. We know of cases where it has paid off. I suggest that, as in the case of the Food and Drug Administration, we might as well let the heads of IT companies take turns at the helm of that office.
Only this should not be done in such a way that, working in the public sector as part of our duties, we create solutions we can later cash in on. What is this subsidy for? For the Maldives? And who is supposed to be responsible for it? The supervisor? So the idea is to keep the supervisor out of the way? Like those who set up a gate on the road to a wedding (a Polish custom of stopping the newlyweds for a ransom)? From this perspective, indeed, PUODO has never "performed." Maybe that is why the authors of the AI supervision bill do not like the vision of transferring PUODO's practice to the AI field.
Lord, you don't know who I am! I have left the first allegation of the author of the polemic for dessert. I am very pleased that the author called me out personally, "accusing" me of being a data protection lawyer rather than a lawyer for cyber security or intellectual property or, as he put it, a lawyer dealing with the "responsible creation and use of AI." As fair payment for responding to the substantive issues, I can, in good conscience, speak on my favorite topic, which is myself.
I am first and foremost an IT lawyer. Some even call me an "IT guru." I have been involved in IT law since 1996; I was the lead lawyer for the country's largest IT project for a dozen years starting in 2000, and I am also currently the lawyer for the country's largest IT project. I have been responsible for hundreds, if not thousands, of IT cases, and I have been the main contact for IT teams at three consecutive companies where I have worked. Regulators have copied my wording from IT contracts into their recommendations. Foreign and domestic rankings have listed me in this category since 2006.
I am a cloud computing lawyer. I have been involved in cloud computing since 2009, and I edited a study by the Banking Technology Forum of the Polish Bank Association, "Cloud computing in the Polish banking sector. Regulations and standards. 2011," which has been praised many times in Poland and abroad. I dare say that my leading competitors began their adventure with cloud computing based solely on references from my own projects. Oh, and as an Expert of the European Commission on Cloud Computing Contracts, I proposed a number of solutions that eventually made their way into... GDPR.
I am a cyber security lawyer. I have been involved in legal aspects of business continuity and information security since 2006, and have been responsible for security policies and regulations for operational events in banks and other financial institutions. I implemented ISO 27001 in my own organizations, audited various institutions for operational security procedures, data protection, compliance with the Polish Cyber Security System Act. I have also evaluated such specific emergency response procedures as anti-terrorism operations.
I am an intellectual property lawyer. I deal with soft IP and hard IP. Someone with more seniority may still remember that as head and founder of Bird & Bird in Poland, I represented clients in many patent disputes. For some reason, I have the most skill endorsements (161) on LinkedIn just from intellectual property. International rankings have listed me in intellectual property, patent litigation and franchising since 2011.
Yes, I am a data protection lawyer. I edited the best book on the market on the subject, the No. 1 bestseller of Wolters Kluwer Polska in 2018 and 2019, published by Wolters Kluwer International in 2019 as the "Guide to the GDPR." I was honored by the President of the Personal Data Protection Office with the M. Serzycki Award. I was an expert of the Article 29 Working Party, and I am a supporting expert of the European Data Protection Board. Current international rankings focus on this aspect of my activity.
I am, mind you, an artificial intelligence lawyer. I have always been interested in futurology and science fiction, and I have read several thousand titles on the subject since 1982. I thus became interested in artificial intelligence about five years before the author of the polemic was born. I have commented on the legal aspects of AI at conferences and in articles (including those published by PUODO and the Chancellery of the Polish Prime Minister) at least a dozen times, starting as early as 2019.
By the way, I am also a disputes lawyer (with Piotr Biernatowski, also of GP Partners, I wrote the most popular book of 2022 on the subject, "The Little Book on Drafting Pleadings," also available in English on Amazon), and my first official job was in taxes. Aaah, and then there is competition law: certain competition law experts once copied an entire lecture of mine (in French).
One more thing, I am also a corporate lawyer. And as a corporate lawyer, I recently commented in the pages of Rzeczpospolita on the controversial practice of persecuting managers of state-owned companies.
Summary. Elon Musk reiterates that regulation is strangling American innovation ("Regulations are strangling American innovation. If we don't get this under control, our economy will stagnate"), also pointing to the destructive effect of regulation on the competitiveness of the EU economy. The idea that artificial intelligence should additionally be supervised by a new body is a perfect example of overregulation. Let us also remember the effects of a lack of regulation combined with de facto self-regulation (i.e., anti-competitive warfare waged through one's own people at the regulator) in the US: a destroyed environment, shrinking life expectancy, a society addicted to opioids and other "drugs," etc.
AIA does not take away PUODO's authority to oversee AI processing of personal data, which is where the real battle between competitiveness, security and formalism will be fought. To make things "funnier," Article 74(8) of the AIA directly entrusts PUODO with supervision of high-risk AI systems used in law enforcement, immigration, justice and democracy. So Poland is not facing a choice of "PUODO OR the AI Commission," but a choice of "[PUODO] OR [PUODO AND the AI COMMISSION]."
How is the concept of two regulators with a powerful set of overlapping authorities, instead of a single supervisor, supposed to serve deregulation rather than overregulation? The idea that two parallel supervisory bodies would better promote competitiveness could only have been born in a civil servant's head. Supervision of all AI should go to the Polish Personal Data Protection Office. The Office acts prudently, without redundancy and without overzealousness. Admittedly, after a year and a half, I would expect the Authority to finally issue some kind of decision on our ChatGPT complaint, rather than blame the delay on perpetual consultations.
Let us recall the words "Deny. Defend. Depose." found on the bullet casings that a few days ago killed Brian Thompson, the CEO of UnitedHealthcare, an insurer known for its use of the nH Predict AI algorithm, which is accused of a 90 percent error rate in rejecting elderly people's claims for therapies prescribed by their doctors. As you can see, Brian Thompson's murderer made AI's processing of personal data the crux of the matter.
The Polish quarterly New Technologies Law will soon publish a joint article by Jakub Rzymowski, Dominik Spalk and Maciej Gawronski on the ostensible nature of bans on the use of AI for certain activities. The working title of the article is "AI-act Article 5 prohibitions as apparent prohibitions, or if you can't, but you want to, you can."
Footnote
1. https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/artificial-intelligence-applications