The European Patent Office's (EPO) recent decision on the patentability of Core-AI, T 0702/20, reiterated its established stance that improved machine learning model architectures are considered unpatentable subject matter before the EPO.
However, the case has more to offer than just that: here are two points that will influence our drafting of Core-AI patent applications going forward.
For background, T 0702/20 relates to an improved neural network design with a reduced number of connections between nodes. The application uses "loose coupling" to reduce the number of computations and to reduce over-fitting (see paragraphs 1-1.2 of the decision for more detail).
The precise mechanism behind this isn't too relevant for the purposes of this article; suffice it to say that the technical problem solved by this design is described as reducing the computing power needed to train and run the modified neural network.
As this invention relates to a fundamental improvement in neural networks themselves (i.e. an improved model architecture), it is what is known in Europe as a "Core-AI" application. It was rejected by the EPO for relating to unpatentable subject matter on the grounds that it amounted to a mathematical method.
The first point of note is that in the first auxiliary request, the claims were limited to the neural network being implemented on a micro-computer (see point V of the Summary of the Facts and Submissions). This was meant to emphasise the limited resources of the computer and therefore the relevance of the small network size (paragraph 22 of the decision).
This is of interest because one category of AI invention that the EPO does consider to constitute patentable subject matter is Hardware-AI inventions, which represent "specific technical implementations of a mathematical method where the mathematical method is particularly adapted for that implementation in that its design is motivated by technical considerations of the internal functioning of the computer system or network" (Guidelines for Examination, G-II, 3.3). In other words, if you adapt a model to suit a particular hardware configuration, then this might be considered patentable subject matter.
The extent to which different types of invention might fall into this category is largely untested; however, paragraph 14.1 of this decision notes that a technical effect isn't achieved merely by reducing the storage and computational requirements of a neural network. Extrapolating this reasoning, that would seem to be the case even if the design is motivated by enabling the neural network to function on a micro-computer.
Thus, adaptations that might arguably enable a model to function on a memory-limited device (on which it couldn't function before) are unlikely to be sufficient to qualify as Hardware-AI inventions. The prospects of other modifications based on hardware limitations, e.g. modifications that reduce battery consumption so that the model can run on energy-limited devices or similar, might also be looking a little bleaker in view of this decision.
The second noteworthy portion of the decision is paragraph 21: "...it would remain questionable whether the proposed loose connectivity scheme actually provides a benefit beyond the mere reduction of storage requirements, for instance a "good" trade-off between computational requirements and learning capability" and paragraph 21.1: "In terms of learning, the Appellant asserted that the new structure avoided overfitting, but did not justify this assertion" (Emphasis added).
Thus, T 0702/20 would seem to give credence to the idea that experimental data is increasingly needed to demonstrate that a proposed AI invention actually achieves the technical effect and the benefit described in the application. Such experimental data may need to be quite detailed, showing not only that the invention works per se, but also demonstrating an improvement over what has gone before (comparative data). This is likely to be the case where, due to the complexity of the technology, it isn't prima facie evident what effect a particular modification would have.
Although certain types of experimental data can be submitted to the EPO during prosecution, it is highly preferable to include this at the drafting stage. Drafters should therefore bear in mind that advantages stated in their applications may not be taken at face value by the EPO for complex technologies such as Core-AI.
A point on sufficiency
Finally, there has been a lot of talk, and frankly angst, in the European patent community over sufficiency of disclosure of patent applications relating to Machine Learning models. One of the reasons that the sufficiency issue arises is due to the assumption that models such as neural networks operate in a "black-box" manner and might only be relied upon to produce a technical effect if the exact same architecture and exact same training data is used. On this basis, patent offices such as the UKIPO have toyed with the idea of raising sufficiency objections to applications unless they contain very detailed descriptions of models and suitable training datasets.
If the idea of neural networks being inherently unpredictable is bemusing to you, then it seems that the Board of Appeal might tend to agree: paragraph 16.1 says that "Whilst the functioning of a neural network may not be foreseeable prior to training and the programmer may not understand the significance of its individual parameters... the neural network still operates according to the programming of its structure and learning scheme... It is only the sheer complexity of a larger neural network that makes it appear unpredictable."
While I don't think the Board meant to raise these comments with sufficiency in mind, I for one am glad that they have emphasised that although neural networks are complex, they are not inherently unreliable or unknowable black boxes, and nor should we treat them as such.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.