The China National Intellectual Property Administration (CNIPA) recently revised the Guidelines for Patent Examination (hereinafter referred to as "the Guidelines"). The revised Guidelines will take effect on January 1, 2026. The main contents of this revision are summarized as follows:
1. Dual-filing Strategy
Regarding invention and utility model applications filed under a dual-filing [or both-filing] strategy, the original provision in the Guidelines stated: Where an applicant files both a utility model patent application and an invention patent application for the same invention-creation on the same day (referring only to the filing date), if the utility model patent obtained earlier has not yet terminated and the applicant has made separate statements at the time of filing, the double-patenting issue may be avoided not only by amending the invention patent application but also by abandoning the utility model patent. Therefore, during examination of the aforementioned invention patent application, if the application meets the other conditions for grant, the applicant shall be notified to make a selection or amendment.
This provision has now been amended to: Where an applicant files both a utility model patent application and an invention patent application for the same invention-creation on the same day (referring only to the filing date), in accordance with Rule 47 of the Implementing Regulations of the Patent Law, separate statements shall be made at the time of filing, indicating that another patent application has been filed for the same invention-creation; failure to make such statements shall be handled in accordance with Paragraph 1 of Article 9 of the Patent Law, which provides that only one patent may be granted for the same invention-creation; where statements have been made, if the invention patent application is examined and no grounds for rejection are found, the applicant shall be notified to declare abandonment of the utility model patent within a prescribed time limit. Where the applicant declares abandonment, a decision to grant the invention patent shall be made, and the applicant's declaration of abandonment shall be published together with the announcement of the grant of the invention patent. Where the applicant refuses to abandon, the invention patent application shall be rejected; where the applicant fails to respond within the time limit, the invention patent application shall be deemed withdrawn.
Through the above amendment, an invention patent can only be obtained by abandoning the utility model patent; applicants will no longer be able to obtain an invention patent by amending the invention patent application.
2. Examination of Invention Patent Applications Involving Algorithm Features or Business Rule and Method Features, Such as Artificial Intelligence and Big Data
2.1 "Examination Criteria"
The original provision stated that "examination shall be directed to the solution as claimed, i.e., the solution defined by the claims." This has been amended to add "when necessary, examination shall also be directed to the content of the description."
Additionally, a new subsection "Examination under Paragraph 1 of Article 5 of the Patent Law" has been added, stating: "For invention patent applications that contain algorithmic features or features related to business rules and methods, if the data collection, label management, rule setting, or recommendation and decision-making therein, among other aspects, includes content that violates laws or social morality, or impairs public interests, such invention patent applications cannot be granted pursuant to the provisions of Paragraph 1 of Article 5 of the Patent Law."
2.2 "Examination Examples"
2.2.1 Addition of Provisions and Examples on Exclusion from Patentability for Inventions Contrary to Law, Social Morality, or Public Interest
New content is as follows:
(1) An invention patent application that contains algorithmic features or features related to business rules and methods shall not be granted if it violates laws or social morality, or impairs public interests.
[Example 1]
A Big-Data-Based In-Mall Mattress Sales Assistance System
Summary of the Application Content
The solution proposed in this invention patent application is a big-data-based in-mall mattress sales assistance system, which utilizes a camera module and a facial recognition module to collect customers' facial feature information and obtain their identity recognition information. Through data analysis of the collected information, it assesses customers' true preferences for mattresses, thereby assisting merchants in precise marketing.
Claim of the Application
A big-data-based in-mall mattress sales assistance system, comprising a mattress display device and a management center, characterized in that:
the mattress display device includes a control module and an information collection module, which are used to display and assist in the sale of mattress products and collect customer data; the control module is used for data interaction with the management center; the information collection module includes a camera module and a facial recognition module, which are used to collect customers' facial feature information, adjust facial postures using keypoint detection algorithms to obtain normalized facial images, locate facial regions to be recognized in the normalized facial images through facial detection algorithms, and extract facial features within the facial regions using principal component analysis, thereby obtaining customers' identity recognition information;
the management center includes a management server and an analytical assistance system; the management server manages a plurality of mattress display devices; the analytical assistance system analyzes data collected by the mattress display devices based on customers' identity recognition information to determine their true preferences and provides feedback on analysis results to the management center.
Analysis and Conclusion
According to relevant provisions of the Personal Information Protection Law of the People's Republic of China, the installation of image collection and personal identity recognition equipment in public places should be necessary for maintaining public safety, comply with relevant national regulations, and be accompanied by conspicuous signs. The collected personal images and identity recognition information can only be used for the purpose of maintaining public safety and shall not be used for other purposes, unless individual consent is obtained separately.
From the solution claimed in this invention-creation, it can be seen that the use of image collection and facial recognition methods for precise mattress marketing in commercial venues such as shopping malls is not necessary for maintaining public safety. Furthermore, to obtain and analyze customers' true preferences for mattresses, the collection of their facial information and acquisition of their identity recognition information are apparently conducted without the customers' awareness. The application also does not indicate that data acquisition or information collection is legal and compliant. Therefore, this invention-creation contradicts the law and, pursuant to the provisions of Paragraph 1 of Article 5 of the Patent Law, cannot be granted a patent.
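For readers who want to see the claimed processing chain in concrete terms, the following is a minimal, purely illustrative sketch; it is not part of the Guidelines. It uses OpenCV's Haar cascade as a stand-in for the claimed facial detection algorithm and scikit-learn's PCA for feature extraction, omits the keypoint-based pose normalization step, and the function name and parameter values are invented for this article. As the analysis above explains, operating such a pipeline for marketing without customers' awareness would contravene the Personal Information Protection Law.

    # Illustrative sketch of the claimed pipeline only; not part of the Guidelines.
    # Stand-ins: a Haar cascade for the "facial detection algorithm", scikit-learn
    # PCA for feature extraction; keypoint-based pose normalization is omitted.
    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    _FACE_DETECTOR = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_identity_features(frame: np.ndarray, pca: PCA):
        """Return a PCA feature vector for the largest detected face, or None.
        `pca` is assumed to have been fitted beforehand on 64x64 grayscale faces."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = _FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # largest face region
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))   # normalized face image
        return pca.transform(face.reshape(1, -1))[0]          # "identity features"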
[Example 2]
A Method for Establishing an Emergency Decision-Making Model for Autonomous Vehicles
Summary of the Application Content
The solution proposed in this invention patent application is a method for establishing an emergency decision-making model for autonomous vehicles. This method utilizes pedestrian gender and age as obstacle data and employs a trained decision-making model to determine the protected entity and the entity to be struck in situations where obstacles cannot be avoided.
Claim of the Application
A method for establishing an emergency decision-making model for autonomous vehicles, characterized by comprising the following steps:
acquiring historical environmental data and historical obstacle data of an autonomous vehicle, wherein the historical environmental data includes the vehicle's driving speed, distance to obstacles in the same lane, distance to obstacles in adjacent lanes, motion speeds and motion directions of obstacles in the same lane, and motion speeds and motion directions of obstacles in adjacent lanes; the historical obstacle data includes pedestrian gender and age;
extracting features from the historical environmental data and historical obstacle data to serve as input data for the decision-making model, using historical driving trajectories of the vehicle when obstacles cannot be avoided as output data for the decision-making model, and training the decision-making model based on the historical data, wherein the decision-making model is a deep learning model;
acquiring real-time environmental data and real-time obstacle data, and, when the autonomous vehicle encounters a situation where obstacles cannot be avoided, utilizing the trained decision-making model to determine the driving trajectory of the autonomous vehicle.
Analysis and Conclusion
This invention-creation relates to a method for establishing an emergency decision-making model for autonomous vehicles. Human lives possess equal value and dignity, regardless of age or gender. In accidents where obstacles cannot be avoided, if an emergency decision-making model for autonomous vehicles selects the protected entity and the entity to be struck based on pedestrian gender and age, this contradicts the public's ethical and moral perception of equality for all in the face of life. Furthermore, such a decision-making approach would reinforce existing gender and age biases in society, raise public concerns about the safety of public transportation, and undermine public trust in technology and social order. Therefore, this invention-creation contains content that violates social morality and, pursuant to the provisions of Paragraph 1 of Article 5 of the Patent Law, cannot be granted a patent.
2.2.2 Addition of Examples on Inventiveness Examination
New examples are as follows:
[Example 18]
A Method for Identifying the Number of Ships
Summary of the Application Content
The invention patent application proposes a method for identifying the number of ships, which acquires image data of ships and trains a detection data model through deep learning to address the technical problem of accurately identifying the number of ships in a given sea area.
Claim of the Application
A method for identifying the number of ships, characterized by comprising the following steps:
acquiring a dataset of ship images and preprocessing image information within the dataset, marking the positions and boundary information of ships in the images, and dividing the dataset into a training dataset and a testing dataset;
conducting deep learning using the training dataset to construct a training model;
inputting the testing data into the training model for training to obtain ship testing result data;
multiplying the ship testing result data by a preset error parameter to determine the actual number of ships.
Analysis and Conclusion
Reference 1 discloses a method for identifying the number of fruits on trees and specifically discloses steps such as acquiring image information, marking the positions and boundaries of fruits in the images, dividing datasets, model training, and determining the actual number of fruits.
The difference between the solution proposed in the invention patent application and Reference 1 lies only in the different identification targets. Although ships and fruits differ in terms of appearance, size, and the environments in which they exist, for those skilled in the relevant technical field, the steps required for identifying the actual number, such as marking information, dividing datasets, and model training, all pertain to the positional relationships of the objects to be identified in the images. The claims do not reflect any modifications made to the training methods, model hierarchy, etc., in the deep learning or model training process due to the different identification targets. Marking ship data in images and marking fruit data in images to obtain datasets for training and conducting model training do not involve any adjustments or improvements to the deep learning, model construction, or training process. Therefore, the claimed technical solution of the invention does not possess inventiveness.
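To make the comparison concrete, the following is a minimal sketch (not part of the Guidelines) of the kind of generic detect-and-count pipeline the claim recites. Torchvision's pre-trained Faster R-CNN stands in for the trained detection model, the COCO "boat" class stands in for ships, and the confidence threshold and error parameter values are invented. As the analysis notes, nothing in such a pipeline changes when the target is swapped from fruit to ships.

    # Illustrative sketch only: count confident detections and apply the claimed
    # "preset error parameter". A pre-trained Faster R-CNN stands in for the
    # ship-detection model trained in the claim; threshold and error values are
    # invented for the example.
    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    BOAT_CLASS = 9  # COCO "boat" label, used here as a stand-in for "ship"

    def estimate_ship_count(image: torch.Tensor, error_parameter: float = 1.05,
                            score_threshold: float = 0.5) -> int:
        """`image` is a CxHxW float tensor with values in [0, 1]."""
        with torch.no_grad():
            det = model([image])[0]
        keep = (det["labels"] == BOAT_CLASS) & (det["scores"] > score_threshold)
        return round(int(keep.sum()) * error_parameter)   # claimed error correction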
[Example 19]
A Method for Establishing a Neural Network Model for Scrap Steel Grade Classification
Summary of the Application Content
During collection and storage, scrap steel requires grading based on the average size of the steel pieces. However, due to the random stacking and overlapping of scrap steel, manual measurement and grading are inefficient and often inaccurate. This invention patent application proposes a method for establishing a neural network model for scrap steel grading: a grade classification neural network model with grade classification output is formed through convolutional neural network learning, which can significantly improve both the efficiency and accuracy of scrap steel grading.
Claim of the Application
A method for establishing a neural network model for scrap steel grade classification, wherein the model is used for grading collected and stored scrap steel, comprising:
acquiring a plurality of images, determining different scrap steel grades for the plurality of images, preprocessing the images, extracting image data features of different grades, performing convolutional neural network learning on the extracted image data features of different grades to form a grade classification neural network model with grade classification output;
extracting the image data features is extracting the set of convolutional calculations performed by a convolutional neural network on the pixel matrix data of the images, which includes: extracting the color, edge features and texture features of objects in the images, as well as the correlation features between edges and textures of objects in the images, from the output set of multiple lines composed of convolutional layers or convolutional layers plus pooling layers;
wherein, the extraction of color and edge features of objects in the images is implemented by an output set of three lines composed of convolutional layers plus pooling layers, including from left to right a first line with one pooling layer, a second line with two convolutional layers and a third line with four convolutional layers; the extraction of texture features in the image is implemented by collecting the extraction results of color and edge features of objects in the aforementioned images, and then by the output set of three lines composed of convolutional layers, including from left to right the first line with zero convolutional layers, the second line with two convolutional layers and the third line with three convolutional layers;
the number of lines for convolutional layer calculation used for extracting correlation features between edges and textures is greater than the number of lines for convolutional layer calculation used for extracting color, edge and texture features of objects in the images.
Analysis and Conclusion
In order to solve the problem that recycled resources come from complex sources, include many types, and exhibit large material differences, and to accurately identify whether scrap steel belongs to material beans, stamping leftovers, bread iron, or some other category so as to improve the recycling rate of recycled resources, Reference 1 provides a method for classifying scrap steel types based on a convolutional neural network model. Reference 1 specifically discloses the relevant steps of: acquiring a plurality of image data of determined scrap steel types, preprocessing the image data to extract features, and training with a convolutional neural network to obtain the resulting model.
The difference between the solution of the invention patent application and Reference 1 lies in the different training data and extracted features, and the different number of lines and hierarchical settings of the convolutional layers and pooling layers. Compared with Reference 1, it is determined that the technical problem actually solved by the invention is how to improve the accuracy of scrap steel grading. Reference 1 performs feature extraction and model training using image data of scrap steel with determined types, while the invention patent application, in order to grade scrap steel according to its average size, needs to identify the shape and thickness of scrap steel from the chaotic and overlapping scrap steel images. In order to extract features such as color, edge and texture of scrap steel in the images, the number of lines and hierarchical settings of convolutional layers and pooling layers are adjusted during model training. The above algorithm features and technical features support each other functionally and have an interactive relationship, which can improve the accuracy of scrap steel grading, and the contribution of the algorithm features to the technical solution should be considered. The above-mentioned contents, such as adjusting the number of lines and hierarchical settings of convolutional layers and pooling layers, have not been disclosed by other references, nor do they belong to common knowledge in the field. In the prior art as a whole, there is no inspiration to improve the above-mentioned Reference 1 to obtain the technical solution of the invention patent application, and the claimed technical solution of the invention patent application is inventive.
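For readers unfamiliar with the "multi-line" structure the claim describes, the following is a minimal, purely illustrative PyTorch sketch of the colour/edge stage only: three parallel lines whose outputs are concatenated, one line with a single pooling layer, one with two convolutional layers and one with four. It is not taken from the Guidelines or from any real application; channel counts and kernel sizes are invented.

    # Illustrative PyTorch sketch of the "multi-line" (parallel-branch) structure
    # recited in the claim, shown for the colour/edge stage only. Channel counts
    # and kernel sizes are invented for the example.
    import torch
    import torch.nn as nn

    def conv_line(in_ch: int, out_ch: int, n_layers: int) -> nn.Sequential:
        layers, ch = [], in_ch
        for _ in range(n_layers):
            layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1), nn.ReLU()]
            ch = out_ch
        return nn.Sequential(*layers)

    class ColourEdgeStage(nn.Module):
        """Three parallel lines whose outputs are concatenated along channels."""
        def __init__(self, in_ch: int = 3, out_ch: int = 16):
            super().__init__()
            self.line1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)  # pooling only
            self.line2 = conv_line(in_ch, out_ch, n_layers=2)
            self.line3 = conv_line(in_ch, out_ch, n_layers=4)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.cat([self.line1(x), self.line2(x), self.line3(x)], dim=1)

    # Example: ColourEdgeStage()(torch.randn(1, 3, 64, 64)).shape -> (1, 35, 64, 64)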
2.3 "Drafting of the Description"
The following content has been added:
If an invention involves the construction or training of an artificial intelligence model, it is generally necessary to clearly record in the description the necessary modules, layers, or connection relationships of the model, as well as the specific steps, parameters, and other elements required for training; if an invention involves applying an artificial intelligence model or algorithm in a specific field or scenario, it is generally necessary to clearly record in the description how the model or algorithm is combined with the specific field or scenario and how the input and output data of the algorithm or model are set, so as to demonstrate their inherent correlation, thereby enabling a person skilled in the art to implement the solution of the invention based on the content recorded in the description.
2.4 Addition of Examination Examples in "Drafting of the Description and Claims"
New examples are as follows:
[Example 20]
A Method for Generating Facial Features
Summary of the Application Content
The invention patent application enables information sharing among second convolutional neural networks by using a set of feature region images generated by a first convolutional neural network provided with a spatial transformation network, which reduces memory resource usage while improving the accuracy of facial image generation results.
Claim of the Application
A method for generating facial features, comprising:
acquiring a facial image to be recognized;
inputting the facial image to be recognized into a first convolutional neural network to generate a set of feature region images of the facial image to be recognized, wherein the first convolutional neural network is used to extract feature region images from the facial image;
inputting each feature region image in the set of feature region images into a corresponding second convolutional neural network to generate regional facial features for that feature region image, wherein the second convolutional neural network is used to extract regional facial features from the corresponding feature region image;
generating a set of facial features for the facial image to be recognized based on the regional facial features of each feature region image in the set of feature region images;
wherein, the first convolutional neural network is further provided with a spatial transformation network for determining the feature regions of the facial image; and
inputting the facial image to be recognized into the first convolutional neural network to generate the set of feature region images of the facial image to be recognized comprises: inputting the facial image to be recognized into the spatial transformation network, determining the feature regions of the facial image to be recognized; inputting the facial image to be recognized into the first convolutional neural network and, based on the determined feature regions, generating the set of feature region images of the facial image to be recognized.
Relevant Paragraphs from the Description
The method for generating facial features provided in the embodiments of the present application first inputs the acquired facial image to be recognized into the first convolutional neural network to generate a set of feature region images of the facial image to be recognized. The first convolutional neural network is used to extract feature region images from the facial image. Then, each feature region image in the set of feature region images is input into a corresponding second convolutional neural network to generate regional facial features for that feature region image. The second convolutional neural network is used to extract regional facial features from the corresponding feature region image. Subsequently, based on the regional facial features of each feature region image in the set of feature region images, a set of facial features for the facial image to be recognized is generated. In other words, the set of feature region images generated by the first convolutional neural network enables information sharing among the second convolutional neural networks. This reduces data volume, thereby reducing memory resource usage and improving generation efficiency.
To enhance the accuracy of the generation results, the first convolutional neural network may also include a spatial transformation network for determining the feature regions of the facial image. In this case, an electronic device can input the facial image to be recognized into the spatial transformation network to determine the feature regions of the facial image to be recognized. Thus, for the input facial image to be recognized, the first convolutional neural network can then extract images matching the feature regions from the feature layer based on the feature regions determined by the spatial transformation network, thereby generating the set of feature region images of the facial image to be recognized. The specific placement position of the spatial transformation network within the first convolutional neural network is not limited in the present application. The spatial transformation network can continuously learn to determine the feature regions of different features in different facial images.
Analysis and Conclusion
This invention patent application seeks to protect a method for generating facial features. To improve the accuracy of facial image generation results, the first convolutional neural network may include a spatial transformation network for determining the feature regions of the facial image. However, the description does not specify the exact placement position of the spatial transformation network within the first convolutional neural network.
Those skilled in the art understand that the spatial transformation network, as a whole, can be inserted at any position within the first convolutional neural network to form a nested convolutional neural network structure. For example, the spatial transformation network may serve as the first layer of the first convolutional neural network or as an intermediate layer. Its placement position does not affect its ability to identify feature regions of the image. Through training, the spatial transformation network can determine the feature regions of different features in different facial images. Thus, the spatial transformation network not only guides the first convolutional neural network in feature region segmentation but also performs simple spatial transformations on the input data to enhance the processing effectiveness of the first convolutional neural network. Therefore, the model according to the patent application has a clear hierarchical structure, with well-defined inputs/outputs and relationships between the layers. Moreover, both convolutional neural networks and spatial transformation networks are well-known algorithms, and those skilled in the art can construct the corresponding model architecture based on the above description. Accordingly, the solution for which protection is sought in the invention patent application has been sufficiently disclosed in the description and complies with the provisions of Article 26(3) of the Patent Law.
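To illustrate why the placement of the spatial transformation network is treated as a routine design choice, the following is a minimal, purely illustrative PyTorch sketch of the standard spatial transformer construction (localization network, affine grid, sampling) inserted as the first layer of a convolutional network; it could equally be placed after any convolutional layer. Layer sizes are invented and the sketch is not drawn from the application discussed in the example.

    # Illustrative sketch: the standard spatial transformer module inserted into a
    # convolutional network. Layer sizes are invented for the example.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformer(nn.Module):
        def __init__(self, in_ch: int):
            super().__init__()
            self.localization = nn.Sequential(
                nn.Conv2d(in_ch, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.fc_theta = nn.Linear(10, 6)
            # Start from the identity transform so early training is well-behaved.
            self.fc_theta.weight.data.zero_()
            self.fc_theta.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            theta = self.fc_theta(self.localization(x)).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)   # resampled regions

    class FirstCNN(nn.Module):
        """A 'first convolutional neural network' with the STN as its first layer."""
        def __init__(self):
            super().__init__()
            self.stn = SpatialTransformer(in_ch=3)
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU())

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.features(self.stn(x))   # feature maps for region extraction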
[Example 21]
A Method for Predicting Cancer Based on Biological Information
Summary of the Application Content
The invention patent application provides a method for predicting cancer based on biological information. By using a trained enhanced malignancy screening model, routine blood test indicators, blood biochemical test indicators, and facial image features are jointly used as inputs to the screening model to obtain a malignancy risk prediction value. This solves the technical problem of improving the accuracy of malignancy prediction.
Claim of the Application
A method for predicting cancer based on biological information, characterized by comprising:
acquiring the routine blood test report and blood biochemical test report of the subject to be screened, and identifying the test indicators, age, and gender from the routine blood and blood biochemical test reports;
acquiring a bare-faced frontal facial image of the subject to be screened and extracting facial image features;
predicting the malignancy risk value for the corresponding subject to be screened based on an enhanced malignancy screening model;
wherein, the training process of the enhanced malignancy screening model is: constructing a large-scale population sample set, wherein the samples include routine blood test data, blood biochemical test data, and facial images of the same individual; using the routine blood test data, blood biochemical test data, and facial image features to create learning samples; training a machine learning algorithm model using the learning samples to obtain the enhanced malignancy screening model.
Relevant Paragraphs from the Description
Currently, when tumor markers are used to identify malignancies, a tumor marker value above the threshold cannot definitively confirm malignancy, nor can a value below the threshold rule out malignancy. Predicting cancer based on tumor markers alone therefore does not achieve high accuracy. The present application utilizes routine blood test indicators, blood biochemical test indicators, and facial image features to improve the identification accuracy of various malignancies. The present application, while utilizing blood test data, also references the health status of the subject as reflected in facial images, enabling more accurate prediction of the probability of malignancy. The selection of computational features for the enhanced malignancy screening model may include some or all indicators from the routine blood test data and blood biochemical test data.
Analysis and Conclusion
The technical problem to be solved by this invention patent application is how to improve the accuracy of malignancy prediction. To solve this problem, the solution adopts a trained enhanced malignancy screening model, with routine blood test indicators, blood biochemical test indicators, and facial image features jointly serving as inputs to the screening model, to obtain a malignancy risk prediction value. However, routine blood tests and blood biochemical tests each include dozens of test indicators. The description does not specify which specific indicators are key to tumor prediction accuracy, nor whether all indicators are referenced or different weights are assigned to each indicator for prediction. Those skilled in the art cannot determine which indicators can be used to identify malignancies. Furthermore, based on current scientific research, aside from a few types of tumors such as facial skin cancer, it remains uncertain whether there is any association between facial features and the development of malignancies. The description does not record or demonstrate a causal relationship between the "factors used for judgment" and the "results of the judgment."
Additionally, the description provides no validation data to prove that the accuracy of identifying various malignancies using this solution is higher than that achieved using tumor markers, or significantly higher than the accuracy level of random malignancy probability judgment. Based solely on the content disclosed in the description, those skilled in the art cannot confirm that the solution of this application can solve the intended technical problem. Therefore, the technical solution for which protection is sought in this patent application has not been sufficiently disclosed in the description, and the description does not comply with the provisions of Article 26(3) of the Patent Law.
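For contrast with Example 20, the following is a claim-level sketch only, not taken from the Guidelines: it concatenates blood-test indicators with facial image features and fits a generic classifier, with invented function names and scikit-learn standing in for the unspecified machine learning algorithm. It deliberately leaves unresolved exactly what the analysis finds undisclosed, namely which indicators matter, how they are weighted, and whether facial features have any validated link to malignancy.

    # Claim-level sketch only: joint blood-test and facial-image features feeding
    # a generic classifier. The choices the analysis finds undisclosed (which
    # indicators, what weights, any validated facial link) remain unresolved here.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def train_screening_model(blood_indicators: np.ndarray,
                              facial_features: np.ndarray,
                              labels: np.ndarray) -> GradientBoostingClassifier:
        """blood_indicators: (n, d1); facial_features: (n, d2); labels: (n,) binary."""
        X = np.hstack([blood_indicators, facial_features])   # joint inputs, as claimed
        return GradientBoostingClassifier().fit(X, labels)

    def predict_risk(model: GradientBoostingClassifier,
                     blood_indicators: np.ndarray,
                     facial_features: np.ndarray) -> np.ndarray:
        X = np.hstack([blood_indicators, facial_features])
        return model.predict_proba(X)[:, 1]                  # "malignancy risk value"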
3. Addition of Provisions on Examination of Invention Patent Applications Involving Bitstreams
New content is as follows:
In application fields such as streaming media, communication systems, and computer systems, various types of data are generally generated, stored, and transmitted in the form of bitstreams. This section aims, in accordance with the Patent Law and its implementing rules, to set out specific provisions for examining the patentable subject matter of invention applications involving bitstreams, and for drafting the description and the claims.
7.1 Examination of Patentable Subject Matter
7.1.1 Examination under Item (2), Paragraph 1, Article 25 of the Patent Law
If the subject matter of a claim relates solely to a pure bitstream, the claim falls under rules and methods of mental activities as stipulated in Item (2), Paragraph 1, Article 25 of the Patent Law and does not belong to patentable subject matter. For example, "A bitstream, characterized in that it comprises syntax element A, syntax element B, ...".
If, apart from its title, every limitation of a claim relates only to a pure bitstream, the claim falls under rules and methods of mental activities as stipulated in Item (2), Paragraph 1, Article 25 and does not belong to patentable subject matter. For example: "A method of generating a bitstream, characterized in that the bitstream comprises syntax element A, syntax element B, ..."
7.1.2 Examination under Paragraph 2 of Article 2 of the Patent Law
In the technical field of digital video coding/decoding, video data is usually encoded into a bitstream by a video encoding method, and the bitstream is decoded back into video data by a video decoding method. If the particular video encoding method that generates the bitstream constitutes a technical solution under Paragraph 2 of Article 2 of the Patent Law, then a method of storing or transmitting the bitstream, as well as a computer-readable storage medium storing the bitstream, as limited by that particular encoding method, is capable of achieving optimized allocation of storage or transmission resources, among other effects. Consequently, such storage or transmission methods and computer-readable storage media, as limited by that particular video encoding method, constitute technical solutions under Paragraph 2 of Article 2 and belong to patentable subject matter.
7.2 Drafting of the Description and the Claims
7.2.1 Drafting of the Description
For an invention application involving a bitstream generated by a particular video encoding method, the description shall set forth the particular encoding method in a clear and complete manner to such an extent that a person skilled in the art can carry it out. Where the claimed subject matter relates to a method of storing or transmitting the bitstream, or a computer-readable storage medium storing the bitstream, the description shall also contain corresponding disclosures to support the claims.
7.2.2 Drafting of the Claims
For an invention application involving a bitstream generated by a particular video encoding method, claims may be drafted as a storage method, a transmission method, or a computer-readable storage medium. Such claims should generally be based on a claim directed to the particular video encoding method that generates the bitstream, and be drafted either by referring to that method claim or by incorporating all of the features of that method claim.
[Example 1]
An invention application relating to video coding/decoding technology may have its claims drafted as follows:
1. A video encoding method, characterized by comprising the following steps:
a frame partitioning step, ...
...
an entropy encoding step, ....
2. A video encoding apparatus, characterized by comprising the following units:
a frame partitioning unit, ...
...
an entropy encoding unit, ....
3. A video decoding method, characterized by comprising the following steps:
an entropy decoding step, ...
...
a frame outputting step, ....
4. A video decoding apparatus, characterized by comprising the following units:
an entropy decoding unit, ...
...
a frame outputting unit, ....
5. A method of storing a bitstream, characterized by: performing the video encoding method of claim 1 to generate the bitstream; and storing the bitstream.
6. A method of transmitting a bitstream, characterized by: performing the video encoding method of claim 1 to generate the bitstream; and transmitting the bitstream.
7. A computer-readable storage medium having stored thereon a computer program/instruction and a bitstream, characterized in that, when the computer program/instruction is executed by a processor, the video encoding method of claim 1 is carried out to generate the bitstream.
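To show the drafting logic behind claims 5 and 6 in code-like terms, here is a minimal, purely illustrative sketch: the storage or transmission method is defined by first performing the claim-1 encoding method and then handling the resulting bitstream. The function encode_video is a placeholder for whatever particular encoding method claim 1 actually recites; the other names are invented for this article.

    # Illustrative sketch of the drafting logic of claims 5 and 6; not part of the
    # Guidelines. `encode_video` is a placeholder for the particular claim-1 method.
    from pathlib import Path
    import socket

    def encode_video(frames) -> bytes:
        """Placeholder for the claim-1 encoding method (frame partitioning, ...,
        entropy encoding); returns the generated bitstream."""
        raise NotImplementedError

    def store_bitstream(frames, path: Path) -> None:
        bitstream = encode_video(frames)   # perform the video encoding method of claim 1
        path.write_bytes(bitstream)        # store the generated bitstream

    def transmit_bitstream(frames, sock: socket.socket) -> None:
        bitstream = encode_video(frames)   # perform the video encoding method of claim 1
        sock.sendall(bitstream)            # transmit the generated bitstream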
4. Provisions on Fees for Sequence Listings
A new provision is added: "For computer-readable sequence listings submitted in the prescribed format, the page count shall not be included." The previous provision on fee calculation for sequence listings exceeding 400 pages ("If the nucleotide and/or amino acid sequence listing, as a separate part of the description, exceeds 400 pages, the sequence listing shall be calculated as 400 pages") has been deleted.
5. Patent Term Compensation
A new circumstance of reasonable delay during the granting process is added: "Reexamination proceedings where a rejection decision is revoked based on new reasons stated or new evidence submitted by the reexamination petitioner." Delays caused by this circumstance shall not be eligible for patent term compensation.
6. Non-Contributory Features Do Not Establish Inventiveness
A new provision is added: "Features that do not contribute to solving a technical problem, even if written in the claims, typically will not have an impact on the inventiveness of the technical solution."
A corresponding example is added:
[Example]
An invention relates to a camera and aims to address the technical problem of achieving more flexible shutter control, which is realized by improving the relevant mechanical and circuit structures inside the camera. After the Examiner pointed out that the claims lacked inventiveness, the applicant added features such as the shape of the camera housing, the size of the display screen, and the location of the battery compartment to the claims. The description did not indicate any association between the newly added features in the claims and the solution to the stated technical problem. These newly added features were either conventional components implicitly contained in the claimed subject matter itself or obtainable by those skilled in the art based on their common technical knowledge and routine experimental means. The applicant also did not provide evidence or sufficient reasons to demonstrate that these technical features could bring about any further technical effects to the claimed technical solution. Therefore, the aforementioned technical features do not contribute to the solution to the stated technical problem and thus do not impart inventiveness to the claimed technical solution.
7. Amended Texts in Invalidation Proceedings
A new provision is added: Where multiple amended texts submitted by a patentee in the same invalidation proceedings all satisfy relevant amendment requirements, the last submitted amended text shall prevail, and other amended texts shall not serve as the examination basis.
8. Plant Varieties and Patentable Subject Matter
Item (1), Paragraph 1, Article 25 of the Patent Law provides that scientific discoveries are not patentable, and Item (4) provides that animal and plant varieties are not patentable.
This revision to the Guidelines adds a definition of plant variety: "Plant varieties as referred to in the Patent Law mean plant groups that have been artificially bred or discovered and improved, have consistent morphological characteristics and biological properties, and have relatively stable genetic traits."
Additionally, it adds: "Wild plants found in nature that have not undergone technical treatment and exist naturally fall under scientific discoveries as stipulated in Item (1), Paragraph 1, Article 25 of the Patent Law and cannot be granted patents. However, when wild plants have been artificially bred or improved and have industrial utilization value, the plants themselves do not fall under the category of scientific discoveries." It also adds: "Plants and their reproductive materials obtained through artificial breeding or improvement of discovered wild plants, if they do not have consistent morphological characteristics and biological properties or relatively stable genetic traits in their populations, cannot be considered 'plant varieties' and therefore do not fall under the category stipulated in Item (1), Paragraph 1, Article 25 of the Patent Law."