Patenting AI Inventions in India – What to disclose and how much to disclose?
- Gaurav Chhibber & Shashank Panthri

- Sep 19
Innovations in artificial intelligence (AI), machine learning (ML), deep learning (DL), blockchain, quantum computing, and related emerging technologies are reshaping products and services across sectors such as medicine, finance and education. Modern AI systems combine software, statistical methods, large datasets and domain expertise, so a short, high-level description often leaves readers guessing. The Indian Patents Act, 1970 (hereinafter referred to as the Act) thus requires a constructive exchange: it asks applicants to share technical knowledge in exchange for exclusive rights. That trade-off works only if the patent document teaches a person skilled in the relevant art enough to understand and replicate the invention.
This requirement of “sufficiency of disclosure” is laid down in Section 10(4) of the Act and further detailed in the Guidelines for Examination of Computer Related Inventions (CRIs), 2025 (hereinafter referred to as the CRI Guidelines 2025), which emphasize that an applicant must set out both “what” the invention is and “how” to perform it. The specification must therefore describe the invention fully and particularly to satisfy the “what” requirement, i.e., provide illustrative drawings for hardware-based inventions such as an apparatus/system/device, and provide the necessary sequence of steps along with flowcharts for method-based inventions. The specification must also disclose the best method known to the applicant to satisfy the “how” requirement.
In AI/ML/DL-related inventions, prospective use-case scenarios are sometimes extrapolated and camouflaged as a solution to certain problems without the necessary specific details. These specific details are critical and should be disclosed fully and particularly with regard to the relevant aspect of the claimed invention. In AI inventions, this typically means supplying concrete architecture descriptions, the datasets used, training details, preprocessing steps, hyperparameter choices, and validation evidence, i.e., technical detail that enables a person skilled in the art to reproduce the invention without undue experimentation, rather than marketing language or vague labels such as “uses deep learning”.
The person skilled in the art (often abbreviated PSITA) is the hypothetical practitioner used to assess whether a disclosure is enabling. In Caleb Suresh Motupalli v. Controller of Patents (2025), the Hon’ble Madras High Court defined a PSITA as a single technical expert or a team of multiple experts, depending on the nature of the invention. For an AI invention, such a team might comprise a software engineer versed in model design and deployment, a data scientist experienced in dataset construction and annotation, and a domain specialist (e.g., a radiologist for medical AI) who understands the application context and evaluation criteria. Recognizing the PSITA as a team matters because enablement must answer the information needs of each role: the specification should therefore include role-specific implementation steps, dataset provenance and annotation protocols, and evaluation procedures so that the combined skilled practitioners can reproduce and use the invention.
The reason AI needs more detailed disclosure is practical. AI, ML and DL systems are not just code: their performance often depends on the model architecture (layer types, depth, activation functions), the training data (size, labeling method, preprocessing), the training procedure (loss functions and hyperparameters), and how the system has been validated for robustness. A bare statement that the system was “trained on a large dataset” or “uses a neural network” does not tell a PSITA or a PSITA team how to reproduce a claimed technical effect.
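As a minimal illustration, the gap between a vague label and an enabling disclosure can be seen in a configuration-style summary that pins down the architecture, data, training procedure and validation. Every value below is hypothetical and is not drawn from the Act or the CRI Guidelines 2025; it only shows the level of concreteness a PSITA team can actually work from.

```python
# Hypothetical training-disclosure sketch: all values are assumed for illustration.
# Contrast this with a bare claim that the invention "uses a neural network".
training_disclosure = {
    "architecture": {"type": "CNN", "depth": 18, "activation": "ReLU"},
    "training_data": {"size": 120_000, "labeling": "expert-annotated",
                      "preprocessing": ["resize to 224x224", "z-score normalization"]},
    "training_procedure": {"loss": "cross-entropy", "optimizer": "SGD",
                           "learning_rate": 0.01, "batch_size": 64, "epochs": 90},
    "validation": {"scheme": "5-fold cross-validation",
                   "robustness": "evaluation under added sensor noise"},
}
```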
In Caleb Suresh Motupalli v. Controller of Patents (2025), the Court ruled that the patent’s disclosures, which lacked sufficient detail on integration techniques (such as “Object Oriented Analysis and Design”) and on conventional information-processing and user-interface design techniques, failed to enable the claimed invention. The Court held that the specification did not provide adequate technical criteria for the claims to work, thereby failing the enablement test under Section 10(4)(a) of the Act. The Court stressed that enablement should not require “undue experimentation”, noting that the specification failed to demonstrate how the techniques achieved persona-augmentation and extension.
The CRI Guidelines 2025 offer practical examples to clarify the sufficiency requirement for AI/ML/DL-based inventions. In one instance, a failure-prediction system discloses the use of recurrent neural networks (LSTM) trained on historical sensor data such as temperature, vibration and speed. The example describes a three-layer LSTM with dropout and batch normalization, an Adam optimizer with learning-rate scheduling and a binary cross-entropy loss, and reports high prediction accuracy in cross-validation. For sufficiency of disclosure, the applicant must explicitly identify the training dataset, its sensor types and sampling frequency, the way time-series features are processed (stationarity checks, seasonality handling), the LSTM architectural parameters (number of layers, time window, dropout rates) and the training/validation procedure. Without these specifics, a PSITA would struggle to reproduce the failure-prediction capability.
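A minimal sketch of such an architecture-level disclosure, written here in Keras, might look as follows. The time window, layer widths, dropout rates and learning-rate schedule are assumed values for illustration, not figures taken from the Guidelines.

```python
# Minimal sketch of the failure-prediction architecture described in the example:
# three LSTM layers with dropout and batch normalization, Adam with a learning-rate
# schedule, and a binary cross-entropy loss. All hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

TIME_WINDOW = 60    # assumed: 60 consecutive sensor readings per sample
NUM_FEATURES = 3    # temperature, vibration, speed

def build_failure_predictor() -> tf.keras.Model:
    model = tf.keras.Sequential([
        layers.Input(shape=(TIME_WINDOW, NUM_FEATURES)),
        layers.LSTM(64, return_sequences=True),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.LSTM(32, return_sequences=True),
        layers.BatchNormalization(),
        layers.Dropout(0.2),
        layers.LSTM(16),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),  # probability of imminent failure
    ])
    # Adam with an explicit learning-rate schedule, as the example describes
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Spelling out the time window, layer sizes and schedule in this way is precisely the kind of detail a specification would need to narrate so that a PSITA is not left to guess.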
Another instance emphasizes the pivotal role of data pre-processing in the invention. A remote-sensing crop-classification invention describes the preprocessing performed before the crops are classified, i.e., applying atmospheric correction (Sen2Cor), computing NDVI, performing image segmentation based on texture and vegetation indices, and then classifying the processed tiles with an EfficientNet CNN trained on labelled plots. For sufficiency of disclosure, the applicant should narrate the pre-processing pipeline step by step, explain why those steps materially boost classification performance, and provide comparative benchmarks showing the performance gain over raw imagery. This demonstrates that when preprocessing materially contributes to the technical result, the disclosure must make that contribution explicit.
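A minimal, framework-agnostic sketch of such a step-by-step pipeline narration is shown below. The band names, array shapes and feature stack are assumptions; the atmospheric correction (e.g. Sen2Cor) is treated as an external step rather than re-implemented.

```python
# Sketch of an explicitly documented preprocessing pipeline for crop classification.
# Band order and output feature stack are assumed for illustration.
import numpy as np

def compute_ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    denom = nir + red
    denom[denom == 0] = 1e-6
    return (nir - red) / denom

def preprocess_tile(bands: dict) -> np.ndarray:
    """Assumed pipeline: atmospherically corrected bands in -> feature stack out.

    Atmospheric correction (e.g. Sen2Cor) runs as an external step; here we only
    stack the corrected reflectance bands with the NDVI channel that the
    downstream CNN consumes.
    """
    ndvi = compute_ndvi(bands["nir"], bands["red"])
    return np.stack([bands["red"], bands["green"], bands["nir"], ndvi], axis=-1)
```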
A further instance centers on an invention founded on reinforcement learning, for example urban traffic control in which a Deep Q-Network (DQN) processes camera feeds, defines states (vehicle density, queue lengths), and selects actions (signal-timing adjustments). The example underscores that when reinforcement learning is claimed, the specification should define how the agent perceives the environment, the state and action spaces, the reward/advantage functions, the simulation environment used for training, and the transfer-learning strategy for real-world deployment. In short, it is not enough to state high-level aims like “reduce congestion”; the specification must also disclose the concrete control and interaction details that make such outcomes possible.
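A minimal sketch of those interaction details follows. The state dimensions, action set, normalization constants and reward definition are all assumptions chosen for illustration, not the formulation used in the Guidelines' example.

```python
# Sketch of the state, action and reward definitions a DQN-based traffic-control
# disclosure would need to pin down. All choices here are illustrative assumptions.
import numpy as np

# State: per-approach vehicle density and queue length for a 4-way intersection
STATE_DIM = 8  # 4 densities + 4 queue lengths
# Actions: discrete signal-timing adjustments
ACTIONS = ["extend_green_ns", "extend_green_ew", "switch_phase", "hold"]

def build_state(densities: np.ndarray, queue_lengths: np.ndarray) -> np.ndarray:
    """Concatenate normalized densities and queue lengths into the DQN input."""
    return np.concatenate([densities / max(densities.max(), 1.0),
                           queue_lengths / 50.0])  # assumed max queue of 50 vehicles

def reward(prev_queues: np.ndarray, curr_queues: np.ndarray) -> float:
    """Reward the reduction in total queue length between successive control steps."""
    return float(prev_queues.sum() - curr_queues.sum())
```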
In another instance, the invention’s inventiveness depends on specific traits of the training dataset. The invention discloses improved facial-recognition accuracy for elderly individuals: the example explains that training on a dataset specifically curated for ages 65–90 (with diverse lighting, poses and occlusions) and using a modified FaceNet with an age-aware embedding loss can materially improve performance for that subgroup. For sufficiency of disclosure, the specification must describe the dataset traits, why generic datasets would not suffice, the architectural changes (and their rationale), and quantitative comparisons showing the improvement. This example shows that the novelty sometimes lies in which data was used and why it matters, and that data descriptions therefore count as core disclosure.
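To make the architectural change concrete, here is a PyTorch sketch of one plausible "age-aware" variant of the standard triplet embedding loss. The exact modification, margin values and scaling are assumptions for illustration; the point is that a specification would disclose the loss at this level of detail and explain its rationale.

```python
# Sketch of an assumed age-aware triplet loss: the margin grows with the intra-identity
# age gap so that embeddings of the same elderly subject at different ages stay close.
import torch
import torch.nn.functional as F

def age_aware_triplet_loss(anchor: torch.Tensor,
                           positive: torch.Tensor,
                           negative: torch.Tensor,
                           anchor_age: torch.Tensor,
                           positive_age: torch.Tensor,
                           base_margin: float = 0.2,
                           age_scale: float = 0.01) -> torch.Tensor:
    pos_dist = F.pairwise_distance(anchor, positive)
    neg_dist = F.pairwise_distance(anchor, negative)
    # Assumed modification: widen the margin in proportion to the age gap
    margin = base_margin + age_scale * (anchor_age - positive_age).abs()
    return F.relu(pos_dist - neg_dist + margin).mean()
```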
The burgeoning fields of AI, ML and DL are gaining significant traction globally, including in India. This surge of interest has led innovators, from startups to multinational corporations and even educational institutions, to intensify their research and development efforts in these emerging technologies. Consequently, a multitude of patent applications are being filed to protect these AI-related innovations. One of the essential requirements for patenting AI inventions is sufficiency of disclosure, which requires the applicant to set out both “what” the invention is, by describing the invention fully and particularly, and “how” to perform it, by disclosing the best method known to the applicant. The result is a disclosure of the technical details that enables a person skilled in the art to reproduce the technical effect of the invention without undue experimentation.

Gaurav Chhibber
Partner

Shashank Panthri
Associate