Learnaboutgmp Community

Method validation and data security

For laboratory computerized systems, our procedures and practices are
informed primarily by the “GAMP Good Practice Guide: Validation of
Laboratory Computerized Systems.” However, I have a particular
question concerning systems we purchase for which we have no methods
developed. Our proposal is that after we have assessed, selected,
and purchased a new laboratory instrument (typically falling
somewhere in categories D-G per the GAMP guide, for the purposes of
this question), we will do the following:

(1) Tag it “not in service” per procedures

(2) Approve and perform vendor protocols to test basic instrument
functionality.

(3) Have QC explore the instrument functionality, learn the system,
develop a method, and perform method validation.

(4) Submit to FDA for approval to use new method.

(5) Permission to use new method granted.

(6) Open a change control to initiate a validation project (DQ, IQ,
OQ, and PQ) of the instrument for the method developed.

(7) QA will remove “not in service” tag once change is approved to
implement per change control and system will be released for GMP use
in production.

My question: in terms of data integrity/security, is it necessary to
conduct/repeat method validation after DQ, IQ, and OQ, such that it
is part of (or constitutes) the PQ, and then submit it to the agency?
Or is it acceptable to submit to the agency with the vendor protocols
alone supporting the legitimacy of the instrument, prior to embarking
on validation of the system for its intended use?

I’m sorry, but what you’ve described seems to be a scenario of doing regulated work on a non-regulated system. I don’t look at validation the way you are looking at it. For me, a system should be validated and released to show that it performs as dictated in the URS. This includes ensuring the software performs as intended, and that the hardware components (if applicable) perform as intended within their established criteria. Once released, a method validation is merely using the system within its tested boundaries. I don’t agree that you need to have a method in order to validate a system.

An example: let’s say you bought a system. You have software requirements (e.g., the system must audit-trail changes to system policies, the system must allow authorized users to create methods, etc.) and hardware requirements (e.g., detector X must be capable of reading wavelengths between 100-800 nm).

I can validate the system, qualify that the detector works at those limits (using reference standards, usually supplied by the vendor), and release the system. Once the system is released, any method I develop had better use the detector between 100-800 nm, not 850 nm; otherwise you didn’t validate the system for its intended use. You’ve proven the detector works within those limits. A method validation isn’t there to ensure your system works properly; it’s there to prove your method works.

The point I’m trying to make is that the base layer is to validate the system, and then validate your methods on top of that layer. In other words, once you have assurance that the system works at its ‘broad’ limits, then for any method parameters you run, you can’t blame a failure on the system, only on the method. How do you validate a method on a system that hasn’t been tested? I’d question the data right off the bat.