I would like to hear some comments from my colleagues on this subject…
In my short experience, some general reasons for deviations and discrepancies are improper planning (e.g. not having enough components) and inadequate training (e.g. the people performing the task fail to understand the correct procedure for their job).
I would love to hear others' input on this topic too.
Six "lacks" in systems:
- Lack of organization
- Lack of training
- Lack of discipline
- Lack of resources
- Lack of time
- Lack of management support
Should the person writing an IQ execute his own IQ?
I don’t see why that would be a problem, as long as the IQ was reviewed and approved before execution.
I see most deviations in the IQ and OQ stages. I think there are a couple of reasons for this - which may be particular to my company?
The person writing the test and acceptance criteria is usually an engineer who would rather be out on the floor than stuck at a computer, so they rush the task. They usually know the kit inside out (or think they do!), so they write something that they themselves could perform but that would baffle anyone else. Typically they also know what a pass or fail would look like, so they don’t write it down.
Additionally, checks that their validation document matches up with other documents (e.g. product specifications) don’t happen, so any acceptance criteria that are present are probably wrong.
On top of that, while their technical knowledge is excellent, their writing ability usually is not - so spelling, grammar and punctuation leave a lot to be desired. Sometimes this really matters: compare the meanings of “PROSCRIBED” and “PRESCRIBED”. Only one letter's difference, but all the difference in the world in meaning.
By the way - I am an engineer!
The most common I’ve experienced are acceptance criteria that are too tight, poorly written test scripts, failure to understand test scripts, and general equipment issues (poor construction, component failures, badly specced materials, etc.).
Some of the most common causes include:
- Engineers executing test scripts without a trial run being performed
- Insufficient knowledge of what is being validated
- Scripts being rushed to meet timelines
- Executors not reading the test instructions correctly
- Test instructions that are badly written
- Pre-requisite tests not performed correctly
- Badly written test scripts
- Overly tight criteria
- Impossible criteria
I feel a lot of this can be avoided by performing a dry run first.