These utilities were developed as additional tools to help evaluate the validity and precision of different pooling strategies for fixed pool sizes. Three options are provided, depending on the assumptions made about test sensitivity and specificity: a perfect test, a test of known sensitivity and specificity, and a test of uncertain sensitivity and specificity.
These programs each use two separate pairs of values for test sensitivity and specificity. The first pair are the values used in estimating true prevalence from the simulated testing results (for the perfect-test option, estimated sensitivity and specificity are both assumed to be 100% and cannot be entered in the input screen). The second pair ('True test sensitivity/specificity') are used to determine the actual results of testing during simulation. By specifying different values for the true and estimated sensitivity (or specificity), it is possible to evaluate the importance of potential errors in the assumed values. For example, if prevalence is estimated assuming that the test is perfect (sensitivity and specificity both 100%) but the true sensitivity is in fact, say, 80%, the true prevalence will be substantially underestimated, resulting in a biased estimate. This model allows the magnitude of that bias to be estimated.
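As an illustration of this kind of bias, here is a minimal Python sketch (not part of the utilities themselves). The prevalence, pool size, number of pools and sensitivity values are illustrative assumptions; prevalence is estimated with the usual perfect-test formula p = 1 - (1 - x/n)^(1/k), for x positive pools out of n pools of size k:

```python
import random

def pool_tests_positive(p, k, true_se, true_sp, rng):
    # A pool is truly positive if it contains at least one infected individual
    infected = any(rng.random() < p for _ in range(k))
    if infected:
        return rng.random() < true_se   # detected with probability Se
    return rng.random() > true_sp       # false positive with probability 1 - Sp

def estimate_prevalence_perfect_test(x, n, k):
    # Perfect-test estimator: P(pool negative) = (1 - p)^k,
    # so p-hat = 1 - (1 - x/n)^(1/k) for x positive pools out of n
    return 1.0 - (1.0 - x / n) ** (1.0 / k)

rng = random.Random(42)                 # fixed seed for reproducibility
p_true, k, n_pools, iters = 0.05, 5, 200, 1000
estimates = []
for _ in range(iters):
    x = sum(pool_tests_positive(p_true, k, 0.80, 1.0, rng) for _ in range(n_pools))
    estimates.append(estimate_prevalence_perfect_test(x, n_pools, k))
mean_est = sum(estimates) / iters
bias = mean_est - p_true                # negative: prevalence is underestimated
```

With a true sensitivity of 80%, the mean estimate settles well below the design prevalence of 5%, illustrating the downward bias described above.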
All three methods simulate sampling and prevalence estimation for up to six different pooling strategies, for assumed values of prevalence, test sensitivity and specificity and a specified level of confidence, assuming that a fixed pool size is used. The program runs multiple iterations of sampling and estimation and calculates the mean prevalence, confidence interval width and estimated bias across all iterations. By simulating alternative pooling strategies, this utility allows the strategies to be evaluated and compared, to determine the optimum strategy: one that gives the desired level of precision in the prevalence estimate while also minimising the bias in the estimate.
For each pooling strategy, the program simulates sampling, pooling and testing of individuals from an infinite population with the specified prevalence, using a test of the specified true sensitivity and specificity. Sampling and testing are repeated for the specified number of iterations for each strategy, and the prevalence, confidence interval width and variance are estimated for each iteration using the selected method and the assumed values for sensitivity and specificity. The mean prevalence, bias, confidence interval width and variance are then calculated across all iterations for each strategy, where mean bias is the mean prevalence estimate less the true (design) prevalence for the population. Mean square error (mean variance plus the square of the mean bias) is also calculated, and the magnitude of the mean bias is expressed as proportions of the mean estimated prevalence, the true (design) prevalence and the mean square error.
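The summary calculations described above can be sketched in Python as follows (a minimal sketch; the function name and dictionary keys are illustrative, and the bias proportions are computed literally as described in the text):

```python
def summarise(estimates, variances, ci_widths, p_design):
    # Summarise simulation results across iterations for one pooling strategy
    n = len(estimates)
    mean_prev = sum(estimates) / n
    mean_var = sum(variances) / n
    mean_width = sum(ci_widths) / n
    mean_bias = mean_prev - p_design      # mean estimate less design prevalence
    mse = mean_var + mean_bias ** 2       # mean variance plus square of mean bias
    return {
        "mean_prevalence": mean_prev,
        "mean_ci_width": mean_width,
        "mean_variance": mean_var,
        "mean_bias": mean_bias,
        "mse": mse,
        # bias magnitude as proportions, as described in the text
        "bias_vs_estimate": abs(mean_bias) / mean_prev,
        "bias_vs_design": abs(mean_bias) / p_design,
        "bias_vs_mse": abs(mean_bias) / mse,
    }
```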
Outputs for each method are summarised across all iterations for each strategy entered and presented in a summary table. The main outputs are the mean prevalence estimate, mean bias, mean confidence interval width, mean variance and mean square error, together with the mean bias expressed as proportions of the mean estimated prevalence, the true (design) prevalence and the mean square error.
For fixed pool sizes and perfect tests, or tests of known sensitivity and specificity, exact binomial confidence limits are used. For fixed pool sizes and tests of uncertain sensitivity and specificity, asymptotic confidence limits are used. For fixed pool sizes and tests of known sensitivity and specificity, the simulated (exact) confidence intervals may be substantially wider than the corresponding asymptotic confidence intervals. Therefore, sample sizes calculated using asymptotic methods for known sensitivity and specificity may be inadequate when exact confidence limits are calculated, and may need to be increased if the desired precision is to be achieved.
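The difference between exact and asymptotic limits can be illustrated with a short Python sketch for the perfect-test case (function names and the example numbers are illustrative assumptions): Clopper-Pearson exact limits for the proportion of positive pools are found by bisection on the binomial CDF, asymptotic (Wald) limits use the normal approximation, and both are transformed to individual-level prevalence via P(pool negative) = (1 - p)^k:

```python
import math

def binom_cdf(x, n, p):
    # P(X <= x) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x + 1))

def exact_pool_ci(x, n, alpha=0.05):
    # Clopper-Pearson (exact) limits for the pool-level proportion,
    # located by bisection on the binomial CDF
    def bisect(f):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else bisect(lambda p: binom_cdf(x - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) >= alpha / 2)
    return lower, upper

def wald_pool_ci(x, n, z=1.96):
    # Asymptotic (normal-approximation) limits for the pool-level proportion
    pi_hat = x / n
    half = z * math.sqrt(pi_hat * (1 - pi_hat) / n)
    return max(0.0, pi_hat - half), min(1.0, pi_hat + half)

def to_prevalence(pi, k):
    # Transform a pool-level proportion to individual-level prevalence
    # (perfect test): P(pool negative) = (1 - p)^k
    return 1.0 - (1.0 - pi) ** (1.0 / k)

x, n, k = 10, 50, 5   # 10 positive pools out of 50 pools of size 5
exact = tuple(to_prevalence(pi, k) for pi in exact_pool_ci(x, n))
wald = tuple(to_prevalence(pi, k) for pi in wald_pool_ci(x, n))
```

For these example numbers the exact interval is wider than the asymptotic one, consistent with the note above about asymptotic sample sizes being potentially inadequate.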
It is important to enter pool sizes and associated numbers of pools tested from the top of the table. You must enter at least one row of valid values, and any rows entered must be complete. All values must be positive integers. Any row in the input table that includes an invalid value will be ignored, as will any subsequent rows.
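The row-validation rules above amount to keeping complete rows of positive integers from the top of the table and discarding everything from the first invalid row onwards. A minimal sketch (the function name is illustrative):

```python
def valid_rows(rows):
    # Keep complete rows of positive integers, reading from the top;
    # the first invalid row and everything after it are ignored
    kept = []
    for row in rows:
        ok = (
            len(row) == 2
            and all(isinstance(v, int) and not isinstance(v, bool) and v > 0 for v in row)
        )
        if not ok:
            break
        kept.append(tuple(row))   # (pool size, number of pools tested)
    return kept
```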