Model performance analysis for the second round of models

More than two-thirds of the models achieve a Pearson correlation on counts > 0.6

Model performance is even higher when calculated over both the peak and non-peak regions

This performance drops substantially when no bias track is provided during prediction
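A minimal sketch of how a Pearson-on-counts metric like this is typically computed: correlate log-transformed total counts per region between observed and predicted. The region counts below are made-up illustration values, and `counts_pearson` is a hypothetical helper, not a function from the actual evaluation pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def counts_pearson(obs_counts, pred_counts):
    """Pearson correlation of log(1 + total counts) across regions."""
    return pearsonr(np.log1p(obs_counts), np.log1p(pred_counts))[0]

# hypothetical per-region total counts (observed vs. predicted)
obs = np.array([120.0, 340.0, 15.0, 980.0, 55.0])
pred = np.array([100.0, 300.0, 30.0, 900.0, 70.0])

r = counts_pearson(obs, pred)
```

The log1p transform is a common choice to keep a few very deep regions from dominating the correlation.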

The JSD (Jensen-Shannon distance) is generally quite high
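For reference, a sketch of how a per-region JSD between observed and predicted profiles can be computed: normalize each base-resolution profile to a probability distribution over positions, then take the Jensen-Shannon distance. The profiles below are made-up, and `profile_jsd` is a hypothetical helper; the actual pipeline may differ (e.g. in the log base used).

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def profile_jsd(obs_profile, pred_profile):
    """Jensen-Shannon distance between two read-count profiles.

    With base=2 the distance lies in [0, 1]; 0 means identical profiles.
    """
    obs = obs_profile / obs_profile.sum()
    pred = pred_profile / pred_profile.sum()
    return jensenshannon(obs, pred, base=2)

# hypothetical base-resolution profiles for one region
obs = np.array([5.0, 1.0, 0.0, 20.0, 4.0])
pred = np.array([4.0, 2.0, 1.0, 18.0, 5.0])

d = profile_jsd(obs, pred)
```

Note that `scipy.spatial.distance.jensenshannon` returns the distance (square root of the divergence), so lower values indicate a better profile match.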

AUPRC is strong for most models, with more than 90% achieving AUPRC > 0.7

AUROC is also strong for most models, with more than 90% achieving AUROC > 0.9

Both AUPRC and AUROC drop when the bias is set to zero during prediction, as we saw with the Pearson correlation
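A sketch of how AUPRC and AUROC are commonly computed for peak classification, using scikit-learn. The labels (peak vs. non-peak region) and model scores below are made-up illustration values, not results from the evaluation.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# hypothetical per-region labels (1 = peak, 0 = non-peak) and model scores
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.4, 0.7, 0.1])

auprc = average_precision_score(labels, scores)  # area under PR curve
auroc = roc_auc_score(labels, scores)            # area under ROC curve
```

AUPRC is usually the more informative of the two here, since peaks are a small fraction of the genome and AUROC can stay high even when precision is poor.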

Both the JSD and the Pearson correlation depend on the number of peaks in the experimental dataset

The Pearson correlation obtained without the bias depends even more strongly on the number of peaks in the experimental dataset

When the correlation between observed and control is high, we get low performance if the bias is not provided during prediction

In the subset of experiments with <10,000 peaks, the correlation between observed and predicted depends strongly on the correlation between WCE and ChIP