Goal

  • develop an activity model predicting:
    • H3K27ac
    • Pol II
    • GRO-seq

See also: the metaplots HTML report.

Design decisions

  • dataset = all the ChIP-nexus peaks
  • output = total counts within ±1 kb of the region center (see the sketch below)
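
A minimal sketch of this target definition, assuming pyBigWig is available (the real extraction goes through basepair's extractors; the function and argument names here are illustrative):

In [ ]:
import pyBigWig

def total_counts(bw_path, chrom, center, flank=1000):
    """Sum bigwig coverage within +/- `flank` bp of `center`."""
    bw = pyBigWig.open(bw_path)
    # stats(..., type='sum') returns a one-element list; None means no data
    total = bw.stats(chrom, center - flank, center + flank, type='sum')[0]
    bw.close()
    return total or 0.0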

Tasks

  • [x] write the dataloader
  • [x] load all the data
  • [x] plot the response variable distribution (histogram, QQ-plots)
  • [x] get the predictions across all the regions from bpnet
    • [x] save to the hdf5 file
  • [x] train a simple model on top
  • [x] evaluate using simple scatterplots
  • [ ] exclude promoter regions and re-train (see the sketch below)
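
A hypothetical sketch of the promoter-exclusion step (the TSS bed path and the 1 kb flank are placeholders):

In [ ]:
from pybedtools import BedTool

peaks = BedTool(f"{ddir}/processed/activity/data/peaks.bed")
# pad annotated TSSs by 1 kb on each side and drop any peak overlapping them
promoters = BedTool("tss.bed").slop(b=1000, genome='mm10')
peaks.intersect(promoters, v=True).saveas(f"{ddir}/processed/activity/data/peaks.no-promoter.bed")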

Other tasks

  • [ ] fine-tune the whole model
  • [ ] get the importance scores
In [1]:
from basepair.imports import *
import pybedtools
from genomelake.extractors import BigwigExtractor
from basepair.extractors import StrandedBigWigExtractor, bw_extract
from kipoiseq.transforms import ResizeInterval
from basepair.modisco.table import ModiscoData
from pybedtools import BedTool
from basepair.cli.modisco import load_ranges
from basepair.plot.heatmaps import (heatmap_importance_profile, normalize, multiple_heatmap_stranded_profile,
                                    heatmap_stranded_profile)
from basepair.plot.profiles import plot_stranded_profile
import hvplot.pandas
from basepair.plots import regression_eval
Using TensorFlow backend.
In [2]:
create_tf_session(0)
Out[2]:
<tensorflow.python.client.session.Session at 0x7fb03318c4a8>

Load the data

In [3]:
df = pd.read_csv(f"{ddir}/processed/chipnexus/external-data.tsv", sep='\t')
df = df.set_index('assay')
In [4]:
df
Out[4]:
            axis  path
assay
DNase          0  /srv/scratch/avsec/wo...
DNase          0  /srv/scratch/avsec/wo...
DNase-HINT     0  /srv/scratch/avsec/wo...
...          ...  ...
Groseq         0  /srv/scratch/avsec/wo...
MNase-wt       0  /srv/scratch/avsec/wo...
MNase-h4       0  /srv/scratch/avsec/wo...

16 rows × 2 columns

Write the bed file

In [5]:
from basepair.cli.schemas import DataSpec

ds = DataSpec.load(f"{ddir}/processed/chipnexus/exp/models/oct-sox-nanog-klf/dataspec.yml")

!mkdir -p {ddir}/processed/activity/data/

def read_factor(factor, filename):
    """Read the first three BED columns of a peak file and tag each row with the factor name."""
    df = pd.read_table(filename, header=None, usecols=[0, 1, 2])
    df[3] = factor
    df.columns = ['chrom', 'start', 'end', 'name']
    return df

dfc = pd.concat([read_factor(k, ds.task_specs[k].peaks) for k in ds.task_specs], axis=0)

dfc.to_csv(f"{ddir}/processed/activity/data/peaks.bed", sep='\t', index=False, header=False)
In [6]:
!head -n 2 {ddir}/processed/activity/data/peaks.bed
chrX	143483044	143483045	Oct4
chr3	122145563	122145564	Oct4

Prepare the bigwigs

In [7]:
assays = ['H3K27ac', 'PolII', 'Groseq']
In [8]:
def tolist(s):
    """Normalize df.loc results: a single path (str) becomes a one-element list."""
    if isinstance(s, str):
        return [s]
    else:
        return list(s)
In [9]:
bigwigs = {a: tolist(df.loc[a].path) for a in assays}
In [10]:
bigwigs
Out[10]:
{'H3K27ac': ['/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/H3K27ac_ChIP-seq_WT_rep1_blacklisted.bw',
  '/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/H3K27ac_ChIP-seq_WT_rep2_blacklisted.bw'],
 'PolII': ['/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/PolII_ChIP-seq_WT_rep1_blacklisted.bw',
  '/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/PolII_ChIP-seq_WT_rep2_blacklisted.bw'],
 'Groseq': ['/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-15-groseq-from-melanie/GRO-seq_WT_1_blacklisted.bw']}

Write the dataloader

In [23]:
from basepair.config import valid_chr, test_chr

from basepair.datasets import *
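
ActivityDataset comes from basepair.datasets. As a rough sketch of its contract (not the actual implementation), it resizes each peak to a fixed window, one-hot encodes the sequence, and sums each bigwig within ±1 kb of the peak center; pysam and pyBigWig are assumed available:

In [ ]:
import numpy as np
import pandas as pd
import pyBigWig
from pysam import FastaFile

BASE2COL = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def one_hot(seq):
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, b in enumerate(seq.upper()):
        if b in BASE2COL:
            out[i, BASE2COL[b]] = 1.0  # N stays all-zero
    return out

class ActivityDatasetSketch:
    def __init__(self, bed_file, fasta_file, bigwigs, seq_width=1000,
                 target_flank=1000, incl_chromosomes=None, excl_chromosomes=None):
        df = pd.read_table(bed_file, header=None,
                           names=['chrom', 'start', 'end', 'name'])
        if incl_chromosomes is not None:
            df = df[df.chrom.isin(incl_chromosomes)]
        if excl_chromosomes is not None:
            df = df[~df.chrom.isin(excl_chromosomes)]
        self.df = df.reset_index(drop=True)
        self.fasta = FastaFile(fasta_file)
        self.bigwigs = bigwigs            # {assay: [replicate bigwig paths]}
        self.seq_width = seq_width        # model input: 1 kb one-hot sequence
        self.target_flank = target_flank  # targets: counts within +/- 1 kb

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        center = (row.start + row.end) // 2
        s = center - self.seq_width // 2
        seq = self.fasta.fetch(row.chrom, s, s + self.seq_width)
        targets = {}
        for assay, paths in self.bigwigs.items():
            total = 0.0
            for p in paths:  # sum counts across replicates
                bw = pyBigWig.open(p)
                v = bw.stats(row.chrom, center - self.target_flank,
                             center + self.target_flank, type='sum')[0]
                bw.close()
                total += v or 0.0
            targets[assay] = total
        return {'inputs': {'seq': one_hot(seq)},
                'targets': targets,
                'metadata': {'name': row['name']}}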
In [53]:
ds.fasta_file
Out[53]:
'/mnt/data/pipeline_genome_data/mm10/mm10_no_alt_analysis_set_ENCODE.fasta'
In [30]:
dl = ActivityDataset(f"{ddir}/processed/activity/data/peaks.bed", ds.fasta_file, bigwigs, 
                     excl_chromosomes=valid_chr + test_chr)

# load all
train = dl.load_all(num_workers=10)

dfy = pd.DataFrame(train['targets'])

valid = ActivityDataset(f"{ddir}/processed/activity/data/peaks.bed", ds.fasta_file, bigwigs, 
                        incl_chromosomes=valid_chr).load_all(num_workers=10)

dfy_valid = pd.DataFrame(valid['targets'])

test = ActivityDataset(f"{ddir}/processed/activity/data/peaks.bed", ds.fasta_file, bigwigs, 
                       incl_chromosomes=test_chr).load_all(num_workers=10)

dfy_test = pd.DataFrame(test['targets'])
In [31]:
dfy.head()
Out[31]:
    H3K27ac     PolII    Groseq
0   72178.0  230052.0    1577.0
1  440826.0  164524.0  320577.0
2  135326.0   83775.0  229044.0
3   44400.0   22817.0   77130.0
4  246205.0   89963.0  282236.0
In [20]:
len(dl)
Out[20]:
61205
In [21]:
dl[0]
Out[21]:
{'inputs': {'seq': array([[0., 0., 0., 1.],
         [0., 1., 0., 0.],
         [0., 0., 0., 1.],
         [0., 1., 0., 0.],
         [0., 1., 0., 0.],
         [0., 0., 0., 1.],
         [0., 1., 0., 0.],
         [0., 0., 0., 1.],
         [1., 0., 0., 0.],
         [0., 1., 0., 0.],
         [1., 0., 0., 0.],
         [1., 0., 0., 0.],
         [0., 0., 1., 0.],
         [0., 0., 0., 1.],
         [0., 0., 0., 1.],
         [0., 0., 0., 1.],
         [0., 1., 0., 0.],
         [1., 0., 0., 0.],
         [0., 0., 1., 0.],
         [0., 0., 0., 1.],
         ...,
         [0., 1., 0., 0.],
         [1., 0., 0., 0.],
         [1., 0., 0., 0.],
         [0., 1., 0., 0.],
         [1., 0., 0., 0.],
         [1., 0., 0., 0.],
         [0., 0., 0., 1.],
         [1., 0., 0., 0.],
         [0., 0., 0., 1.],
         [0., 0., 1., 0.],
         [1., 0., 0., 0.],
         [1., 0., 0., 0.],
         [0., 1., 0., 0.],
         [0., 0., 0., 1.],
         [1., 0., 0., 0.],
         [1., 0., 0., 0.],
         [0., 1., 0., 0.],
         [0., 1., 0., 0.],
         [1., 0., 0., 0.],
         [0., 0., 1., 0.]], dtype=float32)},
 'targets': {'H3K27ac': 72178.0, 'PolII': 230052.0, 'Groseq': 1577.0},
 'metadata': {'ranges': GenomicRanges(chr='chrX', start=143482545, end=143483545, id='0', strand='*'),
  'ranges_wide': GenomicRanges(chr='chrX', start=143482045, end=143484045, id='Oct4', strand='.'),
  'name': 'Oct4'}}

Output distribution

Histograms
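
A minimal sketch of the histogram panel (log10-transformed totals per assay; assumes the dfy frame loaded above):

In [ ]:
fig, axes = plt.subplots(1, len(assays), figsize=(9, 3), sharey=True)
for a, ax in zip(assays, axes):
    ax.hist(np.log10(1 + dfy[a]), bins=50)
    ax.set_title(a)
    ax.set_xlabel('log10(1 + counts)')
plt.tight_layout()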

QQ-plots

In [23]:
import scipy.stats as stats
In [24]:
fig, axes = plt.subplots(1, len(assays), figsize=(9, 3), sharex=True, sharey=True)
for a, ax in zip(assays, axes):
    stats.probplot(np.log10(1 + dfy[a]), dist="norm", plot=ax);
    ax.set_title(a)
plt.tight_layout()

Get bottleneck predictions

In [11]:
from basepair.BPNet import BPNetPredictor
from keras.models import Model, Sequential
import keras.layers as kl
In [12]:
dfe = pd.read_csv("https://docs.google.com/spreadsheets/d/1n3l2HXKSNpmNUOifD41uRzDEAgmOqXMQDxquRaz6WLg/export?gid=0&format=csv")
In [13]:
dfe = dfe[dfe.train == True]
In [14]:
model_exps = {
    #"nexus/binary.gw": 'nexus,gw,OSNK,1,0,0,FALSE,valid,0.5,64,25,0.001,9,FALSE',
    "nexus/profile.peaks.bias-corrected": 'nexus,peaks,OSNK,0,10,1,FALSE,same,0.5,64,25,0.004,9,FALSE,[1,50],TRUE',
    "nexus/profile.peaks.non-bias-corrected": 'nexus,peaks,OSNK,0,10,1,FALSE,same,0.5,64,25,0.004,9,FALSE-2',
    #"seq/binary.gw": 'seq,gw,OSN,1,0,0,FALSE,valid,0.5,64,50,0.001,9,FALSE/',
    "seq/profile.peaks.bias-corrected": 'seq,peaks,OSN,0,10,1,FALSE,same,0.5,64,50,0.004,9,FALSE,[1,50],TRUE',
    "seq/profile.peaks.non-bias-corrected": 'seq,peaks,OSN,0,10,1,FALSE,same,0.5,64,50,0.004,9,FALSE',
    'nexus/profile.peaks-union.bias-corrected': 'nexus,nexus-seq-union,OSN,0,10,1,FALSE,same,0.5,64,25,0.004,9,FALSE,[1,50],TRUE',
    'seq/profile.peaks-union.bias-corrected': 'seq,nexus-seq-union,OSN,0,10,1,FALSE,same,0.5,64,50,0.004,9,FALSE,[1,50],TRUE',
}
In [15]:
ls ../../chipnexus/train/seqmodel
2.1-plot-hyper-params.ipynb                     log/
ChIP-nexus.dataspec.seq-nexus-intersection.yml  modisco.hp.yml
ChIP-nexus.dataspec.yml                         modisco-template.ipynb
ChIP-nexus-default.gin                          nexus_figs@
ChIP-nexus-default-gw.gin                       output@
ChIP-nexus-tasks.OSNK.tsv                       prepare-data.smk
ChIP-nexus-tasks.OSN.tsv                        problem-gw.gin
ChIP-seq.dataspec.seq-nexus-intersection.yml    problem-nexus-seq-union.gin
ChIP-seq.dataspec.yml                           problem-peaks.gin
ChIP-seq-default.gin                            README.md
ChIP-seq-tasks.OSN.tsv                          Snakefile
figures/                                        wandb-debug.log
joint-model-valid.gin
In [16]:
base_mdir = Path('/oak/stanford/groups/akundaje/avsec/basepair/data/processed/comparison/output')
In [17]:
mdir = base_mdir / model_exps['nexus/profile.peaks.bias-corrected']
In [18]:
from basepair.seqmodel import SeqModel
In [19]:
bpnet = SeqModel.from_mdir(mdir)
WARNING:tensorflow:From /users/avsec/bin/anaconda3/envs/chipnexus/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:497: calling conv1d (from tensorflow.python.ops.nn_ops) with data_format=NHWC is deprecated and will be removed in a future version.
Instructions for updating:
`NHWC` for data_format is deprecated, use `NWC` instead
WARNING:tensorflow:From /users/avsec/bin/anaconda3/envs/chipnexus/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
In [20]:
import tensorflow as tf
In [21]:
bottleneck_model = bpnet.bottleneck_model()
In [22]:
bottleneck_model.predict(np.ones((1, 1000, 4))).shape
Out[22]:
(1, 1000, 64)
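
If bpnet.bottleneck_model() were unavailable, the same kind of sub-model can be built with plain Keras (the .model attribute and the layer name below are assumptions):

In [ ]:
from keras.models import Model

def make_bottleneck_model(keras_model, layer_name):
    """Map the original inputs to the activations of one inner layer."""
    return Model(inputs=keras_model.inputs,
                 outputs=keras_model.get_layer(layer_name).output)

# e.g.: make_bottleneck_model(bpnet.model, 'final_conv')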
In [34]:
bottleneck_predictions = bottleneck_model.predict(train['inputs'], batch_size=32, verbose=1)
61205/61205 [==============================] - 19s 302us/step
In [35]:
bottleneck_predictions_valid = bottleneck_model.predict(valid['inputs'], batch_size=32, verbose=1)
bottleneck_predictions_test = bottleneck_model.predict(test['inputs'], batch_size=32, verbose=1)
19137/19137 [==============================] - 6s 294us/step
18086/18086 [==============================] - 5s 300us/step
In [35]:
# Simple linear head: global average pooling + linear readout
# (superseded by the deeper head defined in the next cell).
top_model = Sequential([
    kl.GlobalAvgPool1D(input_shape=(1000, 64)),
    kl.Dense(3)
])
In [36]:
# Head actually used below: coarse max-pooling over the 1 kb bottleneck
# profile, then a small MLP predicting all three assays jointly.
top_model = Sequential([
    kl.MaxPool1D(pool_size=50, input_shape=(1000, 64)),
    kl.Flatten(),
    kl.Dense(64, activation='relu'),
    # kl.Dropout(.5)
    kl.Dense(3)
])
In [37]:
from concise.metrics import var_explained
from keras.callbacks import EarlyStopping
from sklearn.preprocessing import StandardScaler
In [14]:
mdir = '../../../src/chipnexus/train/seqmodel/output/nexus,peaks,OSNK,0,10,1,FALSE,same,0.5,64,25,0.004,9,FALSE,[1,50],TRUE/'
In [15]:
# mdir = '../../../src/chipnexus/train/seqmodel/output/nexus,gw,OSNK,1,0,0,FALSE,same,0.5,64,25,0.001,9,FALSE/'

# mdir = '../../../src/chipnexus/train/seqmodel/output/seq,peaks,OSN,0,10,1,FALSE,same,0.5,64,50,0.004,9,FALSE,[1,50],TRUE/'

# mdir = '../../../src/chipnexus/train/seqmodel/output/nexus,peaks,OSNK,0,10,1,FALSE,same,0.5,64,25,0.004,9,FALSE-2/'
In [38]:
top_model.compile("adam", 'mse', metrics=[var_explained])
In [39]:
preproc = StandardScaler()
In [40]:
y = preproc.fit_transform(np.log10(1 + dfy).values)
y_valid = preproc.transform(np.log10(1 + dfy_valid).values)
y_test = preproc.transform(np.log10(1 + dfy_test).values)
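
Predictions live in this standardized log space; a small helper (a sketch, reusing the fitted preproc scaler) maps them back to raw counts:

In [ ]:
def to_raw_counts(y_scaled, preproc):
    """Invert StandardScaler, then the log10(1 + y) transform."""
    return 10 ** preproc.inverse_transform(y_scaled) - 1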
In [41]:
top_model.fit(bottleneck_predictions, y, batch_size=512,
              epochs=100,
              validation_data=(bottleneck_predictions_valid, y_valid),
              callbacks=[EarlyStopping(patience=5, restore_best_weights=True)]
             )
Train on 61205 samples, validate on 19137 samples
Epoch 1/100
61205/61205 [==============================] - 37s 597us/step - loss: 1.0427 - var_explained: -0.0075 - val_loss: 0.9008 - val_var_explained: 0.1081
Epoch 2/100
61205/61205 [==============================] - 35s 576us/step - loss: 0.8590 - var_explained: 0.1434 - val_loss: 0.8625 - val_var_explained: 0.1355
Epoch 3/100
61205/61205 [==============================] - 34s 562us/step - loss: 0.8388 - var_explained: 0.1660 - val_loss: 0.8510 - val_var_explained: 0.1471
Epoch 4/100
61205/61205 [==============================] - 32s 531us/step - loss: 0.8253 - var_explained: 0.1788 - val_loss: 0.8519 - val_var_explained: 0.1445
Epoch 5/100
61205/61205 [==============================] - 32s 521us/step - loss: 0.8199 - var_explained: 0.1868 - val_loss: 0.8399 - val_var_explained: 0.1588
Epoch 6/100
61205/61205 [==============================] - 34s 550us/step - loss: 0.8116 - var_explained: 0.1946 - val_loss: 0.8534 - val_var_explained: 0.1601
Epoch 7/100
61205/61205 [==============================] - 34s 550us/step - loss: 0.8062 - var_explained: 0.1992 - val_loss: 0.8327 - val_var_explained: 0.1649
Epoch 8/100
61205/61205 [==============================] - 34s 552us/step - loss: 0.8002 - var_explained: 0.2030 - val_loss: 0.8323 - val_var_explained: 0.1650
Epoch 9/100
61205/61205 [==============================] - 33s 547us/step - loss: 0.7961 - var_explained: 0.2070 - val_loss: 0.8405 - val_var_explained: 0.1646
Epoch 10/100
61205/61205 [==============================] - 34s 562us/step - loss: 0.7956 - var_explained: 0.2102 - val_loss: 0.8317 - val_var_explained: 0.1650
Epoch 11/100
61205/61205 [==============================] - 34s 555us/step - loss: 0.7895 - var_explained: 0.2130 - val_loss: 0.8711 - val_var_explained: 0.1704
Epoch 12/100
61205/61205 [==============================] - 33s 545us/step - loss: 0.7931 - var_explained: 0.2142 - val_loss: 0.8312 - val_var_explained: 0.1698
Epoch 13/100
61205/61205 [==============================] - 33s 542us/step - loss: 0.7902 - var_explained: 0.2170 - val_loss: 0.8378 - val_var_explained: 0.1672
Epoch 14/100
61205/61205 [==============================] - 34s 549us/step - loss: 0.7846 - var_explained: 0.2193 - val_loss: 0.8414 - val_var_explained: 0.1746
Epoch 15/100
61205/61205 [==============================] - 33s 542us/step - loss: 0.7862 - var_explained: 0.2203 - val_loss: 0.8253 - val_var_explained: 0.1761
Epoch 16/100
61205/61205 [==============================] - 33s 545us/step - loss: 0.7842 - var_explained: 0.2217 - val_loss: 0.8279 - val_var_explained: 0.1681
Epoch 17/100
61205/61205 [==============================] - 34s 561us/step - loss: 0.7792 - var_explained: 0.2238 - val_loss: 0.8271 - val_var_explained: 0.1716
Epoch 18/100
61205/61205 [==============================] - 34s 551us/step - loss: 0.7810 - var_explained: 0.2249 - val_loss: 0.8297 - val_var_explained: 0.1693
Epoch 19/100
61205/61205 [==============================] - 34s 559us/step - loss: 0.7792 - var_explained: 0.2259 - val_loss: 0.8286 - val_var_explained: 0.1690
Epoch 20/100
61205/61205 [==============================] - 34s 561us/step - loss: 0.7790 - var_explained: 0.2266 - val_loss: 0.8314 - val_var_explained: 0.1692
Out[41]:
<keras.callbacks.History at 0x7faef00c3b38>

Evaluate

In [42]:
ypred_valid = top_model.predict(bottleneck_predictions_valid)
In [43]:
ypred_valid.shape
Out[43]:
(19137, 3)
In [44]:
fig, axes = plt.subplots(len(assays), 1, figsize=(5, 11), sharex=True, sharey=True)
for i, (a, ax) in enumerate(zip(assays, axes)):
    regression_eval(ypred_valid[:,i], y_valid[:,i], alpha=0.05, task=a, ax=ax);
plt.tight_layout()
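
A complementary single-number summary per assay (a sketch; assumes scipy is available):

In [ ]:
from scipy.stats import spearmanr

for i, a in enumerate(assays):
    rho = spearmanr(ypred_valid[:, i], y_valid[:, i]).correlation
    print(f"{a}: Spearman rho = {rho:.3f}")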

Fine-tune

In [45]:
whole_model = Sequential([bottleneck_model, top_model])
In [46]:
whole_model.compile("adam", 'mse', metrics=[var_explained])
In [47]:
whole_model.fit(train['inputs']['seq'], y, batch_size=512,
                epochs=100,
                validation_data=(valid['inputs']['seq'], y_valid),
                callbacks=[EarlyStopping(patience=5, restore_best_weights=True)])
Train on 61205 samples, validate on 19137 samples
Epoch 1/100
61205/61205 [==============================] - 58s 942us/step - loss: 0.8000 - var_explained: 0.2200 - val_loss: 0.8156 - val_var_explained: 0.1810
Epoch 2/100
61205/61205 [==============================] - 54s 876us/step - loss: 0.7387 - var_explained: 0.2672 - val_loss: 0.8186 - val_var_explained: 0.1848
Epoch 3/100
61205/61205 [==============================] - 54s 875us/step - loss: 0.7077 - var_explained: 0.3051 - val_loss: 0.8158 - val_var_explained: 0.1807
Epoch 4/100
61205/61205 [==============================] - 54s 884us/step - loss: 0.6664 - var_explained: 0.3412 - val_loss: 0.8300 - val_var_explained: 0.1600
Epoch 5/100
61205/61205 [==============================] - 54s 885us/step - loss: 0.6361 - var_explained: 0.3739 - val_loss: 0.8427 - val_var_explained: 0.1643
Epoch 6/100
61205/61205 [==============================] - 54s 888us/step - loss: 0.6092 - var_explained: 0.4047 - val_loss: 0.8451 - val_var_explained: 0.1434
Out[47]:
<keras.callbacks.History at 0x7fa8be8bd278>
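
Validation performance degrades after the first epoch, so full fine-tuning overfits quickly; one common mitigation (a sketch, the learning rate value is a guess) is to re-compile with a smaller step size before fitting:

In [ ]:
from keras.optimizers import Adam

# slow down updates so the pre-trained bottleneck weights move gently
whole_model.compile(Adam(lr=1e-4), 'mse', metrics=[var_explained])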
In [48]:
ypred_valid = whole_model.predict(valid['inputs']['seq'])
In [ ]:
ypred_valid.shape
In [ ]:
fig, axes = plt.subplots(len(assays), 1, figsize=(5, 11), sharex=True, sharey=True)
for i, (a, ax) in enumerate(zip(assays, axes)):
    regression_eval(ypred_valid[:,i], y_valid[:,i], alpha=0.05, task=a, ax=ax);
plt.tight_layout()

Train from scratch

In [141]:
import keras.backend as K
from keras.models import load_model

def reset_weights(model):
    """Re-run kernel initializers for all layers (note: biases keep their
    trained values; see the fuller variant sketched below)."""
    session = K.get_session()
    for layer in model.layers:
        if hasattr(layer, 'kernel_initializer'):
            layer.kernel.initializer.run(session=session)
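
reset_weights only re-runs kernel initializers; a fuller variant (a sketch, assuming TF1-style Keras variables) also resets biases:

In [ ]:
def reset_weights_full(model):
    session = K.get_session()
    for layer in model.layers:
        if hasattr(layer, 'kernel_initializer'):
            layer.kernel.initializer.run(session=session)
        # layers built with use_bias=False have layer.bias = None
        if getattr(layer, 'bias', None) is not None:
            layer.bias.initializer.run(session=session)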
In [137]:
whole_model.save(f"{ddir}/processed/activity/models/dense/fine-tuned.h5")
In [145]:
reinitialized_model = load_model(f"{ddir}/processed/activity/models/dense/fine-tuned.h5")
In [163]:
reset_weights(reinitialized_model.layers[0])
reset_weights(reinitialized_model.layers[1])
In [165]:
reinitialized_model.compile("adam", 'mse', metrics=[var_explained])
In [166]:
reinitialized_model.fit(train['inputs']['seq'], y, batch_size=512,
                epochs=100,
                validation_data=(valid['inputs']['seq'], y_valid),
                callbacks=[EarlyStopping(patience=5)])
Train on 61205 samples, validate on 19137 samples
Epoch 1/100
61205/61205 [==============================] - 58s 942us/step - loss: 1.2780 - var_explained: -0.2762 - val_loss: 0.9018 - val_var_explained: 0.0977
Epoch 2/100
61205/61205 [==============================] - 55s 895us/step - loss: 0.8525 - var_explained: 0.1490 - val_loss: 0.8654 - val_var_explained: 0.1464
Epoch 3/100
61205/61205 [==============================] - 55s 898us/step - loss: 0.8386 - var_explained: 0.1638 - val_loss: 0.8479 - val_var_explained: 0.1573
Epoch 4/100
61205/61205 [==============================] - 55s 897us/step - loss: 0.8318 - var_explained: 0.1739 - val_loss: 0.8420 - val_var_explained: 0.1624
Epoch 5/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.8190 - var_explained: 0.1854 - val_loss: 0.8296 - val_var_explained: 0.1719
Epoch 6/100
61205/61205 [==============================] - 55s 900us/step - loss: 0.8099 - var_explained: 0.1979 - val_loss: 0.8332 - val_var_explained: 0.1779
Epoch 7/100
61205/61205 [==============================] - 55s 902us/step - loss: 0.7944 - var_explained: 0.2122 - val_loss: 0.8246 - val_var_explained: 0.1844
Epoch 8/100
61205/61205 [==============================] - 55s 898us/step - loss: 0.7846 - var_explained: 0.2242 - val_loss: 0.8429 - val_var_explained: 0.1800
Epoch 9/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7831 - var_explained: 0.2324 - val_loss: 0.8144 - val_var_explained: 0.1911
Epoch 10/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7681 - var_explained: 0.2446 - val_loss: 0.8132 - val_var_explained: 0.1915
Epoch 11/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7558 - var_explained: 0.2576 - val_loss: 0.8215 - val_var_explained: 0.1958
Epoch 12/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7397 - var_explained: 0.2733 - val_loss: 0.8182 - val_var_explained: 0.1868
Epoch 13/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7221 - var_explained: 0.2911 - val_loss: 0.8235 - val_var_explained: 0.1914
Epoch 14/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7095 - var_explained: 0.3094 - val_loss: 0.8200 - val_var_explained: 0.1966
Epoch 15/100
61205/61205 [==============================] - 55s 901us/step - loss: 0.6958 - var_explained: 0.3258 - val_loss: 0.8235 - val_var_explained: 0.1777
Out[166]:
<keras.callbacks.History at 0x7f0551c6a0f0>
In [167]:
ypred_valid = reinitialized_model.predict(valid['inputs']['seq'])
In [168]:
ypred_valid.shape
Out[168]:
(19137, 3)
In [169]:
fig, axes = plt.subplots(len(assays), 1, figsize=(5, 11), sharex=True, sharey=True)
for i, (a, ax) in enumerate(zip(assays, axes)):
    regression_eval(ypred_valid[:,i], y_valid[:,i], alpha=0.05, task=a, ax=ax);
plt.tight_layout()