Goal

  • develop an activity model predicting:
    • H3K27ac
    • Pol II
    • GRO-seq

metaplots.html

Design decisions

  • dataset = all ChIP-nexus peaks
  • output = total number of counts within ±1 kb of the region center
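The response variable above, total counts within ±1 kb of the region center, is just a windowed sum over a coverage track. A minimal numpy sketch of that computation (`window_counts` is a hypothetical helper; the notebook itself extracts from bigwigs via genomelake's extractors):

```python
import numpy as np

def window_counts(coverage, center, flank=1000):
    """Sum per-base coverage within +/- `flank` bp of `center`, clipped to the chromosome."""
    start = max(center - flank, 0)
    end = min(center + flank, len(coverage))
    return coverage[start:end].sum()

# Synthetic per-base coverage for a 10 kb "chromosome"
coverage = np.ones(10_000)
print(window_counts(coverage, center=5_000))  # 2 kb window of 1s -> 2000.0
print(window_counts(coverage, center=500))    # clipped at the left edge -> 1500.0
```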

Tasks

  • [x] write the dataloader
  • [x] load all the data
  • [x] plot the response variable distribution (histogram, QQ-plots)
  • [x] get the predictions across all the regions from bpnet
    • [x] save to the hdf5 file
  • [x] train a simple model on top
  • [x] evaluate using simple scatterplots
  • [ ] exclude promoter regions and re-train

Other tasks

  • [ ] fine-tune the whole model
  • [ ] get the importance scores
In [1]:
from basepair.imports import *
import pybedtools
from genomelake.extractors import BigwigExtractor
from basepair.extractors import StrandedBigWigExtractor, bw_extract
from kipoiseq.transforms import ResizeInterval
from basepair.modisco.table import ModiscoData
from pybedtools import BedTool
from basepair.cli.modisco import load_ranges
from basepair.plot.heatmaps import (heatmap_importance_profile, normalize, multiple_heatmap_stranded_profile,
                                    heatmap_stranded_profile)
from basepair.plot.profiles import plot_stranded_profile
import hvplot.pandas
from basepair.plots import regression_eval
Using TensorFlow backend.
In [2]:
create_tf_session(1)
Out[2]:
<tensorflow.python.client.session.Session at 0x7f23c99f51d0>

Load the data

In [3]:
df = pd.read_csv(f"{ddir}/processed/chipnexus/external-data.tsv", sep='\t')
df = df.set_index('assay')
In [4]:
df
Out[4]:
axis path
assay
DNase 0 /srv/scratch/avsec/wo...
DNase 0 /srv/scratch/avsec/wo...
DNase-HINT 0 /srv/scratch/avsec/wo...
... ... ...
Groseq 0 /srv/scratch/avsec/wo...
MNase-wt 0 /srv/scratch/avsec/wo...
MNase-h4 0 /srv/scratch/avsec/wo...

16 rows × 2 columns

Write the bed file

In [5]:
from basepair.cli.schemas import DataSpec

ds = DataSpec.load(f"{ddir}/processed/chipnexus/exp/models/oct-sox-nanog-klf-sall/dataspec-incl-Sall4.yml")

!mkdir -p {ddir}/processed/activity/data/

def read_factor(factor, filename):
    df = pd.read_table(filename, header=None, usecols=[0, 1, 2])
    df[3] = factor
    df.columns = ['chrom', 'start', 'end', 'name']
    return df

dfc = pd.concat([read_factor(k, ds.task_specs[k].peaks) for k in ds.task_specs], axis=0)

dfc.to_csv(f"{ddir}/processed/activity/data/peaks.bed", sep='\t', index=False, header=False)

Prepare the bigwigs

In [6]:
assays = ['H3K27ac', 'PolII', 'Groseq']
In [7]:
def tolist(s):
    """Wrap a single path (str) in a list; convert a Series of paths to a list."""
    if isinstance(s, str):
        return [s]
    else:
        return list(s)
In [8]:
bigwigs = {a: tolist(df.loc[a].path) for a in assays}
In [9]:
bigwigs
Out[9]:
{'H3K27ac': ['/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/H3K27ac_ChIP-seq_WT_rep1_blacklisted.bw',
  '/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/H3K27ac_ChIP-seq_WT_rep2_blacklisted.bw'],
 'PolII': ['/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/PolII_ChIP-seq_WT_rep1_blacklisted.bw',
  '/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-13-histone-chipseq-PMID-28483418/PolII_ChIP-seq_WT_rep2_blacklisted.bw'],
 'Groseq': ['/srv/scratch/avsec/workspace/chipnexus/data/raw/2018-10-15-groseq-from-melanie/GRO-seq_WT_1_blacklisted.bw']}

Write the dataloader

In [10]:
from basepair.config import valid_chr, test_chr

from basepair.datasets import *
In [11]:
dl = ActivityDataset(f"{ddir}/processed/activity/data/peaks.bed", ds.fasta_file, bigwigs, 
                     excl_chromosomes=valid_chr + test_chr)
In [12]:
# load all
train = dl.load_all(num_workers=10)
100%|██████████| 2646/2646 [00:17<00:00, 152.96it/s]
In [13]:
dfy = pd.DataFrame(train['targets'])
In [14]:
valid = ActivityDataset(f"{ddir}/processed/activity/data/peaks.bed", ds.fasta_file, bigwigs, 
                        incl_chromosomes=valid_chr).load_all(num_workers=10)
100%|██████████| 832/832 [00:05<00:00, 154.71it/s]
In [15]:
dfy_valid = pd.DataFrame(valid['targets'])
In [16]:
test = ActivityDataset(f"{ddir}/processed/activity/data/peaks.bed", ds.fasta_file, bigwigs, 
                       incl_chromosomes=test_chr).load_all(num_workers=10)
100%|██████████| 781/781 [00:05<00:00, 146.65it/s]
In [17]:
dfy_test = pd.DataFrame(test['targets'])
In [18]:
dfy.head()
Out[18]:
H3K27ac PolII Groseq
0 72177.0 230052.0 1577.0
1 440983.0 164513.0 320665.0
2 135278.0 83775.0 228978.0
3 44406.0 22818.0 77130.0
4 246385.0 90010.0 282323.0
In [19]:
len(dl)
Out[19]:
84659

Output distribution

Histograms
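No histogram cell survives in this export; a sketch of what it plausibly contained, applying the same log10(1 + counts) transform used for the QQ-plots below (synthetic counts stand in for `dfy` so the snippet is self-contained):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic heavy-tailed counts standing in for the real dfy columns
counts = {a: rng.lognormal(mean=10, sigma=1.5, size=1000)
          for a in ['H3K27ac', 'PolII', 'Groseq']}

fig, axes = plt.subplots(1, 3, figsize=(9, 3), sharey=True)
for (a, x), ax in zip(counts.items(), axes):
    ax.hist(np.log10(1 + x), bins=50)
    ax.set_title(a)
    ax.set_xlabel('log10(1 + counts)')
plt.tight_layout()
```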

QQ-plots

In [20]:
import scipy.stats as stats
In [21]:
%matplotlib inline
paper_config()
In [22]:
fig, axes = plt.subplots(1, len(assays), figsize=(9, 3), sharex=True, sharey=True)
for a, ax in zip(assays, axes):
    stats.probplot(np.log10(1 + dfy[a]), dist="norm", plot=ax);
    ax.set_title(a)
plt.tight_layout()

Get bottleneck predictions

In [23]:
from basepair.BPNet import BPNetPredictor
from keras.models import Model, Sequential
import keras.layers as kl
In [24]:
mdir = f"{ddir}/processed/chipnexus/exp/models/osnk-pstat-sall-smad-zfp/models/c_task_weight=5,filters=128/"
In [25]:
bpnet = BPNetPredictor.from_mdir(mdir)
WARNING:tensorflow:From /users/avsec/bin/anaconda3/envs/chipnexus/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:497: calling conv1d (from tensorflow.python.ops.nn_ops) with data_format=NHWC is deprecated and will be removed in a future version.
Instructions for updating:
`NHWC` for data_format is deprecated, use `NWC` instead
WARNING:tensorflow:From /users/avsec/bin/anaconda3/envs/chipnexus/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
In [26]:
bpnet.tasks
Out[26]:
['Oct4', 'Sox2', 'Nanog', 'Klf4', 'Pstat3', 'Sall4', 'Smad3', 'Zfp281']
In [27]:
bpnet.model.get_layer("add_9").output
Out[27]:
<tf.Tensor 'add_9/add_8:0' shape=(?, 1000, 128) dtype=float32>
In [28]:
bottleneck_model = Model(bpnet.model.inputs, bpnet.model.get_layer("add_9").output)
In [29]:
bottleneck_predictions = bottleneck_model.predict(train['inputs'], batch_size=32, verbose=1)
84659/84659 [==============================] - 47s 554us/step
In [30]:
bottleneck_predictions_valid = bottleneck_model.predict(valid['inputs'], batch_size=32, verbose=1)
bottleneck_predictions_test = bottleneck_model.predict(test['inputs'], batch_size=32, verbose=1)
26615/26615 [==============================] - 15s 557us/step
24971/24971 [==============================] - 14s 545us/step
In [31]:
bottleneck_predictions.shape
Out[31]:
(84659, 1000, 128)
In [40]:
n_filters = 128
In [41]:
# Simplest head: average-pool the bottleneck, then a linear readout (redefined in the next cell)
top_model = Sequential([
    kl.GlobalAvgPool1D(input_shape=(1000, n_filters)),  # (1000, 128) -> (128,)
    kl.Dense(3)
])
In [42]:
top_model = Sequential([
    kl.MaxPool1D(pool_size=50, input_shape=(1000, n_filters)),  # (1000, 128) -> (20, 128)
    kl.Flatten(),                                               # -> 20 * 128 = 2560 features
    kl.Dense(64, activation='relu'),
    # kl.Dropout(0.5),
    kl.Dense(3)
])
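The head above reduces the (1000, 128) bottleneck to 20 pooled positions × 128 filters = 2560 features before the dense layers. The shape arithmetic can be checked with plain numpy (`max_pool_1d` is a hypothetical helper mimicking `kl.MaxPool1D`, not Keras code):

```python
import numpy as np

def max_pool_1d(x, pool_size):
    """Non-overlapping max pooling along axis 1 of a (batch, length, channels) array."""
    n, length, channels = x.shape
    steps = length // pool_size  # Keras drops any remainder
    return x[:, :steps * pool_size].reshape(n, steps, pool_size, channels).max(axis=2)

x = np.random.randn(4, 1000, 128)      # a small batch of bottleneck activations
pooled = max_pool_1d(x, pool_size=50)  # -> (4, 20, 128)
flat = pooled.reshape(4, -1)           # -> (4, 2560), the Dense(64) input size
```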
In [43]:
from concise.metrics import var_explained
from keras.callbacks import EarlyStopping
from sklearn.preprocessing import StandardScaler
In [44]:
top_model.compile("adam", 'mse', metrics=[var_explained])
In [45]:
preproc = StandardScaler()
In [46]:
y = preproc.fit_transform(np.log10(1 + dfy).values)
y_valid = preproc.transform(np.log10(1 + dfy_valid).values)
y_test = preproc.transform(np.log10(1 + dfy_test).values)
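Since the targets are log10(1 + x) transformed and then standardized, model predictions can be mapped back to raw counts by inverting both steps. A numpy sketch of the round trip (StandardScaler stores per-column `mean_` and `scale_`; here they are computed by hand with the same population-std convention):

```python
import numpy as np

counts = np.array([[100., 2000.], [5000., 30.], [900., 700.]])

logy = np.log10(1 + counts)
mean, scale = logy.mean(axis=0), logy.std(axis=0)
y = (logy - mean) / scale            # what the model is trained on

# Invert: un-standardize, then undo log10(1 + x)
recovered = 10 ** (y * scale + mean) - 1
```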
In [ ]:
top_model.fit(bottleneck_predictions, y, batch_size=512,
              epochs=100,
              validation_data=(bottleneck_predictions_valid, y_valid),
              callbacks=[EarlyStopping(patience=5)]
             )
Train on 84659 samples, validate on 26615 samples
Epoch 1/100
84659/84659 [==============================] - 80s 948us/step - loss: 0.9807 - var_explained: 0.0350 - val_loss: 0.8350 - val_var_explained: 0.1760
Epoch 2/100
84659/84659 [==============================] - 65s 766us/step - loss: 0.8146 - var_explained: 0.1933 - val_loss: 0.8230 - val_var_explained: 0.1855
Epoch 3/100
84659/84659 [==============================] - 70s 829us/step - loss: 0.8014 - var_explained: 0.2067 - val_loss: 0.8188 - val_var_explained: 0.1895
Epoch 4/100
84659/84659 [==============================] - 65s 773us/step - loss: 0.7912 - var_explained: 0.2151 - val_loss: 0.8316 - val_var_explained: 0.1925
Epoch 5/100
84659/84659 [==============================] - 71s 840us/step - loss: 0.7868 - var_explained: 0.2218 - val_loss: 0.8289 - val_var_explained: 0.1911
Epoch 6/100
57856/84659 [===================>..........] - ETA: 15s - loss: 0.7833 - var_explained: 0.2256
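`EarlyStopping(patience=5)` above halts training once val_loss has gone five consecutive epochs without improving. A minimal sketch of that stopping rule (`epochs_to_run` is a hypothetical helper, not Keras's implementation):

```python
def epochs_to_run(val_losses, patience=5):
    """Return the 1-based epoch at which patience-based early stopping halts."""
    best, wait = float('inf'), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0  # improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)  # never triggered: all epochs run
```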

Evaluate

In [ ]:
ypred_valid = top_model.predict(bottleneck_predictions_valid)
In [52]:
ypred_valid.shape
Out[52]:
(26615, 3)
In [53]:
fig, axes = plt.subplots(len(assays), 1, figsize=(5, 11), sharex=True, sharey=True)
for i, (a, ax) in enumerate(zip(assays, axes)):
    regression_eval(ypred_valid[:,i], y_valid[:,i], alpha=0.05, task=a, ax=ax);
plt.tight_layout()

Fine-tune

In [122]:
whole_model = Sequential([bottleneck_model, top_model])
In [123]:
whole_model.compile("adam", 'mse', metrics=[var_explained])
In [127]:
whole_model.fit(train['inputs']['seq'], y, batch_size=512,
                epochs=100,
                validation_data=(valid['inputs']['seq'], y_valid),
                callbacks=[EarlyStopping(patience=5)])
Train on 61205 samples, validate on 19137 samples
Epoch 1/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.6910 - var_explained: 0.3158 - val_loss: 0.8573 - val_var_explained: 0.1735
Epoch 2/100
61205/61205 [==============================] - 55s 898us/step - loss: 0.6619 - var_explained: 0.3485 - val_loss: 0.8192 - val_var_explained: 0.1749
Epoch 3/100
61205/61205 [==============================] - 55s 900us/step - loss: 0.6298 - var_explained: 0.3824 - val_loss: 0.8426 - val_var_explained: 0.1515
Epoch 4/100
61205/61205 [==============================] - 55s 901us/step - loss: 0.5947 - var_explained: 0.4155 - val_loss: 0.9512 - val_var_explained: 0.1563
Epoch 5/100
61205/61205 [==============================] - 55s 901us/step - loss: 0.5648 - var_explained: 0.4474 - val_loss: 0.8477 - val_var_explained: 0.1414
Epoch 6/100
61205/61205 [==============================] - 55s 902us/step - loss: 0.5450 - var_explained: 0.4743 - val_loss: 0.8559 - val_var_explained: 0.1323
Epoch 7/100
61205/61205 [==============================] - 55s 901us/step - loss: 0.5167 - var_explained: 0.5014 - val_loss: 0.8765 - val_var_explained: 0.1221
Out[127]:
<keras.callbacks.History at 0x7f08c54f6400>
In [128]:
ypred_valid = whole_model.predict(valid['inputs']['seq'])
In [129]:
ypred_valid.shape
Out[129]:
(19137, 3)
In [130]:
fig, axes = plt.subplots(len(assays), 1, figsize=(5, 11), sharex=True, sharey=True)
for i, (a, ax) in enumerate(zip(assays, axes)):
    regression_eval(ypred_valid[:,i], y_valid[:,i], alpha=0.05, task=a, ax=ax);
plt.tight_layout()

Train from scratch

In [141]:
import keras.backend as K
from keras.models import load_model

def reset_weights(model):
    """Re-run each layer's kernel initializer in-place (biases are left untouched)."""
    session = K.get_session()
    for layer in model.layers:
        # Only layers with a kernel (Dense, Conv, ...) are re-initialized
        if hasattr(layer, 'kernel_initializer'):
            layer.kernel.initializer.run(session=session)
In [137]:
whole_model.save(f"{ddir}/processed/activity/models/dense/fine-tuned.h5")
In [145]:
reinitialized_model = load_model(f"{ddir}/processed/activity/models/dense/fine-tuned.h5")
In [163]:
reset_weights(reinitialized_model.layers[0])
reset_weights(reinitialized_model.layers[1])
In [165]:
reinitialized_model.compile("adam", 'mse', metrics=[var_explained])
In [166]:
reinitialized_model.fit(train['inputs']['seq'], y, batch_size=512,
                epochs=100,
                validation_data=(valid['inputs']['seq'], y_valid),
                callbacks=[EarlyStopping(patience=5)])
Train on 61205 samples, validate on 19137 samples
Epoch 1/100
61205/61205 [==============================] - 58s 942us/step - loss: 1.2780 - var_explained: -0.2762 - val_loss: 0.9018 - val_var_explained: 0.0977
Epoch 2/100
61205/61205 [==============================] - 55s 895us/step - loss: 0.8525 - var_explained: 0.1490 - val_loss: 0.8654 - val_var_explained: 0.1464
Epoch 3/100
61205/61205 [==============================] - 55s 898us/step - loss: 0.8386 - var_explained: 0.1638 - val_loss: 0.8479 - val_var_explained: 0.1573
Epoch 4/100
61205/61205 [==============================] - 55s 897us/step - loss: 0.8318 - var_explained: 0.1739 - val_loss: 0.8420 - val_var_explained: 0.1624
Epoch 5/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.8190 - var_explained: 0.1854 - val_loss: 0.8296 - val_var_explained: 0.1719
Epoch 6/100
61205/61205 [==============================] - 55s 900us/step - loss: 0.8099 - var_explained: 0.1979 - val_loss: 0.8332 - val_var_explained: 0.1779
Epoch 7/100
61205/61205 [==============================] - 55s 902us/step - loss: 0.7944 - var_explained: 0.2122 - val_loss: 0.8246 - val_var_explained: 0.1844
Epoch 8/100
61205/61205 [==============================] - 55s 898us/step - loss: 0.7846 - var_explained: 0.2242 - val_loss: 0.8429 - val_var_explained: 0.1800
Epoch 9/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7831 - var_explained: 0.2324 - val_loss: 0.8144 - val_var_explained: 0.1911
Epoch 10/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7681 - var_explained: 0.2446 - val_loss: 0.8132 - val_var_explained: 0.1915
Epoch 11/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7558 - var_explained: 0.2576 - val_loss: 0.8215 - val_var_explained: 0.1958
Epoch 12/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7397 - var_explained: 0.2733 - val_loss: 0.8182 - val_var_explained: 0.1868
Epoch 13/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7221 - var_explained: 0.2911 - val_loss: 0.8235 - val_var_explained: 0.1914
Epoch 14/100
61205/61205 [==============================] - 55s 899us/step - loss: 0.7095 - var_explained: 0.3094 - val_loss: 0.8200 - val_var_explained: 0.1966
Epoch 15/100
61205/61205 [==============================] - 55s 901us/step - loss: 0.6958 - var_explained: 0.3258 - val_loss: 0.8235 - val_var_explained: 0.1777
Out[166]:
<keras.callbacks.History at 0x7f0551c6a0f0>
In [167]:
ypred_valid = reinitialized_model.predict(valid['inputs']['seq'])
In [168]:
ypred_valid.shape
Out[168]:
(19137, 3)
In [169]:
fig, axes = plt.subplots(len(assays), 1, figsize=(5, 11), sharex=True, sharey=True)
for i, (a, ax) in enumerate(zip(assays, axes)):
    regression_eval(ypred_valid[:,i], y_valid[:,i], alpha=0.05, task=a, ax=ax);
plt.tight_layout()