When I installed ESM, I could easily follow this code:
import torch
import esm

# Load ESM-2 model
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disables dropout for deterministic results

# Prepare data (first 2 sequences from ESMStructuralSplitDataset superfamily / 4)
data = [
    ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
    ("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    ("protein2 with mask", "KALTARQQEVFDLIRD<mask>ISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    ("protein3", "K A <mask> I S Q"),
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)
batch_lens = (batch_tokens != alphabet.padding_idx).sum(1)

# Extract per-residue representations (on CPU)
with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]

# Generate per-sequence representations via averaging
# NOTE: token 0 is always a beginning-of-sequence token, so the first residue is token 1.
sequence_representations = []
for i, tokens_len in enumerate(batch_lens):
    sequence_representations.append(token_representations[i, 1 : tokens_len - 1].mean(0))

# Look at the unsupervised self-attention map contact predictions
import matplotlib.pyplot as plt

for (_, seq), tokens_len, attention_contacts in zip(data, batch_lens, results["contacts"]):
    plt.matshow(attention_contacts[: tokens_len, : tokens_len])
    plt.title(seq)
    plt.show()
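This part runs fine for me. As a quick sanity check I also printed the shapes (the 1280-dimensional embedding size here is my assumption for the 650M model, not something the snippet above states):

# Hypothetical sanity check; shapes assume esm2_t33_650M_UR50D's 1280-dim embeddings
print(token_representations.shape)        # e.g. (4, max_tokens, 1280)
print(sequence_representations[0].shape)  # e.g. torch.Size([1280])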
But when I followed the next code (ESMFold Structure Prediction):
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()

# Optionally, uncomment to set a chunk size for axial attention. This can help reduce memory.
# Lower sizes will have lower memory requirements at the cost of speed.
# model.set_chunk_size(128)

sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"
# Multimer prediction can be done with chains separated by ':'

with torch.no_grad():
    output = model.infer_pdb(sequence)

with open("result.pdb", "w") as f:
    f.write(output)

import biotite.structure.io as bsio
struct = bsio.load_structure("result.pdb", extra_fields=["b_factor"])
print(struct.b_factor.mean())  # this will be the pLDDT
# 88.3
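(For what it's worth, I read the multimer comment above as meaning something like the sketch below; the two chains here are made-up fragments, not a real complex:)

# Hypothetical multimer input: two chains joined with ':' as the comment describes
multimer_sequence = "MKTVRQERLKSIVRILERSK:GAQLAEELSVSRQVIVQDIA"
with torch.no_grad():
    multimer_pdb = model.infer_pdb(multimer_sequence)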
I got:
Traceback (most recent call last):
  File "/home/amax/biodata/esm_test/test_esmfold2.py", line 15, in <module>
    output = model.infer_pdb(sequence)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/esm/esmfold/v1/esmfold.py", line 312, in infer_pdb
    return self.infer_pdbs([sequence], *args, **kwargs)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/esm/esmfold/v1/esmfold.py", line 307, in infer_pdbs
    output = self.infer(seqs, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/esm/esmfold/v1/esmfold.py", line 282, in infer
    output = self.forward(
             ^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/esm/esmfold/v1/esmfold.py", line 180, in forward
    structure: dict = self.trunk(
                      ^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/esm/esmfold/v1/trunk.py", line 203, in forward
    structure = self.structure_module(
                ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amax/miniconda3/lib/python3.11/site-packages/openfold/model/structure_module.py", line 594, in forward
    mask = s.new_ones(s.shape[:-1])
           ^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'new_ones'
I am using OpenFold v1.0.0, not the latest version!
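If I read the traceback correctly, esm/esmfold/v1/trunk.py passes the structure module a dict holding the "single" and "pair" tensors, while the OpenFold version I have installed treats that first argument as a tensor itself. A minimal sketch of the mismatch (the shapes here are invented for illustration):

import torch

# What ESMFold's trunk passes as the structure module's first argument (invented shapes):
evoformer_output = {
    "single": torch.zeros(1, 65, 384),
    "pair": torch.zeros(1, 65, 65, 128),
}

# What the installed OpenFold StructureModule does with it when no mask is given:
s = evoformer_output
mask = s.new_ones(s.shape[:-1])  # AttributeError: 'dict' object has no attribute 'new_ones'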
Could anyone tell me how to solve this problem?
Thanks!