The "dt-python-parser" package works really well, I use "getAllTokens" function to get all the tokens from python text file and then filter only type 93 (INDENT) and type 94 (DEDENT).
I need only INDENT and DEDENT locations, nothing more, but the "getAllTokens" function tokenizes everything and therefore losing much time, so if the file has more than 1000 lines, like 5000 lines or so, getAllTokens function takes many seconds to return the value, so rendering blocks take more time and waiting is not very comfortable for a user.
So, for the optimization, can I use the parser/tokenizer in such way that it would parse only INDENT/DEDENT locations and nothing more?
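For reference, my current approach looks roughly like the sketch below. The `Python3Parser` class name is taken from the package README, and the token type ids 93/94 are the ones I observed in my build, so treat both as assumptions for your installed version:

```ts
// Minimal sketch of the current approach. Assumptions: dt-python-parser
// exports a Python3Parser class with a getAllTokens(source) method (per
// its README), and INDENT/DEDENT are token types 93/94 in this build.
import { Python3Parser } from 'dt-python-parser';

const INDENT = 93;
const DEDENT = 94;

const parser = new Python3Parser();

function getIndentDedentTokens(source: string) {
  // getAllTokens lexes the whole file (the slow part for large files);
  // we then keep only the two token types we need.
  const tokens = parser.getAllTokens(source);
  return tokens.filter((t: any) => t.type === INDENT || t.type === DEDENT);
}
```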
Your feedback on the efficiency of the package touches on one of the highlights of our roadmap for future releases, and we will consider adding filtering support to getAllTokens in a future release.
Hello again, it's been 8 months since I posted this feature request. Are there now any plans to implement a filter option for the getAllTokens function? It would be super helpful. Currently it takes about 10 seconds to tokenize a 10,000-line file, which is too long; I need only the INDENT and DEDENT tokens, but it tokenizes everything at the expense of speed.
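In case it helps while waiting: since only the INDENT/DEDENT positions are needed, one possible workaround (not part of dt-python-parser, just a sketch) is to approximate them with a single pass over the lines, maintaining an indentation stack the way Python's tokenizer does. It ignores line continuations, multi-line strings, and bracketed expressions, so it is only an approximation:

```ts
type IndentEvent = { kind: 'INDENT' | 'DEDENT'; line: number };

// Approximate INDENT/DEDENT locations with one pass over the lines,
// keeping a stack of open indentation widths like Python's tokenizer.
// Simplifications: tabs count as one column, blank and comment-only
// lines are skipped, and multi-line strings/brackets are not handled.
function approximateIndents(source: string): IndentEvent[] {
  const events: IndentEvent[] = [];
  const stack: number[] = [0]; // indentation widths currently open
  const lines = source.split(/\r?\n/);
  for (let i = 0; i < lines.length; i++) {
    const line = lines[i];
    if (/^\s*($|#)/.test(line)) continue; // blank or comment-only line
    const indent = line.length - line.trimStart().length;
    if (indent > stack[stack.length - 1]) {
      stack.push(indent);
      events.push({ kind: 'INDENT', line: i + 1 });
    } else {
      while (indent < stack[stack.length - 1]) {
        stack.pop();
        events.push({ kind: 'DEDENT', line: i + 1 });
      }
    }
  }
  return events;
}
```

This runs in linear time over the file and avoids the full lexer entirely, at the cost of being wrong inside multi-line strings and implicit line joins.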