How to recognize blank, recognize English and Chinese in one model #48
Hi,
Thank you so much for your help.
Try giving the label as `all#the#recognition#accuracies#on#the`: replace all the blanks with `#`, and put the character `#` in alphabets.py. Then, whenever there is a blank, the net will output `#`, and you can replace `#` with a blank afterwards to get normal sentences. You can try it this way, but I am not sure about it.
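The `#` trick described above can be sketched as a pair of tiny helpers. These names are hypothetical — the actual project encodes labels through its lmdb dataset and alphabets.py — but they show the round trip:

```python
def encode_label(text: str) -> str:
    # Replace every space with '#' so the space survives training as an
    # ordinary character in the alphabet.
    return text.replace(" ", "#")

def decode_prediction(pred: str) -> str:
    # After CTC decoding, map '#' back to a real space.
    return pred.replace("#", " ")

label = encode_label("all the recognition accuracies on the")
print(label)                     # all#the#recognition#accuracies#on#the
print(decode_prediction(label))  # all the recognition accuracies on the
```

The point of the substitution is that `#` rarely appears in natural text, so it can stand in for the space without ambiguity when mapping predictions back.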
Thanks for your reply.
@Holmeyoung The following is the training progress; you can see that it works.
@cvchongci Hi, I am also having problems with the spaces between words in English. Could you please share your model? Thanks!
@ducbluee Hi, I used very limited synthetic data to train the model, so it does not work well on real-world images.
Firstly, your code is great. I trained with the SynthText90k dataset and achieved very good performance on English words.
I have several questions; hopefully you can give me a hand. Thank you very much.
Thanks for your time.
How to recognize blank in one sentence?
For example, I want to recognize "I love python". There is a blank between "I" and "love"; how do I handle this?
Do I just add a blank character to the alphabet, like this, and prepare the training data accordingly?
alphabet = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ """
Can we recognize English and Chinese in one model?
If we want to recognize English and Chinese in one model, how should we do it?
Do we just make the alphabet contain all English and Chinese characters, like this?
alphabet = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ是不我一有大在人了中到資..."""
What if we want to recognize a very long sentence?
Do you think it would be better to train with very long sentences, or can we just train with short ones?
Your current model only supports text lengths below 26, so I would have to modify the network to support training with longer sentences.
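On the length limit: in the common CRNN configuration the number of CTC time steps is fixed by the input image width (with imgW=100 the conv stack emits 26 time steps), so supporting longer text mostly means training with a wider imgW rather than changing the recurrent layers. A rough sketch of the arithmetic, assuming the usual `imgW // 4 + 1` relation — verify this against your own conv stack:

```python
def crnn_time_steps(img_w: int) -> int:
    # Approximate output sequence length of the standard CRNN conv stack:
    # imgW = 100 yields 26 time steps.
    return img_w // 4 + 1

def min_time_steps(label: str) -> int:
    # CTC needs at least one time step per character, plus one blank
    # between each pair of adjacent repeated characters.
    repeats = sum(1 for a, b in zip(label, label[1:]) if a == b)
    return len(label) + repeats

print(crnn_time_steps(100))  # 26

long_label = "a fairly long sentence that would not fit"
print(min_time_steps(long_label), "time steps needed")
```

A label is only trainable when `crnn_time_steps(img_w) >= min_time_steps(label)`, so for long sentences you would resize training images to a larger fixed width (or bucket by width) rather than squeezing them into imgW=100.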