How to Increase Consistency When Asking the Same Question Multiple Times? #624
Unanswered
iridescentpeo
asked this question in Q&A
Replies: 1 comment
-
There's some inherent randomness with LLMs. One way to mitigate this is to use function calling, which is what happens with Function RAG. Another is to generate the SQL up to three times and return the first result that two attempts agree on:

```python
def generate_consistent_sql(vn, question):
    first_sql_attempt = vn.generate_sql(question)
    second_sql_attempt = vn.generate_sql(question)
    # Instead of testing for text equivalence you could also check for semantic
    # equivalence by running the SQL queries and comparing the resulting dataframes
    if first_sql_attempt == second_sql_attempt:
        return first_sql_attempt
    third_sql_attempt = vn.generate_sql(question)
    if first_sql_attempt == third_sql_attempt:
        return first_sql_attempt
    if second_sql_attempt == third_sql_attempt:
        return second_sql_attempt
    return "No reliable SQL could be generated"
```
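The comment above suggests checking semantic equivalence by running the queries and comparing the resulting dataframes, which catches cases where two attempts differ only in formatting, aliasing, or row order. Here is a minimal sketch of that check, assuming your Vanna instance exposes a `run_sql` method that returns a pandas DataFrame (treat that method name as an assumption and substitute your own query runner if needed):

```python
import pandas as pd


def semantically_equal(vn, sql_a: str, sql_b: str) -> bool:
    """Return True if two SQL strings produce the same result set.

    Assumes `vn.run_sql(sql)` executes a query and returns a pandas
    DataFrame (an assumption here -- swap in your own query runner).
    """
    if sql_a == sql_b:
        # Identical text is trivially equivalent
        return True
    try:
        df_a = vn.run_sql(sql_a)
        df_b = vn.run_sql(sql_b)
    except Exception:
        # A query that fails to run can't be confirmed equivalent
        return False
    if list(df_a.columns) != list(df_b.columns):
        return False
    # Sort rows and reset the index so row order doesn't affect the comparison
    df_a = df_a.sort_values(by=list(df_a.columns)).reset_index(drop=True)
    df_b = df_b.sort_values(by=list(df_b.columns)).reset_index(drop=True)
    return df_a.equals(df_b)
```

You could then replace the `==` comparisons in the retry loop with `semantically_equal(vn, first_sql_attempt, second_sql_attempt)`, at the cost of actually executing each candidate query against the database.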
-
We have added a lot of training data, but we sometimes find that asking Vanna the same question multiple times generates different SQL queries, leading to inconsistent results. This inconsistency is a problem for our actual business needs. How can we increase consistency when asking the same question multiple times?