Building a database for LLM assistance with quest #1386
Replies: 2 comments 2 replies
-
I would be surprised if you could get people to contribute 100 examples here, especially as you're not offering very much in return. Is your plan to create a monetized custom GPT with this? A lot of existing Quest knowledge has likely been ingested by the LLM providers already, but I think you would have better results by fine-tuning a model using what already exists: the forum here, the documentation, and the Libraries and Code Samples section of the old forum. It shouldn't "need" one big dedicated list of examples with explanations when the data already exists and an LLM can do a good job of parsing it all. I was playing around with NotebookLM, which has an impressively large context window: it will take up to 50 sources in the free version, each of which can be pretty large. You should be able to combine existing Quest docs and samples into something that will fit, and it would be interesting to see what it can come up with.
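As a rough sketch of the fine-tuning route suggested above: existing forum threads and doc pages would need to be converted into a chat-format JSONL file before training. The directory layout below (one text file per Q&A pair, question on the first line) is purely an assumption for illustration, not how the Quest forum actually exports data.

```python
import json
from pathlib import Path

# Hypothetical layout: each .txt file holds one Q&A pair,
# with the question on the first line and the answer in the rest.
def build_jsonl(source_dir: str, out_path: str) -> int:
    """Convert a folder of Q&A text files into chat-format JSONL records."""
    records = []
    for path in sorted(Path(source_dir).glob("*.txt")):
        lines = path.read_text(encoding="utf-8").splitlines()
        if len(lines) < 2:
            continue  # skip files missing either a prompt or a completion
        records.append({
            "messages": [
                {"role": "user", "content": lines[0].strip()},
                {"role": "assistant", "content": "\n".join(lines[1:]).strip()},
            ]
        })
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    return len(records)
```

The point is that a fine-tuning corpus can be assembled mechanically from material that already exists, rather than hand-written from scratch.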
-
Separate conversational tangent, but I'd be interested in what "monotonous tasks that are naturally part of development in Quest" means. That might point to ways we can make Quest itself better in future, rather than relying on AI to make things easier.
-
I'm asking you all for something that some of you may not really care for. Some may call it 'cheating', but I am going to take the time to build a dataset for an LLM that could assist with monotonous tasks that are naturally part of development in Quest.
In an ideal situation, the LLM could provide ASLX snippets and tell the user where to place them.
The model would also be able to output scripts to meet the user's desired functionality.
It is not uncommon for AI to come up with much simpler solutions to complex problems.
The model could take a load off when the user wants to get to creating.
It might help new users stay engaged.
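For concreteness, one record in such a dataset might pair a user request with an ASLX snippet, a placement note, and an explanation. The field names and the snippet below are illustrative assumptions, not a settled schema or verified Quest syntax:

```python
import json

# Illustrative dataset record; field names and the ASLX snippet
# are assumptions for discussion, not an agreed format.
example = {
    "request": "Make a lantern the player can pick up.",
    "aslx_snippet": (
        '<object name="lantern">\n'
        '  <inherit name="editor_object" />\n'
        '  <take />\n'
        '</object>'
    ),
    "placement": "Inside the room element where the lantern should start.",
    "explanation": "The take element makes the object portable, "
                   "so the player can pick it up and carry it.",
}
print(json.dumps(example, indent=2))
```

Agreeing on one record shape up front would make the 100 contributed examples directly usable for training without a clean-up pass.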
Now:
It would hopefully be used to train a local model (or GPT, if convenience is too much of a concern).
In the event GPT is used, I will give out an OpenAI API key funded with US$10 per month for a year (US$120 total) to the winner of a draw.
to enter the draw
OR
The draw would be held once 100 examples in total are received.
For transparency, we will put those code samples here.
I will share the model, or the fully formatted dataset (depending on model ease of use), in this discussion at the time of training a model with 100 examples, under the condition that the model can be fairly accurate.