
Wasp ai #1259

Merged
merged 221 commits into from
Dec 11, 2023

Conversation


@Martinsos Martinsos commented Jun 16, 2023

Three big parts:

  1. The CLI now offers an additional option when you run the wasp new command which, if chosen, generates a Wasp app on disk using ChatGPT. It is marked as experimental.
  2. The CLI also has a new command, wasp new-ai-machine, which is meant to be used by the Wasp AI web app and is not listed in the CLI usage/help. It prints everything to stdout (logs and the files it creates) in a format that a machine can parse.
  3. We have a wasp-ai/ web app (written in Wasp) that calls the wasp CLI and that way generates a Wasp app via ChatGPT. It is the UI for this new feature.

Give it a look, play with it, test it a bit!

There are a bunch of TODOs in there that we still have to take care of, but the core logic is here and it works!

TODO:

  • Merge with main branch.
  • Docs?
  • Changelog.
  • Repeat the chatGPT request if a timeout happened.
    I wonder why the timeout happens, though. Is it possible that it is enforced by the library we use to make the HTTP request? If so, should we maybe extend its timeout?
    Example error response: wasp-cli: HttpExceptionRequest Request { host = "api.openai.com" port = 443 secure = True requestHeaders = [("Accept","application/json"),("Content-Type","application/json; charset=utf-8"),("Authorization","")] path = "/v1/chat/completions" queryString = "" method = "POST" proxy = Nothing rawBody = False redirectCount = 10 responseTimeout = ResponseTimeoutDefault requestVersion = HTTP/1.1 proxySecureMode = ProxySecureWithConnect } ResponseTimeout
  • Repeat the chatGPT request if invalid JSON was returned.
    IDEA: We could send just the JSON as a single, standalone chat message and tell it to fix it. Or, we could send the whole conversation that was used to generate the JSON, add a new message to it from the Assistant where it returns the JSON, and then one more message from us saying "hey, this JSON is invalid, please fix it".
  • Send files for repair (ideally with the (compile) errors attached).
  • When generating fails, it reports the status as success?
    Fixed on main and merged back; this was a general bug.
  • Improve the error message when the OpenAI API key is missing.
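The two retry TODOs above could be sketched roughly like this. This is a hypothetical sketch, not Wasp's actual code: the names (withRetries, parseJsonOrNull) and the backoff policy are illustrative assumptions.

```typescript
// Hypothetical sketch of the retry ideas above. Assumes the generation
// step is an async function returning the model's raw text response.

type StepFn = () => Promise<string>;

// Retry a step on failure (e.g. an HTTP timeout), with exponential backoff.
async function withRetries(
  step: StepFn,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<string> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await step();
    } catch (e) {
      lastError = e;
      // Back off before retrying: baseDelayMs, then 2x, 4x, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Returns the parsed value, or null if the model returned invalid JSON,
// in which case we would send a follow-up "please fix this JSON" message.
function parseJsonOrNull(raw: string): unknown | null {
  try {
    return JSON.parse(raw);
  } catch {
    return null;
  }
}
```

Detecting the invalid-JSON case cheaply like this would let us decide between the two repair strategies (standalone fix-it message vs. appending to the original conversation) without re-running the whole generation.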

Things that chatGPT currently often gets wrong while generating a Wasp app and that we should fix (mostly via prompt engineering), plus ideas:

  • Sometimes it still omits the User entity, especially if the user is not mentioned in the app description.
  • Sometimes it doesn't create any actions/queries and just goes for a very simple app, usually if the description is too simple.
  • It often forgets to put a , after the fn field in a query/action, before the entities field.
  • While generating the JS implementations of queries/actions, it relatively often doesn't write an actual implementation, but puts a comment like "// implementation goes here".
  • Every so often, it tries to import the Prisma client in the operation file and use it directly instead of going through context. I believe that is because I mentioned that context.<Entity> is really a Prisma thing? But I do want it to know that, hm.
  • We had an occurrence of it leaving escaped "\n"'s in the User entity's PSL definition.
    This might, however, be a mistake on our side, from incorrectly using the response from chatGPT and doing one show too many or something like that, so let's look into that first.
  • It generated an ext import in an action with just the file path, without even quotes:
    fn: @server/actions.js
  • Sometimes it messes up the relationship between User and some other entity, implementing only one half of that relationship.
  • Once it generated an action as just action deleteNote { ... }.
  • In general, it loves putting ... in places. That is probably because it saw that in our examples, so we should not put ... in our examples, then.
  • Should we try using multiple messages instead of one big message? Would that somehow help emphasize certain things? For example, general context in the first message, more precise instructions in another message?
  • Our queries currently consume at most 2k tokens, even with the response included and JSON repairs. We have 16k tokens available, so we should try enriching our chat messages with more examples and instructions.
  • We could have Wasp AI use ChatGPT4 to generate Plan, if ChatGPT4 is available as a model via the API.
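The multiple-messages idea from the list above could look roughly like this. The role/content shape follows the OpenAI chat-completions format, but the function name and the prompt text are made up for illustration; they are not Wasp's actual prompts.

```typescript
// Hypothetical sketch: split one big prompt into several chat messages,
// so general context and precise rules are emphasized separately.
// The message shape (role/content) matches the OpenAI chat-completions
// API; the prompt strings themselves are illustrative assumptions.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildPlanMessages(appDescription: string): ChatMessage[] {
  return [
    // Message 1: general context about what we are building.
    {
      role: "system",
      content:
        "You are an expert Wasp developer. Produce a plan for a Wasp app as JSON.",
    },
    // Message 2: the precise rules we keep seeing it break.
    {
      role: "user",
      content:
        "Rules: always include a User entity, always fully implement query/action bodies, and never write '...' placeholders.",
    },
    // Message 3: the actual app description from the user.
    { role: "user", content: `App description: ${appDescription}` },
  ];
}
```

Keeping each concern in its own message would also make it easy to A/B test whether the separation actually helps, by swapping message 2 in and out.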

Web app:

  • A progress bar showing the current action being performed.
  • Show the last couple of messages.
  • Find more good ways to indicate progress / what is happening (files could flash on change, a progress bar also next to the files, maybe greyed-out files for the files that are yet to come, ...).
  • "Download zip" should become "Run app", with two-fold instructions: install Wasp + download the zip.
  • Deploy the app! Will need to add custom Dockerfile that installs Wasp.

* Martech

* Record zip downloads

* fix

---------

Co-authored-by: Martin Sosic <[email protected]>
Co-authored-by: Martin Šošić <[email protected]>
infomiho and others added 8 commits November 2, 2023 13:31
* add modals

* keep GPT in title and meta tags

* add trycatch blocks to JSON methods

* remove alert from localstorage call

* check prev state for modal

* update og:title

* add login and profile page

* check if user has starred our repo

* change cover fotos

* remove GH api star check

* make more sub-components and shared logic

* add return statement

* update header & renaming

* add project converter functions

* rename the renaming of renamed names :)

* remove comments & order user projects
Signed-off-by: Mihovil Ilakovac <[email protected]>
* add delete user functionality

* refactor to deleteMyself

* delete user relevant info from projects

* update status to deleted
@Martinsos Martinsos merged commit 47c4b8c into main Dec 11, 2023
4 checks passed
@Martinsos Martinsos deleted the wasp-ai branch December 11, 2023 19:31
@Martinsos Martinsos restored the wasp-ai branch December 14, 2023 11:15