
Your prompts are clearly malformed #1

DaniilSarkisyan opened this issue Nov 30, 2024 · 1 comment

@DaniilSarkisyan

I am not a stellar prompt engineer, but I use ChatGPT for my bioinformatics work at Uppsala University, Sweden.
I was alarmed to read that you got 150 out of 200 references wrong... I thought perhaps OpenAI had introduced another glitch, which could affect my work too.

So I looked into your "experimentResults.csv" and immediately found a number of issues with your analysis:

  1. Did you fake the "Conversation Link" values?
    I have a ChatGPT Plus plan, but the first three links with a 0 "Searchability Score", namely
  https://chatgpt.com/share/e/67406721-8468-8000-944d-ba976e29e0fc
  https://chatgpt.com/share/e/673e23d8-e7c4-8004-b870-ee55adaad017
  https://chatgpt.com/share/e/673e2429-5698-8004-8d60-6aa415c74e6c

errored with "Conversation inaccessible or not found".
I did not check your other links, but I suspect they are also fake.

  2. I looked into your first prompt with a 0 "Searchability Score" and it is clearly MALFORMED.
    Did you intentionally violate all the "prompt engineering guides" to give ChatGPT the most adversarial prompt?
    But in that case your article should mention that you optimized your prompts to be adversarial, right?

Here is an example of a prompt, https://chatgpt.com/share/674acf6b-0934-8001-8f10-87de68cbab16, investigating the same piece of news and identifying the CORRECT source.

In case the chatgpt.com/share/... link stops working after some days, here is a prompt template you may use, if you really want to write an unbiased review of current methods for digging up news on the web:

Please do a comprehensive fact-check for the news attached below.
Follow this instruction:

1. Search the web and use chain-of-thought to find this news item from a reliable publisher.

2. If you cannot find this news item, start your answer with "I cannot find this news".

If you find only news contradicting the piece to fact-check, start your answer with "This news is likely fake".

If you find both similar and contradictory news, please start your answer with "Pro and contra news are found" and report all sides of the story in your answer.

3. If you find several similar news items, print references to no more than four of the most reliable sources, following this format:
---
Publication date, URL in plain text, your estimate of reference reliability (very reliable / reliable / reliability unknown / likely fake)
---

4. Finally, please quote from the most reliable reference, print the broader context window, and in your quote highlight the part semantically similar to the piece of news being fact-checked.

This is the piece of news to fact-check:
----
... insert the piece of news for which you want the reference(s) investigated ...
----
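If you want to run the template over many news items rather than paste it by hand, it can be filled in programmatically. A minimal sketch (the `build_fact_check_prompt` helper and the abridged template wording are my own illustration, not code from this repository):

```python
# Minimal sketch: assemble the fact-check prompt from the template above.
# The helper name and the (abridged) template text are illustrative only.

TEMPLATE = """Please do a comprehensive fact-check for the news attached below.
Follow this instruction:

1. Search the web and use chain-of-thought to find this news item from a reliable publisher.

2. If you cannot find this news item, start your answer with "I cannot find this news".

3. If you find several similar news items, print references to no more than four of the most reliable sources.

4. Finally, quote from the most reliable reference and highlight the part semantically similar to the piece of news being fact-checked.

This is the piece of news to fact-check:
----
{news}
----"""


def build_fact_check_prompt(news: str) -> str:
    """Insert the news snippet into the template, stripping stray whitespace."""
    return TEMPLATE.format(news=news.strip())
```

The resulting string can then be sent to any chat model; the point, as the template emphasises, is to give the model an explicit search-and-report procedure instead of pasting the quote on its own.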

ppival commented Dec 4, 2024

I love your instructions, @DaniilSarkisyan, and will add them to my grimoire. But to be fair, NO everyday user will be informed enough to prompt SearchGPT like this. I, too, am curious about the dead conversation links, but I suspect simply pasting in quotes is much more indicative of the search behaviour of a regular user of the product :-(

Your link appears to go to ChatGPT, not SearchGPT? (I don't currently subscribe, so maybe it reverts?)
