Worry that the checker is actually downloading the content at the checked URL #106
We should do HEAD instead of GET, I agree. We haven't looked into it but can.
Would you like me to update the branch we are working on to try it out?
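A minimal sketch of the HEAD-first approach being discussed, assuming the `requests` library (the `check_url` name and the 405 fallback are illustrative, not from the thread):

```python
import requests


def check_url(url, timeout=5):
    """Check a URL by transferring only headers, not the body."""
    try:
        # HEAD returns the same status line and headers as GET,
        # but without the response body.
        response = requests.head(url, timeout=timeout, allow_redirects=True)
        if response.status_code == 405:
            # Some servers don't allow HEAD; fall back to a streamed GET
            # and close the connection before the body is read.
            response = requests.get(url, timeout=timeout, stream=True)
            response.close()
        return response.status_code < 400
    except requests.RequestException:
        return False
```

Servers are not required to support HEAD, which is why a fallback path is worth keeping.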
ChatGPT suggests something along the lines of
Oh geez, ChatGPT? 🙃 In the first lines I already see issues:
The retval and response.close don’t make sense. I appreciate the suggestion, but I don’t think the quality of code from AI tools is very good. It’s mostly copy-pasting some poor soul’s code from somewhere else on GitHub. I’m happy to write this with my own knowledge and careful inspection of core docs and library code to get the functionality I want.
But I have to take it back: it does look like response.close() is useful for requests.get()! Geez, I've been writing Python a long time and I just don't see it very often. So I learned something from ChatGPT! I appreciate the post, and I'll try to be more open-minded about it (even if I don't use it)!
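For context, a small sketch of the `response.close()` pattern mentioned above, assuming `requests` (the `status_only` helper name is illustrative): with `stream=True`, the body is not downloaded until accessed, and `close()` releases the connection after only the status line and headers have arrived.

```python
import requests


def status_only(url, timeout=5):
    """Return the HTTP status code without downloading the body."""
    # stream=True defers downloading the response body.
    response = requests.get(url, timeout=timeout, stream=True)
    try:
        return response.status_code
    finally:
        # Release the connection; the body is never read.
        response.close()
```

Using a `with requests.get(..., stream=True) as response:` block is equivalent and calls `close()` automatically.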
So, I just happened to see this Q&A with Linus Torvalds about AI tools in coding...
haha I totally watched that! I'll watch again tonight with new context.
So, the more I think about this, the more I think
Agree! Let me work hard today (just presented at FOSDEM) and maybe I can do some work on this later if I'm productive!
For some reason, this goes pretty slowly. I am working from this document and it takes quite a while to complete a check. Next, I notice that on .pdf files it stalls for longer, especially the one at the ftp link. So, this has me worried that it is actually fully downloading the content to check the link. I've seen similar behavior in the Sphinx URL-checking feature too. It should really just get a header from each URL and not the full content.
Is this something you've looked into?