Discussed in #7
Originally posted by tdhock July 1, 2023
Hi! I will not be attending the sprint, but I had a couple of ideas related to improving the efficiency of read.csv and write.csv.
The more important issue to address is probably read.csv, which has time complexity quadratic in the number of columns; see this issue for some empirical analysis:
tdhock/atime#8
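Quadratic scaling in the column count typically comes from some per-column step that itself rescans all previous columns. As an illustration of that general pattern only (in Python, and not a claim about the actual cause inside read.csv), the sketch below contrasts a header de-duplication that scans the list of already-emitted names for every column, giving O(p^2) total work, with a dictionary-based version doing roughly constant work per column:

```python
# Illustrative only: two ways a CSV reader might make repeated header
# names unique. Both produce the same output; they differ in complexity.

def make_unique_naive(names):
    """Rescans all prior names for each column: O(p^2) in column count p."""
    out = []
    for name in names:
        candidate, suffix = name, 1
        while candidate in out:          # linear scan per column
            candidate = f"{name}.{suffix}"
            suffix += 1
        out.append(candidate)
    return out

def make_unique_linear(names):
    """Tracks seen names in a dict: roughly O(p) total."""
    seen, out = {}, []
    for name in names:
        if name not in seen:
            seen[name] = 0
            out.append(name)
            continue
        seen[name] += 1
        candidate = f"{name}.{seen[name]}"
        while candidate in seen:         # rare collisions only
            seen[name] += 1
            candidate = f"{name}.{seen[name]}"
        seen[candidate] = 0
        out.append(candidate)
    return out

header = ["x", "x", "y", "x", "y"]
print(make_unique_naive(header))   # ['x', 'x.1', 'y', 'x.2', 'y.1']
print(make_unique_linear(header))  # same result, linear time
```

The fix for this kind of problem is usually exactly this shape: replace the inner rescan with a lookup structure, leaving the output unchanged.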
Another issue is that write.csv uses memory linear in the size of the output, whereas other CSV writers use only constant memory. (This is less serious, though, because you need linear memory anyway to store the data in R before writing it to CSV.)
tdhock/atime#10
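For contrast, a constant-memory writer streams one row at a time to the output instead of materializing the whole CSV text first. A minimal sketch of that approach, in Python's standard csv module (again an illustration of the technique, not of how write.csv is implemented):

```python
import csv
import io

def stream_csv(rows, out):
    """Write rows one at a time to a file-like object.

    Beyond the destination itself, only one row's worth of text is
    buffered at any moment, regardless of how many rows are written.
    """
    writer = csv.writer(out, lineterminator="\n")
    for row in rows:
        writer.writerow(row)

buf = io.StringIO()
stream_csv([("a", 1), ("b", 2)], buf)
print(buf.getvalue())  # "a,1\nb,2\n"
```

The same pattern applies when the destination is a real file handle: each formatted row goes straight to the OS, so peak memory for the writer stays constant even for very large tables.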
@gmbecker @bastistician may be able to help mentor? They worked on fixing a similar efficiency issue: tdhock/atime#9