To make your repositories easier to reference in academic literature, you can create persistent identifiers, also known as Digital Object Identifiers (DOIs). You can use the data archiving tool Zenodo to archive a repository on GitHub.com and issue a DOI for the archive.
Zenodo archives your repository and issues a new DOI each time you create a new GitHub release. Follow the steps at "Managing releases in a repository" to create a new one.
Considering the above, and the rate at which our projects might do a "release", keep in mind that:
You can create releases to bundle and deliver iterations of a project to users.
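If we go the Zenodo route, the trigger is simply cutting a GitHub release. Below is a minimal sketch of creating one through the GitHub REST API; the OWNER/REPO, tag name, and release title are placeholders, and it assumes the Zenodo-GitHub integration has already been enabled for the repository so that each release is archived and assigned a DOI.

```python
# Minimal sketch: create a GitHub release via the REST API.
# With the Zenodo-GitHub integration enabled for the repository,
# each new release is archived by Zenodo and assigned a DOI.
# OWNER/REPO, the tag name, and the title are placeholders.
import os
import requests

token = os.environ["GITHUB_TOKEN"]  # token with permission to create releases

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/releases",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "tag_name": "v2024.1",          # hypothetical versioning scheme
        "name": "2024 Q1 data export",  # hypothetical release title
        "body": "Quarterly export archived via Zenodo for a citable DOI.",
    },
    timeout=30,
)
resp.raise_for_status()
print("Release created:", resp.json()["html_url"])
```

The same release could of course be created by hand in the GitHub web UI; the point is only that each release is the unit Zenodo archives.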
If downloads from the COL Archive Repository work, maybe this would be the natural repository. We could export to ChecklistBank more frequently (e.g., four times a year).
@MMCigliano I don't know (since it didn't work). I can't see what it contains.
Also, considering the points made by @mjy about connecting "the data" and "the web pages", we need to see what's inside the COL archive to judge whether it might work. For example, I can't see "the metadata" included in the COL file. More research is needed. I'll post a ticket to the COL GitHub repo to see if Markus Doering can fix the download.
Update @mjy @MMCigliano: for some reason (unknown to me), downloading the COL archives works for Geoff but fails on my computer. You might both try (so we can "see" what's in them), and I'll work out later why it isn't working for me.
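To help compare what the archives actually contain (e.g., whether metadata files are present), a small script like the one below could download an archive and list its members. The URL is a placeholder, not the real COL/ChecklistBank download link; this is only a sketch for inspection, not a polished tool.

```python
# Sketch for inspecting a downloaded archive: fetch a zip file and
# list its members so we can check whether metadata (e.g., eml.xml or
# a metadata file) is included. ARCHIVE_URL is a placeholder, not the
# real COL/ChecklistBank download link.
import io
import zipfile
import requests

ARCHIVE_URL = "https://example.org/path/to/col-archive.zip"  # placeholder

resp = requests.get(ARCHIVE_URL, timeout=60)
resp.raise_for_status()

with zipfile.ZipFile(io.BytesIO(resp.content)) as archive:
    for name in archive.namelist():
        print(name)
```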
Redundancy in data storage and archiving is always good. However, just a reminder that OSF far exceeds Zenodo and COL in data continuity, reaching back to the 1990s when Daniel Otte started to establish a digital list of types and references. I think this supremacy in data continuity should be maintained, and it might be complemented by strategies for offline long-term digital storage.
OSF is seeking to provide an example of the best practices defined by the COL. One of those best practices is "archiving".
The two targets for archiving I feel we should pursue first, in order, are
Additional exploration could target GitHub's policy on archiving.
Can we
Once the archive is live