<h1>Pelican</h1>

<h2 id="uprade-of-osdf-to-pelican-for-collaborations">Uprade of OSDF to Pelican for Collaborations<a class="headerlink" href="#uprade-of-osdf-to-pelican-for-collaborations" title="Permanent link">&para;</a></h2>
<p>The OSG Collab AP had an OSDF door deployed on the access point that provided users with authenticated access to a Ceph storage cluster for shared projects from HTCondor jobs to the OSPool running at remote Execution Points (EPs). Users running jobs on the OSPool access the filesystem either via a client tool (stashcp) or via an HTCondor plugin that is invoked in their submit scripts. The storage is also mounted on the AP at /ospool/uc-shared/project. </p>
<p>The general purpose documentation found here (https://portal.osg-htc.org/documentation/htc_workloads/managing_data/osdf/) is also applicable for the users on the OSG Collab AP. It describes using the HTCondor plugin to move data to and from the OSDF Origin.
In a nutshell, if you are not using the client tool then:</p>
<h2 id="uprade-of-the-osdf-origin-to-pelican-for-collaborations">Uprade of the OSDF origin to Pelican for Collaborations<a class="headerlink" href="#uprade-of-the-osdf-origin-to-pelican-for-collaborations" title="Permanent link">&para;</a></h2>
<p>The OSG Collab AP had an <strong>OSDF</strong> door deployed on the access point (ap23.uc.osg-htc.org) that provided users with authenticated access to a Ceph cluster supplying high-capacity storage for shared project directories. HTCondor jobs running in the OSPool at remote <strong>Execution Points (EPs)</strong> can access the filesystem either via a client tool or via an HTCondor plugin invoked in their submit scripts. The storage is also mounted on the AP at <code>/ospool/uc-shared/project</code>.</p>
<p>On <strong>11/21/2024</strong>, OSG/PATh staff migrated the OSDF door from the OSG Collab AP to separate infrastructure, allowing the origin to be upgraded to the Pelican Platform (https://pelicanplatform.org/) and providing shared project access to users at other APs (ap20.uc.osg-htc.org and ap21.uc.osg-htc.org).</p>
<p>The migration should have been transparent to users of the OSG Collab AP. If you are using the HTCondor plugin, no changes are needed in your submit scripts. For reference, the general purpose documentation found here (https://portal.osg-htc.org/documentation/htc_workloads/managing_data/osdf/) also applies to users of the OSG Collab AP; it describes using the HTCondor plugin to move data to and from the OSDF/Pelican origin.
In a nutshell:</p>
<ol>
<li>
<p>Include the following in your submit script for an OSPool job to read a file from your project directory at the EP:</p>
<p><code>transfer_input_files = osdf:///ospool/uc-shared/project/&lt;your_project&gt;/&lt;file&gt;</code></p>
</li>
<li>
<p>Include the following in your submit script for an OSPool job to write a file into your project directory from the EP:</p>
<p><code>OSDF_LOCATION = osdf:///ospool/uc-shared/project/&lt;your_project&gt;</code><br />
<code>transfer_output_remaps = "&lt;file&gt; = $(OSDF_LOCATION)/&lt;file&gt;"</code></p>
</li>
</ol>
<p>The upgrade to the Pelican Platform, which uses federated URLs for the origin, keeps the same <em>osdf://</em> prefix. After the upgrade, <em>osdf://</em> simply points to <em>pelican://osg-htc.org/</em>.</p>
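<p>Putting both pieces together, a minimal complete submit file might look like the sketch below. This is an illustration, not an official template; <code>my_project</code>, <code>run.sh</code>, <code>input.dat</code>, and <code>results.dat</code> are hypothetical placeholders.</p>
<pre><code># Minimal sketch of an OSPool submit file using the OSDF/Pelican origin.
# "my_project", "run.sh", "input.dat", and "results.dat" are placeholders.
executable              = run.sh
arguments               = input.dat results.dat

OSDF_LOCATION           = osdf:///ospool/uc-shared/project/my_project

# Pull the input from the project directory to the EP...
transfer_input_files    = $(OSDF_LOCATION)/input.dat
# ...and push the output back to it when the job completes.
transfer_output_remaps  = "results.dat = $(OSDF_LOCATION)/results.dat"

should_transfer_files   = YES
when_to_transfer_output = ON_EXIT

log    = job.log
output = job.out
error  = job.err

queue
</code></pre>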
<p>If you are using a client in your runtime script at the EP, the previous tool, <strong>stashcp</strong>, will continue to work as long as its version is &gt; 6.12, with the exception of recursive access (the <em>-r</em> flag). Therefore, we recommend that groups migrate to the pelican client instead.</p>
<p>The pelican equivalent of the stashcp read command <code>stashcp -d osdf:///ospool/uc-shared/project/&lt;your_project&gt;/&lt;file&gt; .</code> is <code>pelican object get -d osdf:///ospool/uc-shared/project/&lt;your_project&gt;/&lt;file&gt; .</code>
Similarly, the pelican equivalent of the stashcp write command <code>stashcp -d &lt;file&gt; osdf:///ospool/uc-shared/project/&lt;your_project&gt;/&lt;file&gt;</code> is <code>pelican object put -d &lt;file&gt; osdf:///ospool/uc-shared/project/&lt;your_project&gt;/&lt;file&gt;</code></p>
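<p>Recursive access is the one stashcp feature that does not survive the migration, so directory-level transfers are a reason to switch. A sketch with the pelican client, assuming your installed version supports the <code>-r</code>/<code>--recursive</code> flag (check <code>pelican object get --help</code>):</p>
<pre><code># Sketch: recursively download a project subdirectory to a local directory.
# Assumes the installed pelican client supports -r/--recursive.
pelican object get -r osdf:///ospool/uc-shared/project/&lt;your_project&gt;/&lt;subdir&gt; ./local_copy
</code></pre>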
<p>If your container image was built using one of the OSG base environments, then the pelican client tool is already included. Otherwise, the pelican client can be installed from the osg repository with <code>dnf install pelican</code>. Refer to this documentation (https://osg-htc.org/docs/common/yum/) to add the osg repository.</p>
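<p>As a sketch, on an EL9 host tracking the OSG 23 repositories (the release RPM below is an assumption; pick the one matching your OS and OSG series from the documentation linked above):</p>
<pre><code># Sketch for EL9 / OSG 23; adjust the release RPM for your setup.
dnf install -y epel-release
dnf install -y https://repo.opensciencegrid.org/osg/23-main/osg-23-main-el9-release-latest.rpm
dnf install -y pelican
</code></pre>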


