diff --git a/docs/filex_csi_driver/deployment.md b/docs/filex_csi_driver/deployment.md
new file mode 100644
index 00000000..64961b85
--- /dev/null
+++ b/docs/filex_csi_driver/deployment.md
@@ -0,0 +1,101 @@
+# Overview
+
+The HPE GreenLake for File Storage CSI Driver is deployed using industry-standard means: either a Helm chart or an Operator.
+
+[TOC]
+
+## Helm
+
+[Helm](https://helm.sh) is the package manager for Kubernetes. Software is delivered in a packaging format called a "chart". Helm is a [standalone CLI](https://helm.sh/docs/intro/install/) that interacts with the Kubernetes API server using your `KUBECONFIG` file.
+
+The official Helm chart for the HPE GreenLake for File Storage CSI Driver is hosted on [Artifact Hub](https://artifacthub.io/packages/helm/hpe-storage/hpe-greenlake-file-csi-driver). In an effort to avoid duplicate documentation, please see the chart for instructions on how to deploy the CSI driver using Helm.
+
+- Go to the chart on [Artifact Hub](https://artifacthub.io/packages/helm/hpe-storage/hpe-greenlake-file-csi-driver).
+
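+For orientation only, a typical install follows the usual `helm repo add` and `helm install` flow. The repository URL and release name below are assumptions for illustration; the chart's own instructions on Artifact Hub are authoritative.
+
+```text
+helm repo add hpe-storage https://hpe-storage.github.io/co-deployments
+helm repo update
+helm install my-hpe-filex hpe-storage/hpe-greenlake-file-csi-driver \
+  --namespace hpe-storage --create-namespace
+```
+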
+!!! note
+    It's possible to follow the HPE CSI Driver for Kubernetes steps for v2.4.2 or later to mirror the required images to an internal registry when installing into an [air-gapped environment](../csi_driver/deployment.md#helm_for_air-gapped_environments).
+
+## Operator
+
+The [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) is based on the idea that software should be instantiated and run with a set of custom controllers in Kubernetes. It creates a native experience for any software running on Kubernetes.
+
+### Red Hat OpenShift Container Platform
+
+During the beta, it's only possible to sideload the HPE GreenLake for File Storage CSI Operator using the Operator SDK.
+
+The installation procedure assumes the "hpe-storage" `Namespace` exists:
+
+```text
+oc create ns hpe-storage
+```
+
+
+First, deploy or [download]({{ config.site_url }}partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml) the SCC:
+
+```text
+oc apply -f {{ config.site_url }}partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml
+```
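+
+Optionally verify that the two SCCs are in place (the names below come from the example manifest):
+
+```text
+oc get scc hpe-filex-csi-controller-scc hpe-filex-csi-node-scc
+```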
+
+Install the Operator:
+
+```text
+operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/filex-csi-driver-operator-bundle-ocp:v1.0.0-beta
+```
+
+The next step is to create an `HPEGreenLakeFileCSIDriver` resource. This can also be done in the OpenShift cluster console.
+
+```yaml fct_label="HPE GreenLake for File Storage CSI Operator v1.0.0-beta"
+# oc apply -f {{ config.site_url }}filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml
+{% include "examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml" %}
+```
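+
+Once the custom resource has been reconciled by the Operator, the driver workloads should appear in the "hpe-storage" `Namespace`. Pod names vary by release, so treat this as a loose sanity check:
+
+```text
+oc get pods -n hpe-storage
+```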
+
+For reference, this is how the Operator is uninstalled:
+
+```text
+operator-sdk cleanup hpe-filex-csi-operator -n hpe-storage
+```
+
+## Add a Storage Backend
+
+Once the CSI driver is deployed, two additional resources need to be created to get started with dynamic provisioning of persistent storage: a `Secret` and a `StorageClass`.
+
+!!! tip
+    Naming the `Secret` and `StorageClass` is entirely up to the user. However, to keep in line with the examples on SCOD, it's highly recommended to use the names illustrated here.
+
+### Secret Parameters
+
+All parameters are mandatory and described below.
+
+| Parameter | Description |
+| ----------- | ----------- |
+| endpoint | This is the management hostname or IP address of the actual backend storage system. |
+| username | Backend storage system username with the correct privileges to perform storage management. |
+| password | Backend storage system password. |
+
+Example:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: hpe-file-backend
+ namespace: hpe-storage
+stringData:
+ endpoint: 192.168.1.1
+ username: my-csi-user
+ password: my-secret-password
+```
+
+Create the `Secret` using `kubectl`:
+
+```text
+kubectl create -f secret.yaml
+```
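+
+A quick way to confirm the `Secret` landed in the right `Namespace`:
+
+```text
+kubectl -n hpe-storage get secret/hpe-file-backend
+```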
+
+!!! tip
+    In a real-world scenario it's more practical to name the `Secret` something that makes sense for the organization. It could be the hostname of the backend or the role it carries, e.g. "hpe-greenlake-file-sanjose-prod".
+
+The next step involves [creating a default StorageClass](using.md#base_storageclass_parameters).
diff --git a/docs/filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml b/docs/filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml
new file mode 100644
index 00000000..f40021ad
--- /dev/null
+++ b/docs/filex_csi_driver/examples/deployment/hpegreenlakefilecsidriver-v1.0.0-beta-sample.yaml
@@ -0,0 +1,44 @@
+apiVersion: storage.hpe.com/v1
+kind: HPEGreenLakeFileCSIDriver
+metadata:
+ name: hpegreenlakefilecsidriver-sample
+spec:
+ # Default values copied from /helm-charts/hpe-greenlake-file-csi-driver/values.yaml
+ controller:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+ disableNodeConformance: false
+ imagePullPolicy: IfNotPresent
+ images:
+ csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
+ csiControllerDriver: quay.io/hpestorage/filex-csi-driver:v1.0.0-beta
+ csiNodeDriver: quay.io/hpestorage/filex-csi-driver:v1.0.0-beta
+ csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
+ csiNodeInit: quay.io/hpestorage/filex-csi-init:v1.0.0-beta
+ csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
+ csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
+ csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
+ kubeletRootDir: /var/lib/kubelet
+ node:
+ affinity: {}
+ labels: {}
+ nodeSelector: {}
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ tolerations: []
+
+
diff --git a/docs/filex_csi_driver/index.md b/docs/filex_csi_driver/index.md
new file mode 100644
index 00000000..7b19a27e
--- /dev/null
+++ b/docs/filex_csi_driver/index.md
@@ -0,0 +1,98 @@
+# Introduction
+
+A Container Storage Interface ([CSI](https://github.com/container-storage-interface/spec)) driver for Kubernetes. The HPE GreenLake for File Storage CSI Driver performs data management operations on storage resources.
+
+## Table of Contents
+
+[TOC]
+
+## Features and Capabilities
+
+Below is the official table of CSI features we track and deem readily available for use after they've been officially tested and validated against the [platform matrix](#compatibility_and_support).
+
+| Feature | K8s maturity | Since K8s version | HPE GreenLake for File Storage CSI Driver |
+|---------------------------|-------------------|-------------------|-------------------------------------------|
+| Dynamic Provisioning | GA | 1.13 | 1.0.0 |
+| Volume Expansion | GA | 1.24 | 1.0.0 |
+| Volume Snapshots | GA | 1.20 | 1.0.0 |
+| PVC Data Source | GA | 1.18 | 1.0.0 |
+| Generic Ephemeral Volumes | GA | 1.23 | 1.0.0 |
+
+!!! tip
+ Familiarize yourself with the basic requirements below for running the CSI driver on your Kubernetes cluster. It's then highly recommended to continue installing the CSI driver with either a [Helm chart](deployment.md#helm) or an [Operator](deployment.md#operator).
+
+## Compatibility and Support
+
+These are the combinations HPE has tested and can provide official support services for, per CSI driver release.
+
+!!! caution "Disclaimer"
+    The HPE GreenLake for File Storage CSI Driver is currently **NOT** supported by HPE and is considered beta software.
+
+#### HPE GreenLake for File Storage CSI Driver v1.0.0-beta
+
+Release highlights:
+
+* Initial beta release
+
+<table>
+  <tr>
+    <td>Kubernetes</td>
+    <td>1.28-1.31<sup>1</sup></td>
+  </tr>
+  <tr>
+    <td>Helm Chart</td>
+    <td><a href="https://artifacthub.io/packages/helm/hpe-storage/hpe-greenlake-file-csi-driver">v1.0.0-beta</a> on ArtifactHub</td>
+  </tr>
+  <tr>
+    <td>Worker OS</td>
+    <td>
+      Red Hat Enterprise Linux<sup>2</sup> 7.x, 8.x, 9.x, Red Hat CoreOS 4.14-4.16<br />
+      Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04<br />
+      SUSE Linux Enterprise Server 15 SP4, SP5, SP6 and SLE Micro<sup>4</sup> equivalents
+    </td>
+  </tr>
+  <tr>
+    <td>Platforms<sup>3</sup></td>
+    <td>
+      HPE GreenLake for File Storage MP OS 1.2 or later
+    </td>
+  </tr>
+  <tr>
+    <td>Data Protocols</td>
+    <td>NFSv3 and NFSv4.1</td>
+  </tr>
+</table>
+
+ <sup>1</sup> = For HPE Ezmeral Runtime Enterprise, SUSE Rancher, Mirantis Kubernetes Engine and others; Kubernetes clusters must be deployed within the currently supported range of "Worker OS" platforms listed in the above table. See [partner ecosystems](../partners) for other variations. The lowest tested and known working version is Kubernetes 1.21.<br />
+ <sup>2</sup> = The HPE CSI Driver will recognize CentOS, AlmaLinux and Rocky Linux as RHEL derivatives, and they are supported by HPE. While RHEL 7 and its derivatives will work, the host OS has been EOL'd and support is limited.<br />
+ <sup>3</sup> = Learn about each data platform team's [support commitment](../legal/support/index.md).<br />
+ <sup>4</sup> = SLE Micro nodes may need to be conformed manually; run `transactional-update -n pkg install nfs-client` and reboot if the CSI node driver doesn't start.
+
+## Known Limitations
+
+* Always check with your Kubernetes distribution vendor which CSI features are available for use and officially supported.
+* Inline Ephemeral Volumes are currently not supported. Use Generic Ephemeral Volumes instead as a workaround.
diff --git a/docs/filex_csi_driver/using.md b/docs/filex_csi_driver/using.md
new file mode 100644
index 00000000..aee28dd3
--- /dev/null
+++ b/docs/filex_csi_driver/using.md
@@ -0,0 +1,162 @@
+# Overview
+
+At this point the CSI driver should be installed and configured.
+
+!!! important
+    Most examples below assume there's a `Secret` named "hpe-file-backend" in the "hpe-storage" `Namespace`. Learn how to add `Secrets` in the [Deployment section](deployment.md#add_a_storage_backend).
+
+[TOC]
+
+## PVC Access Modes
+
+The HPE GreenLake for File Storage CSI Driver is primarily a `ReadWriteMany` (RWX) CSI implementation for file-based storage. The CSI driver also supports `ReadWriteOnce` (RWO) and `ReadOnlyMany` (ROX).
+
+| Access Mode | Abbreviation | Use Case |
+| ---------------- | ------------ | -------- |
+| ReadWriteOnce | RWO | For high performance `Pods` where access to the PVC is exclusive to one host at a time. |
+| ReadWriteOncePod | RWOP | Exclusive access by a single `Pod`. Not currently supported by the CSI driver. |
+| ReadWriteMany | RWX | For shared filesystems where multiple `Pods` in the same `Namespace` need simultaneous access to a PVC across multiple nodes. |
+| ReadOnlyMany | ROX | Read-only representation of RWX. |
+
+!!! seealso "ReadWriteOnce and access by multiple Pods"
+    `Pods` that require access to the same "ReadWriteOnce" (RWO) PVC need to reside on the same node and in the same `Namespace`, by using [selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [affinity scheduling rules](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) applied at deploy time. If not configured correctly, the `Pod` will fail to start and throw a "Multi-Attach" error in the event log if the PVC is already attached to a `Pod` scheduled on a different node in the cluster.
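+
+As a concrete starting point, here's a minimal RWX claim. It assumes the "hpe-file-standard" `StorageClass` created later in [Base StorageClass Parameters](#base_storageclass_parameters); the claim name and size are placeholders:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-shared-pvc
+spec:
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 100Gi
+  storageClassName: hpe-file-standard
+```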
+
+## Enabling CSI Snapshots
+
+Support for `VolumeSnapshotClasses` and `VolumeSnapshots` is available from Kubernetes 1.17+. The snapshot CRDs and the common snapshot controller need to be installed manually. As per Kubernetes TAG Storage, these should not be installed as part of a CSI driver and should be deployed by the Kubernetes cluster vendor or user.
+
+Ensure the snapshot CRDs and common snapshot controller haven't been installed already.
+
+```text
+kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
+ volumesnapshotcontents.snapshot.storage.k8s.io \
+ volumesnapshotclasses.snapshot.storage.k8s.io
+```
+
+Vendors may package, name and deploy the common snapshot controller using their own naming conventions. Run the command below and look for workload names that contain "snapshot".
+
+```text
+kubectl get sts,deploy -A
+```
+
+If no prior CRDs or controllers exist, install the snapshot CRDs and common snapshot controller (once per Kubernetes cluster, independent of any CSI drivers).
+
+```text fct_label="HPE GreenLake for File Storage CSI Driver v1.0.0-beta"
+# Kubernetes 1.28-1.31
+git clone https://github.com/kubernetes-csi/external-snapshotter
+cd external-snapshotter
+git checkout tags/v8.0.1 -b hpe-greenlake-for-file-csi-driver-v1.0.0-beta
+kubectl kustomize client/config/crd | kubectl create -f-
+kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f-
+```
+
+!!! tip
+    The [provisioning](#provisioning_concepts) section contains examples of how to create `VolumeSnapshotClass` and `VolumeSnapshot` objects.
+
+## Base StorageClass Parameters
+
+This serves as a base `StorageClass` covering the most common scenario.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: hpe-file-standard
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+provisioner: filex.csi.hpe.com
+parameters:
+ csi.storage.k8s.io/provisioner-secret-name: hpe-file-backend
+ csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-publish-secret-name: hpe-file-backend
+ csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/node-publish-secret-name: hpe-file-backend
+ csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
+ csi.storage.k8s.io/controller-expand-secret-name: hpe-file-backend
+ csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
+ root_export: /my-volumes
+ view_policy: my-view-policy-1
+ vip_pool_name: my-pool-1
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+allowVolumeExpansion: true
+```
+
+!!! important "Important"
+ Replace "hpe-file-backend" with a `Secret` relevant to the backend being referenced. See [Deployment](deployment.md#secret_parameters) on how to create a `Secret`.
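+
+Create the `StorageClass` with `kubectl` (assuming the manifest above was saved as "storageclass.yaml"):
+
+```text
+kubectl create -f storageclass.yaml
+```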
+
+HPE GreenLake for File Storage CSI Driver `StorageClass` parameters.
+
+| Parameter              | Required | Type     | Description |
+| ---------------------- | -------- | -------- | ----------- |
+| root_export | Yes | Text | Folder on the appliance to create new `PersistentVolumes` in. |
+| view_policy | Yes | Text | Existing View Policy on the appliance. |
+| vip_pool_name | No* | Text | Existing VIP Pool on the appliance. |
+| vip_pool_fqdn | No* | Text | Existing DNS name with multiple `A` records that maps to IP addresses in a VIP Pool. |
+| use_local_ip_for_mount | No       | Text     | Hard-coded IP address for the NFS server. Has no effect if `vip_pool_name` or `vip_pool_fqdn` is set. |
+| qos_policy | No | Text | Existing QoS Policy on the appliance. |
+
+\* = `vip_pool_name` and `vip_pool_fqdn` are mutually exclusive.
+
+!!! tip
+    For guidance on how to create the necessary resources on the appliance, such as a VIP Pool, View Policy, and QoS Policy, check out the official [HPE GreenLake for File Storage: Cluster Administrator Guide](https://support.hpe.com/hpesc/public/docDisplay?docId=sd00002658en_us).
+
+### NFS Protocol Version
+
+The CSI driver mounts NFS exports with NFSv3 by default. To use NFSv4.1, add `nfsvers=4` to the top-level "mountOptions" stanza of the `StorageClass`.
+
+```yaml
+mountOptions:
+ - nfsvers=4
+```
+
+## Provisioning Concepts
+
+These instructions are provided as examples of how to use common Kubernetes resources with the CSI driver.
+
+- [Create a PersistentVolumeClaim from a StorageClass](#create_a_persistentvolumeclaim_from_a_storageclass)
+- [Using CSI Snapshots](#using_csi_snapshots)
+- [Expanding PVCs](#expanding_pvcs)
+
+!!! tip "New to Kubernetes?"
+ There's a basic tutorial of how dynamic provisioning of persistent storage on Kubernetes works in the [Video Gallery](../learn/video_gallery/index.md#dynamic_provisioning_of_persistent_storage_on_kubernetes).
+
+### Create a PersistentVolumeClaim from a StorageClass
+
+The steps in the HPE CSI Driver for Kubernetes section of SCOD outline the basic concepts of [creating a PVC from a `StorageClass`](../csi_driver/using.md#create_a_persistentvolumeclaim_from_a_storageclass). Skip the steps that create an HPE CSI Driver for Kubernetes `StorageClass`.
+
+### Using CSI Snapshots
+
+CSI introduces snapshots as native objects in Kubernetes that allow end-users to provision `VolumeSnapshot` objects from an existing `PersistentVolumeClaim`. New PVCs may then be created using the snapshot as a source.
+
+!!! tip
+ Ensure [CSI snapshots are enabled](#enabling_csi_snapshots).
+
+There's a [tutorial in the Video Gallery](../learn/video_gallery/index.md#using_the_hpe_csi_driver_to_create_csi_snapshots_and_clones) on how to use CSI snapshots and clones.
+
+Start by creating a `VolumeSnapshotClass` referencing the `Secret` and defining additional snapshot parameters.
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: hpe-file-snapshot
+  annotations:
+    snapshot.storage.kubernetes.io/is-default-class: "true"
+driver: filex.csi.hpe.com
+deletionPolicy: Delete
+parameters:
+  csi.storage.k8s.io/snapshotter-secret-name: hpe-file-backend
+  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
+  csi.storage.k8s.io/snapshotter-list-secret-name: hpe-file-backend
+  csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage
+
+Once a `VolumeSnapshotClass` has been created, follow the steps outlined in the HPE CSI Driver section for [using CSI snapshots](../csi_driver/using.md#using_csi_snapshots).
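+
+For a quick illustration, a minimal `VolumeSnapshot` of the hypothetical "my-shared-pvc" claim from earlier would look like this; with the "is-default-class" annotation above in place, the default `VolumeSnapshotClass` is picked automatically:
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshot
+metadata:
+  name: my-first-snapshot
+spec:
+  source:
+    persistentVolumeClaimName: my-shared-pvc
+```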
+
+### Expanding PVCs
+
+Instructions on how to expand an existing PVC are available in the HPE CSI Driver section for [expanding PVCs](../csi_driver/using.md#expanding_pvcs).
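+
+Since the base `StorageClass` sets `allowVolumeExpansion: true`, expansion is a matter of raising the capacity request on the PVC. For example, using the hypothetical claim from earlier:
+
+```text
+kubectl patch pvc/my-shared-pvc -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
+```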
+
+## Further Reading
+
+The [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/) contains comprehensive documentation on how to mark up `PersistentVolumeClaim` and `StorageClass` resources to tweak certain behaviors, including `volumeBindingMode` and `mountOptions`.
diff --git a/docs/partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml b/docs/partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml
new file mode 100644
index 00000000..ca79b730
--- /dev/null
+++ b/docs/partners/redhat_openshift/examples/scc/hpe-filex-csi-scc.yaml
@@ -0,0 +1,55 @@
+---
+kind: SecurityContextConstraints
+apiVersion: security.openshift.io/v1
+metadata:
+ name: hpe-filex-csi-controller-scc
+allowHostDirVolumePlugin: true
+allowHostIPC: true
+allowHostNetwork: true
+allowHostPID: true
+allowHostPorts: true
+readOnlyRootFilesystem: false
+requiredDropCapabilities: []
+runAsUser:
+ type: RunAsAny
+seLinuxContext:
+ type: RunAsAny
+users:
+- system:serviceaccount:hpe-storage:hpe-filex-csi-controller-sa
+volumes:
+- hostPath
+- emptyDir
+- projected
+---
+kind: SecurityContextConstraints
+apiVersion: security.openshift.io/v1
+metadata:
+ name: hpe-filex-csi-node-scc
+allowHostDirVolumePlugin: true
+allowHostIPC: true
+allowHostNetwork: true
+allowHostPID: true
+allowHostPorts: true
+allowPrivilegeEscalation: true
+allowPrivilegedContainer: true
+allowedCapabilities:
+- SYS_ADMIN
+defaultAddCapabilities: []
+fsGroup:
+ type: RunAsAny
+groups: []
+priority:
+readOnlyRootFilesystem: false
+requiredDropCapabilities: []
+runAsUser:
+ type: RunAsAny
+seLinuxContext:
+ type: RunAsAny
+supplementalGroups:
+ type: RunAsAny
+users:
+- system:serviceaccount:hpe-storage:hpe-filex-csi-node-sa
+volumes:
+- emptyDir
+- hostPath
+- projected
diff --git a/mkdocs.yml b/mkdocs.yml
index 68847614..83f7e9a6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -50,6 +50,10 @@ nav:
- CONTAINER STORAGE PROVIDERS:
- 'HPE Alletra 5000/6000 and Nimble': 'container_storage_provider/hpe_alletra_6000/index.md'
- 'HPE Alletra Storage MP and Alletra 9000/Primera/3PAR': 'container_storage_provider/hpe_alletra_storage_mp/index.md'
+ - HPE GREENLAKE FOR FILE STORAGE:
+ - 'Overview': 'filex_csi_driver/index.md'
+ - 'Deployment': 'filex_csi_driver/deployment.md'
+ - 'Using': 'filex_csi_driver/using.md'
# - WORKLOAD BLUEPRINTS:
# - 'Running MongoDB on HPE Storage': 'workloads/mongodb/index.md'
# - 'Hybrid Cloud CI/CD pipelines': 'workloads/mongodb/hybrid_cloud_cicd.md'