<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>ocp | Spagno's Blog</title><link>/category/ocp/</link><atom:link href="/category/ocp/index.xml" rel="self" type="application/rss+xml"/><description>ocp</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Tue, 28 Jan 2020 00:00:00 +0000</lastBuildDate><image><url>/images/icon_hua320044a33bf4566f6947aefcc55cba9_1117206_512x512_fill_lanczos_center_2.png</url><title>ocp</title><link>/category/ocp/</link></image><item><title>OCS 4.2 in OCP 4.2.14 - UPI installation in RHV</title><link>/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/</link><pubDate>Tue, 28 Jan 2020 00:00:00 +0000</pubDate><guid>/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/</guid><description>&lt;p>When OCS 4.2 GA was released a few days ago, I was thrilled to finally test and deploy it in my lab. I read the documentation and saw that only vSphere and AWS installations are currently supported. My lab is installed in an RHV environment following the UPI Bare Metal documentation, so at first I was a bit disappointed. Then I realized it could be an interesting challenge to find a different way to use it, and I found one during my late-night tinkering. &lt;strong>All the following procedures are unsupported&lt;/strong>.&lt;/p>
&lt;h2>Table of Contents&lt;/h2>
&lt;nav id="TableOfContents">
&lt;ul>
&lt;li>&lt;a href="#prerequisites">Prerequisites&lt;/a>&lt;/li>
&lt;li>&lt;a href="#issues">Issues&lt;/a>&lt;/li>
&lt;li>&lt;a href="#use-case-scenario">Use case scenario&lt;/a>&lt;/li>
&lt;li>&lt;a href="#challenges">Challenges&lt;/a>&lt;/li>
&lt;li>&lt;a href="#solutions">Solutions&lt;/a>&lt;/li>
&lt;li>&lt;a href="#procedures">Procedures&lt;/a>&lt;/li>
&lt;li>&lt;a href="#other-scenario">Other Scenario&lt;/a>&lt;/li>
&lt;li>&lt;a href="#limits-and-requests">Limits and Requests&lt;/a>&lt;/li>
&lt;li>&lt;a href="#conclusions">Conclusions&lt;/a>&lt;/li>
&lt;li>&lt;a href="#updates">UPDATES&lt;/a>&lt;/li>
&lt;/ul>
&lt;/nav>
&lt;h2 id="prerequisites">Prerequisites&lt;/h2>
&lt;ul>
&lt;li>An installed OCP 4.2.x cluster (the latest version at the time of writing is 4.2.14)&lt;/li>
&lt;li>The ability to create new local disks inside the VMs (if you are using a virtualized environment) or servers with spare disks that can be used&lt;/li>
&lt;/ul>
&lt;h2 id="issues">Issues&lt;/h2>
&lt;p>The official OCS 4.2 installation on vSphere requires a minimum of 3 nodes, each using a 2TB volume (a PVC backed by the default &amp;ldquo;thin&amp;rdquo; storage class) for the OSD volumes, plus 10GB for each of the 3 mon PODs (again via PVCs). It also requires 16 CPUs and 64GB of RAM per node.&lt;/p>
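&lt;p>For comparison, those vSphere defaults correspond to a &amp;ldquo;storageDeviceSets&amp;rdquo; entry roughly like the following sketch (field names follow the ocs.openshift.io/v1 StorageCluster API; the exact manifest the installation wizard generates may differ in detail):&lt;/p>
&lt;pre>&lt;code># Illustrative sketch only, not the wizard-generated manifest
storageDeviceSets:
- name: ocs-deviceset
  count: 1
  replica: 3
  dataPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Ti        # the 2TB default we want to avoid
      storageClassName: thin  # default vSphere StorageClass
      volumeMode: Block
&lt;/code>&lt;/pre>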
&lt;h2 id="use-case-scenario">Use case scenario&lt;/h2>
&lt;ul>
&lt;li>bare-metal installations&lt;/li>
&lt;li>vSphere cluster
&lt;ul>
&lt;li>without a shared datastore&lt;/li>
&lt;li>where you don&amp;rsquo;t want to use the vSphere dynamic provisioner&lt;/li>
&lt;li>without enough space in the datastore&lt;/li>
&lt;li>without enough RAM or CPU&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>other virtualized installations (for example RHV, which is the one used for this article)&lt;/li>
&lt;/ul>
&lt;h2 id="challenges">Challenges&lt;/h2>
&lt;ul>
&lt;li>create a PVC using local disks&lt;/li>
&lt;li>change the default 2TB volumes size&lt;/li>
&lt;li>define a different StorageClass (without using a default one) for the mon PODs and the OSD volumes&lt;/li>
&lt;li>define different limits and requests per component&lt;/li>
&lt;/ul>
&lt;h2 id="solutions">Solutions&lt;/h2>
&lt;ul>
&lt;li>use the local storage operator&lt;/li>
&lt;li>create the ocs-storagecluster resource using a YAML file instead of the new interface. That also means adding the labels to the worker nodes that are going to be used by OCS&lt;/li>
&lt;/ul>
&lt;h2 id="procedures">Procedures&lt;/h2>
&lt;p>Add the disks to the VMs: 2 disks for each node, a 10GB disk for the mon POD and a 100GB disk for the OSD volume.
&lt;figure id="figure-create-10gb-disk">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image01_huf91ece2f4f56397b784f69d5aef88774_132140_2000x2000_fit_lanczos_2.png" data-caption="Create 10GB disk">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image01_huf91ece2f4f56397b784f69d5aef88774_132140_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="932">
&lt;/a>
&lt;figcaption>
Create 10GB disk
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-create-100gb-disk">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image02_huc0b12125c4704a3cf4df5a3a6e2d3533_134566_2000x2000_fit_lanczos_2.png" data-caption="Create 100GB disk">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image02_huc0b12125c4704a3cf4df5a3a6e2d3533_134566_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="938">
&lt;/a>
&lt;figcaption>
Create 100GB disk
&lt;/figcaption>
&lt;/figure>
Repeat for the other 2 nodes.&lt;/p>
&lt;p>&lt;strong>The disks MUST be in the same order and have the same device name in all the nodes. For example, /dev/sdb MUST be the 10GB disk and /dev/sdc the 100GB disk in all the nodes.&lt;/strong>&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# for i in {1..3} ; do ssh core@worker-${i}.ocp42.ssa.mbu.labs.redhat.com lsblk | egrep &amp;quot;^sdb.*|sdc.*$&amp;quot; ; done
sdb      8:16   0   10G  0 disk
sdc      8:32   0  100G  0 disk
sdb      8:16   0   10G  0 disk
sdc      8:32   0  100G  0 disk
sdb      8:16   0   10G  0 disk
sdc      8:32   0  100G  0 disk
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>Install the Local Storage Operator. The official documentation is &lt;a href="https://docs.openshift.com/container-platform/4.2/storage/persistent-storage/persistent-storage-local.html#local-storage-install_persistent-storage-local" target="_blank" rel="noopener">here&lt;/a>.&lt;/p>
&lt;p>Create the namespace&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc new-project local-storage
&lt;/code>&lt;/pre>
&lt;p>Then install the operator from the OperatorHub
&lt;figure id="figure-lso-operator">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image03_hudb332b59b427f6d6c4a634c5296f12d9_161464_2000x2000_fit_lanczos_2.png" data-caption="LSO Operator">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image03_hudb332b59b427f6d6c4a634c5296f12d9_161464_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="941">
&lt;/a>
&lt;figcaption>
LSO Operator
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-subscribe">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image04_hu7d23991fb885c5b0959b1302607cc3fe_119153_2000x2000_fit_lanczos_2.png" data-caption="Subscribe">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image04_hu7d23991fb885c5b0959b1302607cc3fe_119153_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="939">
&lt;/a>
&lt;figcaption>
Subscribe
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Wait for the operator POD to be up and running&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pod -n local-storage
NAME                                     READY   STATUS    RESTARTS   AGE
local-storage-operator-ccbb59b45-nn7ww   1/1     Running   0          57s
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>The Local Storage Operator uses the devices as its reference. The LocalVolume resource scans the nodes matching the selector and creates a StorageClass for each device.&lt;/p>
&lt;p>&lt;strong>Do not use different StorageClass names for the same device.&lt;/strong>&lt;/p>
&lt;p>We need the &lt;strong>Filesystem&lt;/strong> type for these volumes. Prepare the LocalVolume YAML file to create the resource for the mon PODs which use /dev/sdb&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# cat &amp;lt;&amp;lt;EOF &amp;gt; local-storage-filesystem.yaml
apiVersion: &amp;quot;local.storage.openshift.io/v1&amp;quot;
kind: &amp;quot;LocalVolume&amp;quot;
metadata:
  name: &amp;quot;local-disks-fs&amp;quot;
  namespace: &amp;quot;local-storage&amp;quot;
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1.ocp42.ssa.mbu.labs.redhat.com
          - worker-2.ocp42.ssa.mbu.labs.redhat.com
          - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
    - storageClassName: &amp;quot;local-sc&amp;quot;
      volumeMode: Filesystem
      devicePaths:
        - /dev/sdb
EOF
&lt;/code>&lt;/pre>
&lt;p>Then create the resource&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc create -f local-storage-filesystem.yaml
localvolume.local.storage.openshift.io/local-disks-fs created
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>Check that all the PODs are up and running and that the StorageClass and the PVs exist&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pod -n local-storage
NAME                                     READY   STATUS    RESTARTS   AGE
local-disks-fs-local-diskmaker-2bqw4     1/1     Running   0          106s
local-disks-fs-local-diskmaker-8w9rz     1/1     Running   0          106s
local-disks-fs-local-diskmaker-khhm5     1/1     Running   0          106s
local-disks-fs-local-provisioner-g5dgv   1/1     Running   0          106s
local-disks-fs-local-provisioner-hkj69   1/1     Running   0          106s
local-disks-fs-local-provisioner-vhpj8   1/1     Running   0          106s
local-storage-operator-ccbb59b45-nn7ww   1/1     Running   0          15m
[root@utility ~]# oc get sc
NAME       PROVISIONER                    AGE
local-sc   kubernetes.io/no-provisioner   109s
[root@utility ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-68faed78   10Gi       RWO            Delete           Available           local-sc                84s
local-pv-780afdd6   10Gi       RWO            Delete           Available           local-sc                83s
local-pv-b640422f   10Gi       RWO            Delete           Available           local-sc                9s
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>The PVs were created.&lt;/p>
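&lt;p>Behind the scenes, each PV produced by the Local Storage Operator looks roughly like this sketch (names, paths and the node value are illustrative and vary per node):&lt;/p>
&lt;pre>&lt;code># Illustrative sketch of a generated local PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-68faed78
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc
  local:
    path: /mnt/local-storage/local-sc/sdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1.ocp42.ssa.mbu.labs.redhat.com
&lt;/code>&lt;/pre>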
&lt;p>We need the &lt;strong>Block&lt;/strong> type for these volumes. Prepare the LocalVolume YAML file to create the resource for the OSD volumes which use /dev/sdc&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# cat &amp;lt;&amp;lt;EOF &amp;gt; local-storage-block.yaml
apiVersion: &amp;quot;local.storage.openshift.io/v1&amp;quot;
kind: &amp;quot;LocalVolume&amp;quot;
metadata:
  name: &amp;quot;local-disks&amp;quot;
  namespace: &amp;quot;local-storage&amp;quot;
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1.ocp42.ssa.mbu.labs.redhat.com
          - worker-2.ocp42.ssa.mbu.labs.redhat.com
          - worker-3.ocp42.ssa.mbu.labs.redhat.com
  storageClassDevices:
    - storageClassName: &amp;quot;localblock-sc&amp;quot;
      volumeMode: Block
      devicePaths:
        - /dev/sdc
EOF
&lt;/code>&lt;/pre>
&lt;p>Then create the resource&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc create -f local-storage-block.yaml
localvolume.local.storage.openshift.io/local-disks created
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>Check that all the PODs are up and running and that the StorageClass and the PVs exist&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pod -n local-storage
NAME                                     READY   STATUS    RESTARTS   AGE
local-disks-fs-local-diskmaker-2bqw4     1/1     Running   0          6m33s
local-disks-fs-local-diskmaker-8w9rz     1/1     Running   0          6m33s
local-disks-fs-local-diskmaker-khhm5     1/1     Running   0          6m33s
local-disks-fs-local-provisioner-g5dgv   1/1     Running   0          6m33s
local-disks-fs-local-provisioner-hkj69   1/1     Running   0          6m33s
local-disks-fs-local-provisioner-vhpj8   1/1     Running   0          6m33s
local-disks-local-diskmaker-6qpfx        1/1     Running   0          22s
local-disks-local-diskmaker-pw5ql        1/1     Running   0          22s
local-disks-local-diskmaker-rc5hr        1/1     Running   0          22s
local-disks-local-provisioner-9qprp      1/1     Running   0          22s
local-disks-local-provisioner-kkkcm      1/1     Running   0          22s
local-disks-local-provisioner-kxbnn      1/1     Running   0          22s
local-storage-operator-ccbb59b45-nn7ww   1/1     Running   0          19m
[root@utility ~]# oc get sc
NAME            PROVISIONER                    AGE
local-sc        kubernetes.io/no-provisioner   6m36s
localblock-sc   kubernetes.io/no-provisioner   25s
[root@utility ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
local-pv-5c4e718c   100Gi      RWO            Delete           Available           localblock-sc            10s
local-pv-68faed78   10Gi       RWO            Delete           Available           local-sc                 6m13s
local-pv-6a58375e   100Gi      RWO            Delete           Available           localblock-sc            10s
local-pv-780afdd6   10Gi       RWO            Delete           Available           local-sc                 6m12s
local-pv-b640422f   10Gi       RWO            Delete           Available           local-sc                 4m58s
local-pv-d6db37fd   100Gi      RWO            Delete           Available           localblock-sc            5s
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>All the PVs were created.&lt;/p>
&lt;p>Install OCS 4.2. The official documentation is &lt;a href="https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.2/" target="_blank" rel="noopener">here&lt;/a>.&lt;/p>
&lt;p>Create the namespace &amp;ldquo;&lt;strong>openshift-storage&lt;/strong>&amp;rdquo;&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# cat &amp;lt;&amp;lt;EOF &amp;gt; ocs-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: &amp;quot;true&amp;quot;
EOF
[root@utility ~]# oc create -f ocs-namespace.yaml
namespace/openshift-storage created
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>Add the labels to the workers&lt;/p>
&lt;pre>&lt;code>oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com &amp;quot;cluster.ocs.openshift.io/openshift-storage=&amp;quot; --overwrite
oc label node worker-1.ocp42.ssa.mbu.labs.redhat.com &amp;quot;topology.rook.io/rack=rack0&amp;quot; --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com &amp;quot;cluster.ocs.openshift.io/openshift-storage=&amp;quot; --overwrite
oc label node worker-2.ocp42.ssa.mbu.labs.redhat.com &amp;quot;topology.rook.io/rack=rack1&amp;quot; --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com &amp;quot;cluster.ocs.openshift.io/openshift-storage=&amp;quot; --overwrite
oc label node worker-3.ocp42.ssa.mbu.labs.redhat.com &amp;quot;topology.rook.io/rack=rack2&amp;quot; --overwrite
&lt;/code>&lt;/pre>
&lt;p>Install the operator from the web interface
&lt;figure id="figure-ocs-operator">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image05_hucc463c5272acf47127084afaabeb40d0_224451_2000x2000_fit_lanczos_2.png" data-caption="OCS Operator">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image05_hucc463c5272acf47127084afaabeb40d0_224451_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="939">
&lt;/a>
&lt;figcaption>
OCS Operator
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-subscribe">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image06_hub55208ebac7a3ca05a087e70f7c75d81_154917_2000x2000_fit_lanczos_2.png" data-caption="Subscribe">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image06_hub55208ebac7a3ca05a087e70f7c75d81_154917_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="938">
&lt;/a>
&lt;figcaption>
Subscribe
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Check on the web interface if the operator is &lt;strong>Up to date&lt;/strong>
&lt;figure id="figure-installed-operators">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image07_hu0709278c7cd292a514da0199bd84d6c3_110741_2000x2000_fit_lanczos_2.png" data-caption="Installed Operators">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image07_hu0709278c7cd292a514da0199bd84d6c3_110741_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="939">
&lt;/a>
&lt;figcaption>
Installed Operators
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>And wait for the PODs to be up and running&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pod -n openshift-storage
NAME                                  READY   STATUS    RESTARTS   AGE
noobaa-operator-85d86479fc-n8vp5      1/1     Running   0          106s
ocs-operator-65cf57b98b-rk48c         1/1     Running   0          106s
rook-ceph-operator-59d78cf8bd-4zcsz   1/1     Running   0          106s
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>Create the OCS Cluster Service YAML file&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# cat &amp;lt;&amp;lt;EOF &amp;gt; ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF
&lt;/code>&lt;/pre>
&lt;p>Notice the &amp;ldquo;&lt;strong>monPVCTemplate&lt;/strong>&amp;rdquo; section, where we define the &amp;ldquo;local-sc&amp;rdquo; StorageClass, and the &amp;ldquo;&lt;strong>storageDeviceSets&lt;/strong>&amp;rdquo; section, where we set the different storage sizes and the &amp;ldquo;localblock-sc&amp;rdquo; StorageClass used by the OSD volumes.&lt;/p>
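&lt;p>The &amp;ldquo;storageDeviceSets&amp;rdquo; section is also where you would scale capacity later. A sketch, assuming one additional 100GB localblock-sc PV is available on each node:&lt;/p>
&lt;pre>&lt;code># Illustrative sketch: raising count schedules one more OSD per replica,
# each consuming another Available localblock-sc PV
storageDeviceSets:
- count: 2     # was 1; 2 x 3 replicas = 6 OSDs in total
  replica: 3
&lt;/code>&lt;/pre>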
&lt;p>Now we can create the resource&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc create -f ocs-cluster-service.yaml
storagecluster.ocs.openshift.io/ocs-storagecluster created
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>During the creation of the resources, we can see the PVCs being bound to the Local Storage PVs&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pvc -n openshift-storage
NAME              STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rook-ceph-mon-a   Bound    local-pv-68faed78   10Gi       RWO            local-sc       13s
rook-ceph-mon-b   Bound    local-pv-b640422f   10Gi       RWO            local-sc       8s
rook-ceph-mon-c   Bound    local-pv-780afdd6   10Gi       RWO            local-sc       3s
[root@utility ~]# oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                               STORAGECLASS    REASON   AGE
local-pv-5c4e718c   100Gi      RWO            Delete           Available                                       localblock-sc            28m
local-pv-68faed78   10Gi       RWO            Delete           Bound       openshift-storage/rook-ceph-mon-a   local-sc                 34m
local-pv-6a58375e   100Gi      RWO            Delete           Available                                       localblock-sc            28m
local-pv-780afdd6   10Gi       RWO            Delete           Bound       openshift-storage/rook-ceph-mon-c   local-sc                 34m
local-pv-b640422f   10Gi       RWO            Delete           Bound       openshift-storage/rook-ceph-mon-b   local-sc                 33m
local-pv-d6db37fd   100Gi      RWO            Delete           Available                                       localblock-sc            28m
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>And now we can see the OSD PVCs bound to their PVs&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pvc -n openshift-storage
NAME                      STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS    AGE
ocs-deviceset-0-0-7j2kj   Bound    local-pv-6a58375e   100Gi      RWO            localblock-sc   3s
ocs-deviceset-1-0-lmd97   Bound    local-pv-d6db37fd   100Gi      RWO            localblock-sc   3s
ocs-deviceset-2-0-dnfbd   Bound    local-pv-5c4e718c   100Gi      RWO            localblock-sc   3s
[root@utility ~]# oc get pv | grep localblock-sc
local-pv-5c4e718c                          100Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-2-0-dnfbd   localblock-sc                          31m
local-pv-6a58375e                          100Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-0-0-7j2kj   localblock-sc                          31m
local-pv-d6db37fd                          100Gi      RWO            Delete           Bound    openshift-storage/ocs-deviceset-1-0-lmd97   localblock-sc                          31m
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>This is the first PVC created inside the OCS cluster, used by &lt;strong>noobaa&lt;/strong>&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pvc -n openshift-storage
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
db-noobaa-core-0          Bound    pvc-d8dbb86f-3d83-11ea-ac51-001a4a16017d   50Gi       RWO            ocs-storagecluster-ceph-rbd   72s
&lt;/code>&lt;/pre>
&lt;p>Wait for all the PODs to be up and running&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pod -n openshift-storage
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-2qkl8                                            3/3     Running     0          5m31s
csi-cephfsplugin-4pbvl                                            3/3     Running     0          5m31s
csi-cephfsplugin-j8w82                                            3/3     Running     0          5m31s
csi-cephfsplugin-provisioner-647cd6996c-6mw9t                     4/4     Running     0          5m31s
csi-cephfsplugin-provisioner-647cd6996c-pbrxs                     4/4     Running     0          5m31s
csi-rbdplugin-9nj85                                               3/3     Running     0          5m31s
csi-rbdplugin-jmnqz                                               3/3     Running     0          5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-jk5lm                        4/4     Running     0          5m31s
csi-rbdplugin-provisioner-6b8ff67dc4-rxjhq                        4/4     Running     0          5m31s
csi-rbdplugin-vrzjq                                               3/3     Running     0          5m31s
noobaa-core-0                                                     1/2     Running     0          2m34s
noobaa-operator-85d86479fc-n8vp5                                  1/1     Running     0          13m
ocs-operator-65cf57b98b-rk48c                                     0/1     Running     0          13m
rook-ceph-drain-canary-worker-1.ocp42.ssa.mbu.labs.redhat.w2cqv   1/1     Running     0          2m41s
rook-ceph-drain-canary-worker-2.ocp42.ssa.mbu.labs.redhat.whv6s   1/1     Running     0          2m40s
rook-ceph-drain-canary-worker-3.ocp42.ssa.mbu.labs.redhat.ll8gj   1/1     Running     0          2m40s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-d7d64976d8cm7   1/1     Running     0          2m28s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-864fdf78ppnpm   1/1     Running     0          2m27s
rook-ceph-mgr-a-5fd6f7578c-wbsb6                                  1/1     Running     0          3m24s
rook-ceph-mon-a-bffc546c8-vjrfb                                   1/1     Running     0          4m26s
rook-ceph-mon-b-8499dd679c-6pzm9                                  1/1     Running     0          4m11s
rook-ceph-mon-c-77cd5dd54-64z52                                   1/1     Running     0          3m46s
rook-ceph-operator-59d78cf8bd-4zcsz                               1/1     Running     0          13m
rook-ceph-osd-0-b46fbc7d7-hc2wz                                   1/1     Running     0          2m41s
rook-ceph-osd-1-648c5dc8d6-prwks                                  1/1     Running     0          2m40s
rook-ceph-osd-2-546d4d77fb-qb68j                                  1/1     Running     0          2m40s
rook-ceph-osd-prepare-ocs-deviceset-0-0-7j2kj-s72g4               0/1     Completed   0          2m56s
rook-ceph-osd-prepare-ocs-deviceset-1-0-lmd97-27chl               0/1     Completed   0          2m56s
rook-ceph-osd-prepare-ocs-deviceset-2-0-dnfbd-s7z8v               0/1     Completed   0          2m56s
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-d7b4b5b6hnpr   1/1     Running     0          2m12s
&lt;/code>&lt;/pre>
&lt;p>Our installation is now complete and OCS is fully operational.&lt;/p>
&lt;p>Now we can browse the &lt;strong>noobaa management console&lt;/strong> (for now it only works in Chrome) and create a new user to test the S3 object storage
&lt;figure id="figure-user-page">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image08_huee4935129fd9497ca4136ecfea3ebb6b_81942_2000x2000_fit_lanczos_2.png" data-caption="User Page">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image08_huee4935129fd9497ca4136ecfea3ebb6b_81942_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="941">
&lt;/a>
&lt;figcaption>
User Page
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-new-user">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image09_huba0232e7744d47b31452eb14a33320f7_111063_2000x2000_fit_lanczos_2.png" data-caption="New User">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image09_huba0232e7744d47b31452eb14a33320f7_111063_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="939">
&lt;/a>
&lt;figcaption>
New User
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-permission">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image10_hue29fb602184f80148ff8ed706fd5e92f_119352_2000x2000_fit_lanczos_2.png" data-caption="Permission">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image10_hue29fb602184f80148ff8ed706fd5e92f_119352_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1918" height="944">
&lt;/a>
&lt;figcaption>
Permission
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-secret-page">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image11_hu3c57ae365f6b98d4e8d2c035a0b259d4_113210_2000x2000_fit_lanczos_2.png" data-caption="Secret Page">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image11_hu3c57ae365f6b98d4e8d2c035a0b259d4_113210_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="940">
&lt;/a>
&lt;figcaption>
Secret Page
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Get the endpoint for the S3 object server&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get route s3 -o jsonpath='{.spec.host}' -n openshift-storage
s3-openshift-storage.apps.ocp42.ssa.mbu.labs.redhat.com
&lt;/code>&lt;/pre>
&lt;p>Test it with your preferred S3 client (I use Cyberduck on the Windows desktop I&amp;rsquo;m using to write this article)
&lt;figure id="figure-cyberduck-login">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image12_hud9d6b67c0766fae096774e5bafde7a32_59192_2000x2000_fit_lanczos_2.png" data-caption="Cyberduck Login">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image12_hud9d6b67c0766fae096774e5bafde7a32_59192_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1085" height="804">
&lt;/a>
&lt;figcaption>
Cyberduck Login
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-list-buckets">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image13_hu3c3c67248804906ad6f824f39566b4e3_29810_2000x2000_fit_lanczos_2.png" data-caption="List Buckets">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image13_hu3c3c67248804906ad6f824f39566b4e3_29810_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1008" height="341">
&lt;/a>
&lt;figcaption>
List Buckets
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Create something to check that you can write
&lt;figure id="figure-new-file">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image14_hu12e9d6be6db438fb633e7a56806ab772_27250_2000x2000_fit_lanczos_2.png" data-caption="New file">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image14_hu12e9d6be6db438fb633e7a56806ab772_27250_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="819" height="304">
&lt;/a>
&lt;figcaption>
New file
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>It works!&lt;/p>
&lt;p>Set the &lt;strong>ocs-storagecluster-cephfs&lt;/strong> StorageClass as the default one&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc patch storageclass ocs-storagecluster-cephfs -p '{&amp;quot;metadata&amp;quot;: {&amp;quot;annotations&amp;quot;:{&amp;quot;storageclass.kubernetes.io/is-default-class&amp;quot;:&amp;quot;true&amp;quot;}}}'
storageclass.storage.k8s.io/ocs-storagecluster-cephfs patched
[root@utility ~]#
&lt;/code>&lt;/pre>
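&lt;p>The patch simply sets this annotation on the StorageClass, shown here as a YAML fragment for reference:&lt;/p>
&lt;pre>&lt;code>metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: &amp;quot;true&amp;quot;
&lt;/code>&lt;/pre>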
&lt;p>Test the &lt;strong>ocs-storagecluster-cephfs&lt;/strong> StorageClass by adding persistent storage to the registry&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc edit configs.imageregistry.operator.openshift.io
storage:
  pvc:
    claim:
&lt;/code>&lt;/pre>
&lt;p>Check the PVC created and wait for the new POD to be up and running&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc get pvc -n openshift-image-registry
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
image-registry-storage   Bound    pvc-ba4a07c1-3d86-11ea-ad40-001a4a1601e7   100Gi      RWX            ocs-storagecluster-cephfs   12s
[root@utility ~]# oc get pod -n openshift-image-registry
NAME                                               READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-655fb7779f-pn7ms   2/2     Running   0          36h
image-registry-5bdf96556-98jbk                     1/1     Running   0          105s
node-ca-9gbxg                                      1/1     Running   1          35h
node-ca-fzcrm                                      1/1     Running   0          35h
node-ca-gr928                                      1/1     Running   1          35h
node-ca-jkfzf                                      1/1     Running   1          35h
node-ca-knlcj                                      1/1     Running   0          35h
node-ca-mb6zh                                      1/1     Running   0          35h
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>Test it in a new project &lt;strong>test&lt;/strong>&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc new-project test
Now using project &amp;quot;test&amp;quot; on server &amp;quot;https://api.ocp42.ssa.mbu.labs.redhat.com:6443&amp;quot;.
You can add applications to this project with the 'new-app' command. For example, try:
    oc new-app django-psql-example
to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:
    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
[root@utility ~]# podman pull alpine
Trying to pull docker.io/library/alpine...
Getting image source signatures
Copying blob c9b1b535fdd9 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a
[root@utility ~]# podman login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY_URL --tls-verify=false
Login Succeeded!
[root@utility ~]# podman tag alpine $REGISTRY_URL/test/alpine
[root@utility ~]# podman push $REGISTRY_URL/test/alpine --tls-verify=false
Getting image source signatures
Copying blob 5216338b40a7 done
Copying config e7d92cdc71 done
Writing manifest to image destination
Storing signatures
[root@utility ~]# oc get is -n test
NAME     IMAGE REPOSITORY                                                                        TAGS     UPDATED
alpine   default-route-openshift-image-registry.apps.ocp42.ssa.mbu.labs.redhat.com/test/alpine   latest   3 minutes ago
[root@utility ~]#
&lt;/code>&lt;/pre>
&lt;p>The registry works!&lt;/p>
&lt;h2 id="other-scenario">Other Scenario&lt;/h2>
&lt;p>If your cluster is deployed in vSphere and uses the default &amp;ldquo;&lt;strong>thin&lt;/strong>&amp;rdquo; StorageClass but your datastore isn&amp;rsquo;t big enough, you can start from the OCS installation.
When it comes to creating the OCS Cluster Service, create a YAML file with your desired sizes and without storageClassName (it will use the default one).
You can also remove the &amp;ldquo;&lt;strong>monPVCTemplate&lt;/strong>&amp;rdquo; if you are not interested in changing the storage size.&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# cat &amp;lt;&amp;lt;EOF &amp;gt; ocs-cluster-service.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: ''
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: ''
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources: {}
EOF
&lt;/code>&lt;/pre>
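&lt;p>To verify that the default StorageClass is actually picked up, a minimal test PVC can omit &lt;strong>storageClassName&lt;/strong> entirely (a sketch; the name &lt;strong>test-pvc&lt;/strong> is hypothetical):&lt;/p>

```yaml
# Minimal PVC sketch: with storageClassName omitted, the cluster's
# default StorageClass is used to provision the volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```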
&lt;h2 id="limits-and-requests">Limits and Requests&lt;/h2>
&lt;p>Limits and Requests, by default, are set as follows&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com
...
Namespace          Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                              ------------  ----------  ---------------  -------------  ---
openshift-storage  noobaa-core-0                     4 (25%)       4 (25%)     8Gi (12%)        8Gi (12%)      13m
openshift-storage  rook-ceph-mgr-a-676d4b4796-54mtk  1 (6%)        1 (6%)      3Gi (4%)         3Gi (4%)       12m
openshift-storage  rook-ceph-mon-b-7d7747d8b4-k9txg  1 (6%)        1 (6%)      2Gi (3%)         2Gi (3%)       13m
openshift-storage  rook-ceph-osd-1-854847fd4c-482bt  1 (6%)        2 (12%)     4Gi (6%)         8Gi (12%)      12m
...
&lt;/code>&lt;/pre>
&lt;p>We can create our new YAML file to change those settings in the &lt;strong>ocs-storagecluster&lt;/strong> StorageCluster resource&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# cat &amp;lt;&amp;lt;EOF &amp;gt; ocs-cluster-service-modified.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mon:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    mgr:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-core:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
    noobaa-db:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: 'local-sc'
      volumeMode: Filesystem
  storageDeviceSets:
  - count: 1
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: 'localblock-sc'
        volumeMode: Block
    name: ocs-deviceset
    placement: {}
    portable: true
    replica: 3
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 4Gi
EOF
&lt;/code>&lt;/pre>
&lt;p>And apply&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc apply -f ocs-cluster-service-modified.yaml
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
storagecluster.ocs.openshift.io/ocs-storagecluster configured
&lt;/code>&lt;/pre>
&lt;p>We have to wait for the operator to read the new configuration and apply it&lt;/p>
&lt;pre>&lt;code>[root@utility ~]# oc describe node worker-1.ocp42.ssa.mbu.labs.redhat.com
...
Namespace          Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                              ------------  ----------  ---------------  -------------  ---
openshift-storage  noobaa-core-0                     2 (12%)       2 (12%)     2Gi (3%)         2Gi (3%)       23s
openshift-storage  rook-ceph-mgr-a-54f87f84fb-pm4rn  1 (6%)        1 (6%)      1Gi (1%)         1Gi (1%)       56s
openshift-storage  rook-ceph-mon-b-854f549cd4-bgdb6  1 (6%)        1 (6%)      1Gi (1%)         1Gi (1%)       46s
openshift-storage  rook-ceph-osd-1-ff56d545c-p7hvn   1 (6%)        1 (6%)      4Gi (6%)         4Gi (6%)       50s
...
&lt;/code>&lt;/pre>
&lt;p>And now we have our PODs with the new configurations applied.&lt;/p>
&lt;p>&lt;strong>The OSD PODs won&amp;rsquo;t start if you choose too low values.&lt;/strong>&lt;/p>
&lt;p>Sections:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>mon&lt;/strong> for &lt;strong>rook-ceph-mon&lt;/strong>&lt;/li>
&lt;li>&lt;strong>mgr&lt;/strong> for &lt;strong>rook-ceph-mgr&lt;/strong>&lt;/li>
&lt;li>&lt;strong>noobaa-core&lt;/strong> and &lt;strong>noobaa-db&lt;/strong> for the 2 containers in the pod &lt;strong>noobaa-core-0&lt;/strong>&lt;/li>
&lt;li>&lt;strong>mds&lt;/strong> for &lt;strong>rook-ceph-mds-ocs-storagecluster-cephfilesystem&lt;/strong>&lt;/li>
&lt;li>&lt;strong>rgw&lt;/strong> for &lt;strong>rook-ceph-rgw-ocs-storagecluster-cephobjectstore&lt;/strong>&lt;/li>
&lt;li>the &lt;strong>resources&lt;/strong> section in the end for &lt;strong>rook-ceph-osd&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>rgw and mds sections work only the first time we create the resource.&lt;/strong>&lt;/p>
&lt;pre>&lt;code>---
spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
---
&lt;/code>&lt;/pre>
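&lt;p>Since those two sections are only honored at creation time, they have to be part of the initial StorageCluster manifest, for example (a sketch reusing the same name and namespace as above):&lt;/p>

```yaml
# Sketch: mds and rgw resource sections placed in the initial
# StorageCluster manifest (they are ignored on later updates)
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  resources:
    mds:
      limits:
        cpu: 2
        memory: 4Gi
      requests:
        cpu: 2
        memory: 4Gi
    rgw:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
```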
&lt;h2 id="conclusions">Conclusions&lt;/h2>
&lt;p>Now you can enjoy your brand-new OCS 4.2 in OCP 4.2.x&lt;br>
Much has changed compared to OCS 3.x, for example the use of PVCs instead of directly attached disks, and for now there are many limitations for sustainability and supportability reasons.&lt;br>
We will have to wait for a fully supported installation for these scenarios.&lt;/p>
&lt;h2 id="updates">UPDATES&lt;/h2>
&lt;ul>
&lt;li>The cluster used to write this article has been updated from 4.2.14 to 4.2.16 and then from 4.2.16 to 4.3.0.&lt;/li>
&lt;/ul>
&lt;p>The current OCS setup is still working
&lt;figure id="figure-upgrade">
&lt;a data-fancybox="" href="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image15_hub697660b15a4b1912b574e18f79c4cfc_119986_2000x2000_fit_lanczos_2.png" data-caption="Upgrade">
&lt;img data-src="/post/ocs-42-in-ocp-4214-upi-installation-in-rhv/images/image15_hub697660b15a4b1912b574e18f79c4cfc_119986_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="881">
&lt;/a>
&lt;figcaption>
Upgrade
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;ul>
&lt;li>Added Requests and Limits configurations.&lt;/li>
&lt;/ul></description></item><item><title>Heketi Integrated Metrics with Prometheus and Grafana in OCP 3.11</title><link>/post/heketi-integrated-metrics-with-prometheus-and-grafana-in-ocp-311/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>/post/heketi-integrated-metrics-with-prometheus-and-grafana-in-ocp-311/</guid><description>&lt;p>Since I started using OCP with GlusterFS, one of the biggest blockers was the lack of metrics for GlusterFS. Now we have GlusterFS 3.4.0 with Heketi 7, which ships an integrated metrics endpoint for Prometheus.
Searching in our documentation, I found &lt;a href="https://redhatstorage.redhat.com/category/architects/" title="https://redhatstorage.redhat.com/category/architects/" target="_blank" rel="noopener">Architects – Red Hat Storage&lt;/a> but it doesn&amp;rsquo;t work for OCP 3.11 because the entire Prometheus framework has changed.
I found &lt;a href="https://bugzilla.redhat.com/show_bug.cgi?id=1644665" title="https://bugzilla.redhat.com/show_bug.cgi?id=1644665" target="_blank" rel="noopener">https://bugzilla.redhat.com/show_bug.cgi?id=1644665&lt;/a> which brought me to an internal Red Hat document, and from there I started putting together all the pieces of the puzzle.&lt;/p>
&lt;p>In my lab I installed OCP 3.11.43 with RHGS 3.4.0 with two separate GlusterFS clusters&lt;/p>
&lt;ul>
&lt;li>glusterfs&lt;/li>
&lt;/ul>
&lt;p>to provision PVC for the apps&lt;/p>
&lt;ul>
&lt;li>glusterfs_registry&lt;/li>
&lt;/ul>
&lt;p>to provision PVC for the infrastructure components&lt;/p>
&lt;pre>&lt;code>[glusterfs]
ocp-node-gluster1.example.com glusterfs_devices='[ &amp;quot;/dev/sdc&amp;quot;, &amp;quot;/dev/sdd&amp;quot; ]'
ocp-node-gluster2.example.com glusterfs_devices='[ &amp;quot;/dev/sdc&amp;quot;, &amp;quot;/dev/sdd&amp;quot; ]'
ocp-node-gluster3.example.com glusterfs_devices='[ &amp;quot;/dev/sdc&amp;quot;, &amp;quot;/dev/sdd&amp;quot; ]'
[glusterfs_registry]
ocp-node-gluster4.example.com glusterfs_devices='[ &amp;quot;/dev/sdc&amp;quot;, &amp;quot;/dev/sdd&amp;quot; ]'
ocp-node-gluster5.example.com glusterfs_devices='[ &amp;quot;/dev/sdc&amp;quot;, &amp;quot;/dev/sdd&amp;quot; ]'
ocp-node-gluster6.example.com glusterfs_devices='[ &amp;quot;/dev/sdc&amp;quot;, &amp;quot;/dev/sdd&amp;quot; ]'
&lt;/code>&lt;/pre>
&lt;p>I checked the Heketi metrics endpoint&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# oc get svc -n ocs-infra
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
heketi-db-registry-endpoints   ClusterIP   172.30.34.253   &amp;lt;none&amp;gt;        1/TCP      10h
heketi-registry                ClusterIP   172.30.17.135   &amp;lt;none&amp;gt;        8080/TCP   10h
[root@ocp-master1 ~]# curl 172.30.17.135:8080/metrics -s | head -n1
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
[root@ocp-master1 ~]# oc get svc -n ocs-app
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
heketi-db-storage-endpoints   ClusterIP   172.30.227.21    &amp;lt;none&amp;gt;        1/TCP      10h
heketi-storage                ClusterIP   172.30.138.116   &amp;lt;none&amp;gt;        8080/TCP   10h
[root@ocp-master1 ~]# curl 172.30.138.116:8080/metrics -s | head -n1
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>Prometheus uses servicemonitors, new resources introduced by the Prometheus Operator which describe the set of targets to be monitored in OCP 3.11 (more information about Prometheus Operator &lt;a href="https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html" target="_blank" rel="noopener">here&lt;/a>), so I had to create those objects:&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# cat heketi-infra-sm.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: heketi-infra
  labels:
    k8s-app: heketi-infra
  namespace: openshift-monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: heketi
    scheme: http
    targetPort: 0
  namespaceSelector:
    matchNames:
    - ocs-infra
  selector:
    matchLabels:
      heketi: registry-service
[root@ocp-master1 ~]# oc create -f heketi-infra-sm.yaml -n openshift-monitoring
servicemonitor.monitoring.coreos.com/heketi-infra created
[root@ocp-master1 ~]#
[root@ocp-master1 ~]# cat heketi-app-sm.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: heketi-app
  labels:
    k8s-app: heketi-app
  namespace: openshift-monitoring
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    interval: 30s
    port: heketi
    scheme: http
    targetPort: 0
  namespaceSelector:
    matchNames:
    - ocs-app
  selector:
    matchLabels:
      heketi: storage-service
[root@ocp-master1 ~]# oc create -f heketi-app-sm.yaml -n openshift-monitoring
servicemonitor.monitoring.coreos.com/heketi-app created
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>The two selectors were taken from the labels of the Heketi services:&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# oc project ocs-infra
Now using project &amp;quot;ocs-infra&amp;quot;
[root@ocp-master1 ~]# oc describe svc heketi-registry
Name:              heketi-registry
Namespace:         ocs-infra
Labels:            glusterfs=heketi-registry-service
                  heketi=registry-service
...
[root@ocp-master1 ~]# oc project ocs-app
Now using project &amp;quot;ocs-app&amp;quot;
[root@ocp-master1 ~]# oc describe svc heketi-storage
Name:              heketi-storage
Namespace:         ocs-app
Labels:            glusterfs=heketi-storage-service
                   heketi=storage-service
...
&lt;/code>&lt;/pre>
&lt;p>Finally, add the cluster role to the prometheus-k8s service account:&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:openshift-monitoring:prometheus-k8s -n openshift-monitoring
cluster role &amp;quot;cluster-reader&amp;quot; added: &amp;quot;system:serviceaccount:openshift-monitoring:prometheus-k8s&amp;quot;
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>After about 1 minute, Prometheus loaded the new servicemonitors:
&lt;figure id="figure-targets">
&lt;a data-fancybox="" href="/post/heketi-integrated-metrics-with-prometheus-and-grafana-in-ocp-311/images/image01_hubdc518e8460738861c6421d1e826cdb8_51129_2000x2000_fit_lanczos_2.png" data-caption="targets">
&lt;img data-src="/post/heketi-integrated-metrics-with-prometheus-and-grafana-in-ocp-311/images/image01_hubdc518e8460738861c6421d1e826cdb8_51129_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1918" height="858">
&lt;/a>
&lt;figcaption>
targets
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>In the Grafana shipped with OCP 3.11, to have admin privileges you MUST have a user &amp;ldquo;admin&amp;rdquo; with the cluster-admin cluster role. I created the user (htpasswd Identity Provider):&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# htpasswd /etc/origin/master/htpasswd admin
New password:
Re-type new password:
Updating password for user admin
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>And added it to the cluster role:&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# oc adm policy add-cluster-role-to-user cluster-admin admin
cluster role &amp;quot;cluster-admin&amp;quot; added: &amp;quot;admin&amp;quot;
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>In the previous Google Drive document you can find this email &lt;a href="http://post-office.corp.redhat.com/archives/sme-storage/2018-October/msg00388.html" title="http://post-office.corp.redhat.com/archives/sme-storage/2018-October/msg00388.html" target="_blank" rel="noopener">http://post-office.corp.redhat.com/archives/sme-storage/2018-October/msg00388.html&lt;/a> which contains a Grafana dashboard for these metrics. I added some variables to manage more than 3 nodes and both GlusterFS clusters. Finally, the new &lt;a href="https://drive.google.com/file/d/1kXNCS56jiQ6hX3meyWME1jc7ZlPnOI_E/view?usp=sharing" target="_blank" rel="noopener">dashboard&lt;/a> was imported:
&lt;figure id="figure-grafana">
&lt;a data-fancybox="" href="/post/heketi-integrated-metrics-with-prometheus-and-grafana-in-ocp-311/images/image02_hu31329ceddc522bd8a7741e03f5d4c9c6_112885_2000x2000_fit_lanczos_2.png" data-caption="grafana">
&lt;img data-src="/post/heketi-integrated-metrics-with-prometheus-and-grafana-in-ocp-311/images/image02_hu31329ceddc522bd8a7741e03f5d4c9c6_112885_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1911" height="823">
&lt;/a>
&lt;figcaption>
grafana
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;h2 id="please-note">Please Note&lt;/h2>
&lt;p>&lt;strong>Grafana uses ephemeral storage: if the pod is destroyed you MUST re-import this dashboard.&lt;/strong>&lt;/p>
&lt;p>Enjoy your metrics!&lt;/p></description></item><item><title>Google OAuth as Identity Provider with Red Hat login in OCP 3.11</title><link>/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/</link><pubDate>Wed, 13 Mar 2019 00:00:00 +0000</pubDate><guid>/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/</guid><description>&lt;p>When I was at Red Hat, I needed to grant access to my lab to some of my colleagues.&lt;br>
The lab uses the htpasswd identity provider and it was really painful to add new users to the file each time.&lt;br>
So, an idea popped up: could I use the Google OAuth identity provider with our Red Hat login?&lt;br>
Well, it can be done! This is a detailed how-to.&lt;/p>
&lt;p>I logged in to &lt;a href="https://console.developers.google.com/apis/dashboard" title="https://console.developers.google.com/apis/dashboard" target="_blank" rel="noopener">https://console.developers.google.com/apis/dashboard&lt;/a> with my Red Hat credentials.&lt;/p>
&lt;p>At the top of the page, click the select box next to the &lt;strong>Google APIs&lt;/strong> logo
&lt;figure id="figure-google-apis">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image01_hu8b562bbacd33de561e2312c31dd6a7df_59958_2000x2000_fit_lanczos_2.png" data-caption="google apis">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image01_hu8b562bbacd33de561e2312c31dd6a7df_59958_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
google apis
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Choose &lt;strong>REDHAT.COM&lt;/strong> in the &lt;strong>Select from&lt;/strong> box and then click &lt;strong>NEW PROJECT&lt;/strong>
&lt;figure id="figure-location">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image02_hub8b4af18d339ee45d3b7591cfb769979_99387_2000x2000_fit_lanczos_2.png" data-caption="location">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image02_hub8b4af18d339ee45d3b7591cfb769979_99387_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
location
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Choose your &lt;strong>Project Name&lt;/strong> and be sure that the &lt;strong>Location&lt;/strong> is &lt;strong>redhat.com&lt;/strong>. Then click &lt;strong>CREATE&lt;/strong>
&lt;figure id="figure-new-project">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image01_hu8b562bbacd33de561e2312c31dd6a7df_59958_2000x2000_fit_lanczos_2.png" data-caption="new project">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image01_hu8b562bbacd33de561e2312c31dd6a7df_59958_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
new project
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>On the left, you&amp;rsquo;ll find the &lt;strong>credentials&lt;/strong> section: click on it
&lt;figure id="figure-credentials">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image03_hu1932725998a2900f0ac49b549d3c78de_70701_2000x2000_fit_lanczos_2.png" data-caption="credentials">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image03_hu1932725998a2900f0ac49b549d3c78de_70701_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
credentials
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Under &lt;strong>credentials&lt;/strong>, click on the tab &lt;strong>OAuth Consent Screen&lt;/strong>
&lt;figure id="figure-oauth-consent-screen">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image04_hud4b539d1c646604f8b87321bbd04279a_72622_2000x2000_fit_lanczos_2.png" data-caption="oauth consent screen">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image04_hud4b539d1c646604f8b87321bbd04279a_72622_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
oauth consent screen
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Now we have to configure the &lt;strong>Application type&lt;/strong> as &lt;strong>internal&lt;/strong>, add your OCP domain in &lt;strong>Authorized Domain&lt;/strong> and set your &lt;strong>Application Name&lt;/strong>. Then click &lt;strong>save&lt;/strong> and you&amp;rsquo;ll be redirected to the &lt;strong>credentials&lt;/strong> configuration
&lt;figure id="figure-application-type">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image05_hua3f1e6835828f9913c5da4ff7e1cdd7d_81305_2000x2000_fit_lanczos_2.png" data-caption="application type">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image05_hua3f1e6835828f9913c5da4ff7e1cdd7d_81305_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
application type
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-authorized-domain">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image06_hu03570c4226fe56686e2827b4ed5593e3_214824_2000x2000_fit_lanczos_2.png" data-caption="authorized domain">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image06_hu03570c4226fe56686e2827b4ed5593e3_214824_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1053">
&lt;/a>
&lt;figcaption>
authorized domain
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Click &lt;strong>Create credentials&lt;/strong> and select &lt;strong>OAuth client ID&lt;/strong>
&lt;figure id="figure-create-credentials">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image07_hu576e1bcbcd94c0cada91b5c20e0aee07_102242_2000x2000_fit_lanczos_2.png" data-caption="create credentials">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image07_hu576e1bcbcd94c0cada91b5c20e0aee07_102242_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1053">
&lt;/a>
&lt;figcaption>
create credentials
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>Select &lt;strong>Web application&lt;/strong> in &lt;strong>Application type&lt;/strong> and choose the &lt;strong>Name&lt;/strong>. In &lt;strong>Authorized JavaScript origins&lt;/strong> add the URI of your OCP web console. In &lt;strong>Authorized redirect URIs&lt;/strong> add your callback URI. In OCP 3.11 your callback URI should be: &lt;strong>&lt;code>https://&amp;lt;master&amp;gt;/oauth2callback/&amp;lt;identityProviderName&amp;gt;&lt;/code>&lt;/strong>. The &lt;strong>identityProviderName&lt;/strong> must match the name we&amp;rsquo;ll configure in OpenShift. Then click &lt;strong>create&lt;/strong>. A popup will show you the &lt;strong>client ID&lt;/strong> and the &lt;strong>client secret&lt;/strong>. Save that information because we&amp;rsquo;ll need it later to set up OpenShift
&lt;figure id="figure-web-application">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image08_hu4531b58442898cdc612b3daec369365b_128741_2000x2000_fit_lanczos_2.png" data-caption="web application">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image08_hu4531b58442898cdc612b3daec369365b_128741_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1053">
&lt;/a>
&lt;figcaption>
web application
&lt;/figcaption>
&lt;/figure>
&lt;figure id="figure-credentials">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image09_hufd3f1eb3c71e1badc7cbd8ebfd343622_114304_2000x2000_fit_lanczos_2.png" data-caption="credentials">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image09_hufd3f1eb3c71e1badc7cbd8ebfd343622_114304_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1053">
&lt;/a>
&lt;figcaption>
credentials
&lt;/figcaption>
&lt;/figure>
&lt;/p>
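&lt;p>As a concrete example, with a master reachable at the hypothetical hostname &lt;strong>master.example.com&lt;/strong> (OCP 3.11 defaults the web console and API to port 8443) and an identity provider named &lt;strong>RedHat&lt;/strong>, the values would be:&lt;/p>

```text
# Hypothetical values for the Google OAuth client
Authorized JavaScript origins: https://master.example.com:8443
Authorized redirect URIs:      https://master.example.com:8443/oauth2callback/RedHat
```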
&lt;p>Now it&amp;rsquo;s time to configure our OpenShift.
The following procedure must be done on &lt;strong>ALL&lt;/strong> the masters of the cluster.&lt;/p>
&lt;p>You must log in to the master server and modify the &lt;strong>/etc/origin/master/master-config.yaml&lt;/strong> file, adding this snippet under the section &lt;strong>identityProviders&lt;/strong>&lt;/p>
&lt;pre>&lt;code>  - name: RedHat
    challenge: false
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: GoogleIdentityProvider
      clientID: &amp;quot;xxx&amp;quot;
      clientSecret: &amp;quot;xxx&amp;quot;
      hostedDomain: &amp;quot;redhat.com&amp;quot;
&lt;/code>&lt;/pre>
&lt;p>&lt;strong>name&lt;/strong> must be the same as the &lt;strong>IdentityProviderName&lt;/strong> we have configured in the callback URI&lt;/p>
&lt;p>&lt;strong>clientID&lt;/strong> and &lt;strong>clientSecret&lt;/strong> are the info we got in the &lt;strong>credentials&lt;/strong> setup in google.&lt;/p>
&lt;p>After that, restart &lt;strong>api&lt;/strong> and &lt;strong>controllers&lt;/strong>&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# master-restart api api
2
[root@ocp-master1 ~]# master-restart controllers controllers
2
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>Now we can check that everything works. Open your OCP web console in a browser and select the &lt;strong>RedHat&lt;/strong> identity provider
&lt;figure id="figure-oauth-page">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image10_hu839ab8518ef279d89bbef369a7dd57c7_64779_2000x2000_fit_lanczos_2.png" data-caption="oauth page">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image10_hu839ab8518ef279d89bbef369a7dd57c7_64779_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1053">
&lt;/a>
&lt;figcaption>
oauth page
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>You&amp;rsquo;ll be redirected to the &lt;strong>RED HAT INTERNAL SSO&lt;/strong>
&lt;figure id="figure-sso-page">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image11_hu362bb386e16c0f16a194e0e87aaf5270_131251_2000x2000_fit_lanczos_2.png" data-caption="sso page">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image11_hu362bb386e16c0f16a194e0e87aaf5270_131251_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
sso page
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>And finally you&amp;rsquo;ll have access to your OpenShift
&lt;figure id="figure-webconsole">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image12_hu68c835f028cbfba169dcda78333a7220_333960_2000x2000_fit_lanczos_2.png" data-caption="webconsole">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image12_hu68c835f028cbfba169dcda78333a7220_333960_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1080">
&lt;/a>
&lt;figcaption>
webconsole
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# oc get user
NAME                  UID                                    FULL NAME         IDENTITIES
aspagnol@redhat.com   f2e04e82-40d9-11e9-ac71-005056a802f7   Andrea Spagnolo   RedHat:108476506439924310236
[root@ocp-master1 ~]# oc get identity
NAME                           IDP NAME   IDP USER NAME           USER NAME             USER UID
RedHat:108476506439924310236   RedHat     108476506439924310236   aspagnol@redhat.com   f2e04e82-40d9-11e9-ac71-005056a802f7
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>Now you can manage your users directly in OpenShift and, for example, create an admin group and add the users&lt;/p>
&lt;pre>&lt;code>[root@ocp-master1 ~]# oc adm groups new admins
group.user.openshift.io/admins created
[root@ocp-master1 ~]# oc adm policy add-cluster-role-to-group cluster-admin admins
cluster role &amp;quot;cluster-admin&amp;quot; added: &amp;quot;admins&amp;quot;
[root@ocp-master1 ~]# oc adm groups add-users admins aspagnol@redhat.com
group &amp;quot;admins&amp;quot; added: &amp;quot;aspagnol@redhat.com&amp;quot;
[root@ocp-master1 ~]# oc describe groups admins
Name:          admins
Created:     About a minute ago
Labels:          &amp;lt;none&amp;gt;
Annotations:     &amp;lt;none&amp;gt;
Users:          aspagnol@redhat.com
[root@ocp-master1 ~]#
&lt;/code>&lt;/pre>
&lt;p>And we can check in the webconsole
&lt;figure id="figure-webconsole">
&lt;a data-fancybox="" href="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image13_hu8cf877d91d41146558a71c6423f1d258_343294_2000x2000_fit_lanczos_2.png" data-caption="webconsole">
&lt;img data-src="/post/google-oauth-as-identity-provider-with-red-hat-login-in-ocp-311/images/image13_hu8cf877d91d41146558a71c6423f1d258_343294_2000x2000_fit_lanczos_2.png" class="lazyload" alt="" width="1920" height="1053">
&lt;/a>
&lt;figcaption>
webconsole
&lt;/figcaption>
&lt;/figure>
&lt;/p>
&lt;p>You can find the complete documentation about the Identity Providers in OCP &lt;a href="https://docs.openshift.com/container-platform/3.11/install_config/configuring_authentication.html" target="_blank" rel="noopener">here&lt;/a>.
You can also configure the inventory file to add the GoogleIdentityProvider directly during the installation of OCP&lt;/p>
&lt;pre>&lt;code>openshift_master_identity_providers=[{'name': 'RedHat', 'challenge': 'false', 'login': 'true', 'kind': 'GoogleIdentityProvider', 'clientID': 'xxx', 'clientSecret': 'xxx', 'hostedDomain': 'redhat.com'}]
&lt;/code>&lt;/pre>
&lt;h2 id="please-note">Please Note&lt;/h2>
&lt;p>If your OpenShift masters need a proxy to reach the internet, the proxy &lt;strong>MUST&lt;/strong> allow &lt;strong>&lt;a href="https://www.googleapis.com">https://www.googleapis.com&lt;/a>&lt;/strong> because the master needs it to get the OAuth2 token&lt;/p>
&lt;p>That&amp;rsquo;s All!&lt;/p></description></item></channel></rss>