AzureDisk CSI Drivers on AzureStack Hub (AKS) Part 1
If you have read my previous article, you will have a brief understanding of how we can protect AKS Persistent Workloads on Azure using @ProjectVelero and DellEMC PowerProtect Datamanager.
Velero and PowerProtect Datamanager Kubernetes Protection depends on the Container Storage Interface (CSI) for persistent Volumes.
We have Successfully qualified Azure AKS against PowerProtect 19.6 with CSI Driver version 0.7.
With "if it runs on Azure, it should run on AzureStack Hub" in mind,
I was keen to get CSI running on AzureStack Hub AKS.
Well, we all know, AzureStack Hub is like Azure, but different, so it was a journey …
What works, how it works and what is/was missing
Before we start, let´s get some basics.
What was missing
AzureStack Hub allows you to deploy Kubernetes Clusters using the AKS Engine.
AKS Engine is a legacy tool that creates ARM Templates to deploy Kubernetes Clusters.
While Public Azure AKS Clusters will transition to Cluster API ( CAPZ ), AzureStack Hub only supports AKS-Engine.
The Current ( officially Supported ) version of AKS-Engine for AzureStack Hub is v0.55.4.
It allows for Persistent Volumes, however, they would use the InTree Volume Plugin.
In order to make use of the Container Storage Interface (CSI), we first would need a CSI Driver that is able to talk to AzureStack Hub.
When I tried to implement the Azure CSI Drivers on AzureStack Hub last year, I essentially failed because of a ton of Certificate and API Issues.
With PowerProtect official Support for Azure, I started to dig into the CSI Drivers again.
I browsed through the existing Github Issues and PR´s, and found that at least some People are working on it.
And finally I got in touch with Andy Zhang, who maintains the azuredisk-csi-driver at kubernetes-sigs.
From an initial "it should work", he connected me to the people doing E2E Tests for AzureStack Hub.
Within a 2-day turnaround, we managed to fix all API and SSL related issues, and FINALLY GOT A WORKING VERSION !
how it works
I am not going to explain how to deploy AKS-Engine based Clusters on AzureStack Hub, there is a good explanation on the Microsoft Documentation Website.
Once your Cluster is deployed, you need to deploy the latest azuredisk-csi-drivers.
Microsoft provides guidance here that Helm charts must be used to deploy the azuredisk-csi-drivers on AzureStack Hub.
Here is a Screenshot of the Helmchart from my Kubeapps Dashboard:
installing the driver
So first we add the Repo from Github:
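A sketch of that step, using the chart repo URL documented in the kubernetes-sigs/azuredisk-csi-driver README at the time of writing:

helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
helm repo update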
With the Repo added, we can now deploy the azuredisk-csi-driver Helm Chart.
When doing this, we will pass some settings to the deployment:
cloud=AzureStackCloud
This determines we run on AzureStack Hub and instructs the csi driver to load the Cloud Config from a File on the Master.
snapshot.enabled=true
This installs the csi-snapshot-controller that is required to expose Snapshot Functionality
We deploy the driver with:
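Roughly like this (a sketch; you may want to pin a specific chart version with --version, and the release name is my choice):

helm install azuredisk-csi-driver azuredisk-csi-driver/azuredisk-csi-driver \
  --namespace kube-system \
  --set cloud=AzureStackCloud \
  --set snapshot.enabled=true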
This should install:
A Replica Set for the csi-azuredisk-controller with 2 Pods containing the following containers: mcr.microsoft.com/k8s/csi/azuredisk-csi
mcr.microsoft.com/oss/kubernetes-csi/csi-attacher
mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner
mcr.microsoft.com/oss/kubernetes-csi/csi-resizer
mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter
mcr.microsoft.com/oss/kubernetes-csi/livenessprobe
A Replica Set for the csi-snapshot-controller with 1 Pod:
One csi-azuredisk-node Pod per Node
and the corresponding CRD´s for the snapshotter
you can check the pods with
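For example (a quick filter on the kube-system namespace; you should see the controller, node and snapshot-controller pods listed above):

kubectl -n kube-system get pods -o wide | grep csi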
Adding the Storageclasses
When AKS is deployed using the Engine, most likely 3 Storageclasses are installed by the In-Tree Provider:
In order to make use of the CSI Storageclass, we need to add at least one new Storageclass:
create a class_csi.yaml with the following content
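A minimal sketch of such a class (the name managed-csi and the skuName value are my choices; adjust the SKU to what your AzureStack Hub offers):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuName: Standard_LRS   # AzureStack Hub offers Standard_LRS and Premium_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer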
and then run
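kubectl apply -f class_csi.yaml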
and check with
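kubectl get storageclasses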
optional: Add a snapshot Class
Similar to the Storage Class, we may want to add a Snapshot Class if we want to clone volumes.
apply the below config with:
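A sketch of such a VolumeSnapshotClass (the name, the file name and the incremental parameter are my example choices; the v1beta1 snapshot API matches the external-snapshotter shipped with the driver at that time):

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  incremental: "false"   # incremental snapshots are not available on AzureStack Hub

kubectl apply -f snapshotclass.yaml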
Testing stuff works
Following the Microsoft Documentation, create a Statefulset with Azure Disk Mount:
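The manifest from the Microsoft documentation boils down to something like this minimal sketch (image, names, size and the managed-csi class name are my example choices):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-azuredisk
spec:
  serviceName: statefulset-azuredisk
  replicas: 1
  selector:
    matchLabels:
      app: statefulset-azuredisk
  template:
    metadata:
      labels:
        app: statefulset-azuredisk
    spec:
      containers:
        - name: statefulset-azuredisk
          image: nginx
          volumeMounts:
            - name: persistent-storage
              mountPath: /mnt/azuredisk
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi
        resources:
          requests:
            storage: 10Gi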
Verify the deployment with
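kubectl get statefulset,pods,pvc,pv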
The PVC will show the identical Volume Name as the Disk Name from The Portal / CLI
You should now have a running azuredisk-csi-driver Environment on your AzureStack Hub.
Stay Tuned for Part 2 including DataProtection with PowerProtect Datamanager …
Using DELLEMC Powerprotect to Backup and Protect Managed AKS Clusters on Azure
This month we released the new PowerProtect Datamanager 19.6
Along with new and improved feature sets, we also released our first version of PPDM to the Azure Marketplace.
This allows Organizations to Protect the following workloads natively on Azure:
Vanilla Kubernetes and AKS
Applications (Oracle, SQL, SAP Hana)
Windows and Linux FS
Todays Blogpost will focus on the Protection of Managed Azure Kubernetes Service, AKS.
We will do so by first creating Protection Policies and adding Namespace Assets to them, and in a second step adding Namespaces automatically from Kubernetes Namespace Labels using Protection Rules.
In order to get started with PPDM on Azure, we require 2 Solutions to be deployed to Azure: PowerProtect Data Manager (PPDM) and PowerProtect DD Virtual Edition (PPDD).
Yes, we got you covered. Our Marketplace Template deploys PPDM and PPDD in a One Stop Shopping Experience to your Environment.
Simply type PPDM into the Azure Search and it takes you directly to the Dell EMC PowerProtect Data Manager and Dell EMC PowerProtect DD Virtual Edition Marketplace Item.
PPDM 19.6 Deployment
The Deployment will only allow you to select validated Machine Types, and will deploy the DataDomain using ATOS (Active Tier on Object Store).
I am not going into the details of basic PPDM or PPDD Configuration, so please refer to
our PowerProtect Data Manager Azure Deployment Guide,
which takes you to all the details you may want/need to configure.
Using CLI ? We got you covered. Simply download the ARM Template using the Marketplace Wizard and you are good to go
You can always get a list of all DELLEMC Marketplace Items using
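For example (a sketch; the publisher name dellemc is an assumption, check the Marketplace listing if it differs in your cloud):

az vm image list --publisher dellemc --all --output table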
If you feel like terraforming the above, I have some templates ready to try in my terraforming DPS main repository. They are pretty modular and also cover Avamar and Networker. Feel free to reach out to me on how to use them.
Prepare for our First AKS Cluster
Assuming you followed the instructions from the PPDM Deployment Guide, we will now deploy our first AKS Cluster to Azure.
As of the date of this article, AKS Clusters using CSI must be deployed from the AZ CLI.
If this is the first AKS Cluster using CSI in your Subscription, you will need to enable the feature using:
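Something along these lines (the feature name is the one used during the CSI preview; check the current docs if it has changed):

az feature register --namespace "Microsoft.ContainerService" --name "EnableAzureDiskFileCSIDriver"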
You can query the state using:
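az feature show --namespace "Microsoft.ContainerService" --name "EnableAzureDiskFileCSIDriver" --query properties.state -o tsv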
Once finished, we register the Provider with:
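az provider register --namespace Microsoft.ContainerService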
But we also need to update our AZ CLI to support the latest extensions for AKS. Therefore, run:
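az extension add --name aks-preview
az extension update --name aks-preview   # if the extension was already installed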
Deploy the AKS Cluster
Deploying the AKS Cluster creates a Service Principal in the Azure AD on every run.
You might want to use the same Service Principal again for Future Deployments, or Cleanup the SP after ( as it will not be deleted from AzureAD ).
If not already done, log in to Azure from the AZ CLI. There are two methods, depending on your workflow:
Using Device Login (good to Create the SP for RBAC):
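az login --use-device-code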
Using a limited Service Principal, with already configured SP for AKS:
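az login --service-principal --username <appId> --password <client-secret> --tenant <tenant-id>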
So we are good to create our first AKS Cluster.
Make sure you are scoped to the correct Subscription:
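A sketch of the whole create step (resource group, cluster name and location are my example choices; the --aks-custom-headers value is the preview switch that enabled the CSI drivers at the time; drop --service-principal/--client-secret to let the CLI create a new SP for you):

az account set --subscription "<subscription-id>"
az group create --name ppdm-aks --location westeurope
az aks create --resource-group ppdm-aks --name ppdm-aks \
  --node-count 3 \
  --generate-ssh-keys \
  --service-principal <appId> --client-secret <client-secret> \
  --aks-custom-headers EnableAzureDiskFileCSIDriver=true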
Once the deployment is done, we can get the Kubernetes Config for kubectl using:
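az aks get-credentials --resource-group ppdm-aks --name ppdm-aks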
In order to use Snapshots with the CSI Driver, we need to deploy the Snapshot Storageclass:
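A sketch of such a class (name and parameters are my choices; this assumes the snapshot CRDs and controller were installed by the CSI preview):

cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  incremental: "true"
EOF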
With that, the Preparation for AKS using CSI is done.
You can view your new StorageClasses with:
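kubectl get storageclasses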
Add Kubernetes Secret for PPDM
In order to connect to AKS from PPDM, we need to create a Service Account with role-based access.
A basic RBAC Template can be applied with:
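The actual template ships with the PPDM documentation; the sketch below only illustrates the idea (a ppdm-admin service account bound to cluster-admin, which is broader than the documented PPDM role):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ppdm-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ppdm-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: ppdm-admin
    namespace: kube-system
EOF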
After, you can export the Token to be used for PPDM with:
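A sketch, assuming the ppdm-admin service account from above lives in kube-system (on clusters of that vintage the token secret is created automatically):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount ppdm-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode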
This is needed for the Credentials we Create in PPDM
Now sign in to PPDM and go to Credentials:
Add a Credential of Type Kubernetes, with the name of the secret we created in AKS, in the example it is ppdm-admin.
Copy in the Service token you got from above:
Add AKS Cluster to PPDM
Now we are good to add the new AKS Cluster to PPDM. Therefore, we go to the new Asset Sources Dashboard in PPDM:
Click on the Kubernetes Source to enable Kubernetes Assets.
After clicking OK on the Instructions, click Add on the Kubernetes Asset Source.
Fill in the Information for your AKS Cluster, and use the ppdm-admin Credentials:
Click on Verify Certificate to import the AKS API Server:
Then Click save to add the AKS Cluster. The AKS Cluster will be discovered automatically for us now, so go over to
Assets:
You will see that 2 new Namespaces have been deployed, velero-ppdm and powerprotect.
We are leveraging upstream velero and added support for DataDomain Boost Protocol.
In my example, I already added a mysql application using the Storageclass managed-csi for the PV Claim; you can use my Template from here:
You can verify the Storage Class in PPDM by clicking on the “exclusions” link from the namespace view in PPDM:
We now can create a Protection Policy. Therefore, go to Protection –> Protection Policies, and click Add to add your first policy
The steps are similar to those for all other Protection Policies.
Make sure to select
Type Kubernetes
Purpose Crash Consistent
Select the Asset ( Namespace ) with the Managed CSI
Add at least a Schedule for Type Backup
Once done, monitor the System Job to finish Configuring the Protection Policy:
We can now start our First Protection by clicking Backup Now on the Protection Policy:
Once the Backup kicked in, you can monitor the job by viewing the Protection Job from the Jobs Menu:
As a Kubernetes User, you can also use your favorite Kubernetes tools to monitor what is happening behind the Curtains.
In your Application namespace ( here, mysql ), PowerProtect will create a “c-proxy”, which is essentially a datamover to claim the Snapshot PV:
I am using K9s to easily dive into Pods and Logs:
kubectl command:
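For example (assuming the application namespace is called mysql, as above; look for a pod whose name starts with cproxy):

kubectl -n mysql get pods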
A PVC will be created for the MYSQL Snapshot. You can verify that by viewing the PVC´s:
kubectl command:
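kubectl -n mysql get pvc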
See the details of the snapshot claiming by c-proxy:
kubectl command:
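A sketch; the PVC name is a placeholder for the snapshot PVC listed in the previous step:

kubectl -n mysql describe pvc <name-of-the-snapshot-pvc>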
You can now browse your Backups from the PPDM UI by selecting Assets –> Kubernetes Tab –> <your asset> –> Copies.
Also, as a Kubernetes User, you can use the
kubectl command :
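For example, to list the VolumeSnapshots behind those copies (a sketch, assuming the mysql namespace):

kubectl -n mysql get volumesnapshots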
Automated Protection using Namespace Labels
One of the great features is the Automated Asset Selection for Kubernetes Assets using Namespace Labels.
In the previous example we created a Protection Policy and added a Kubernetes Namespace Asset to it
to be protected.
Now we are adding K8S assets automatically by using Protection Rules and Kubernetes Labels.
For that, we select Protection Rules on PPDM.
On the Kubernetes Tab, we click on add to create a new Rule.
Select your existing Policy and Click on Next.
Configure an Asset filter with
Field: Namespace Label Includes
in my example, I am using the Label ppdm_policy=ppdm_gold
Now we need to create the Namespace and an Application
I use a Wordpress deployment in my example. For this, create a new Directory on your machine and change into it
Create the Namespace template:
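For example (the namespace name wordpress and the file name namespace.yaml are my choices; the label matches the Protection Rule filter from above):

apiVersion: v1
kind: Namespace
metadata:
  name: wordpress
  labels:
    ppdm_policy: ppdm_gold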
Create a Kustomization File:
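A sketch of the kustomization.yaml (file names are assumptions; the two deployment files are the Wordpress templates downloaded in the next step):

namespace: wordpress
resources:
  - namespace.yaml
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
secretGenerator:
  - name: mysql-pass
    literals:
      - password=<choose-a-password>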
Download my Wordpress Templates:
with the 4 files now in place, we can run the Deployment with:
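kubectl apply -k .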
I am using a Concourse Pipeline to do the above, but your output may look similar:
We can Verify the Namespace from K9s /kubectl/azure
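For example (assuming the wordpress namespace name from above):

kubectl get namespace wordpress --show-labels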
Now we need to go to your PPDM and manually re-discover the AKS Cluster (by default, discovery runs every 15 Minutes).
Once done, we go to Protection –> Protection Rules and manually run the Protection Rule we created earlier:
After Running, the new Asset is Assigned to the Protection Policy
We can now go to our Protection Policy, and the Asset Count should include the new Asset.
You can Click edit to see / verify Wordpress has been Added :
The “Manage Exclusions” link in PVC´s Excluded Column will show you the PVC´s in the Wordpress Asset. It should be 2 PVC´s of type managed-csi:
Run the Protection Policy as before, but now only select the New Asset to be Backed up:
Troubleshooting
Backups fail
In case your Backups fail, redeploy the powerprotect-controller
by deleting the POD:
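A sketch of that, assuming the controller runs in the powerprotect namespace created during the asset source registration; the pod name is a placeholder:

kubectl -n powerprotect get pods
kubectl -n powerprotect delete pod <name-of-the-powerprotect-controller-pod>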
Deploying PowerProtect Datamanager (PPDM) to vSphere using govc and Powershell
When it comes to deploying PowerProtect Datamanager, we have a variety of options, for example:
Terraform
Ansible
OVA deployment from vCenter UI
Saltstack
bash / Concourse
just to name a few.
In this Post I focus on a Powershell Deployment leveraging VMware govc and my PPDM Powershell Module.
Other methods will follow here over the next couple of days …
Requirements
Before we start the deployment, we need to check that we have
govc >= 0.23, installed from the Github Releases and available in the path as govc
my Powershell module for PPDM ( minimum: 0.19.6.2 ), installed from PPDM Powershell using:
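For example (assuming the module is installed from the PowerShell Gallery; alternatively clone it from the Github repo):

Install-Module PPDM-pwsh -MinimumVersion 0.19.6.2 -Scope CurrentUser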
Step 1: Connecting to vSphere using govc
From a Powershell, we first need to connect to our vSphere Virtual Center.
By using the following code, we can securely create a connection:
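A minimal sketch using Get-Credential, so the password never ends up in your command history (the vCenter FQDN is an example):

$viCredential      = Get-Credential -Message "vCenter credentials"
$env:GOVC_URL      = "vcenter.home.lab"
$env:GOVC_USERNAME = $viCredential.UserName
$env:GOVC_PASSWORD = $viCredential.GetNetworkCredential().Password
$env:GOVC_INSECURE = "true"   # allow self-signed vCenter certificates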
Step 2: deploying Powerprotect Datamanager ova using govc from Powershell
Requirement:
download the latest Powerprotect DataManager from DELLEMC Support ( login required )
first of all, we set our govc environment to have the Following Variables
( complete code snippet of step 2 below )
We then can connect to our vSphere Environment:
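For example (datacenter, datastore and resource pool names are placeholders from my lab):

$env:GOVC_DATACENTER    = "Datacenter"
$env:GOVC_DATASTORE     = "vsanDatastore"
$env:GOVC_RESOURCE_POOL = "Resources"
govc about   # quick connection test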
Then we need to import the Virtual Appliance Specification from the ova using govc import.spec.
The command would look like this:
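(the ova file name is an example for the downloaded appliance)

$ova  = ".\dellemc-ppdm-sw-19.6.0.ova"
$spec = govc import.spec $ova | ConvertFrom-Json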
Once we have the Configuration Data, we will change the vami keys in the “Property Mappings” to our desired Values
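A sketch; inspect $spec.PropertyMapping first, as the exact vami key names depend on the appliance version, and the values below are examples from my lab:

$spec.PropertyMapping | Select-Object Key, Value   # list the available keys

foreach ($property in $spec.PropertyMapping) {
    switch -Wildcard ($property.Key) {
        "vami.ip0.*"      { $property.Value = "192.168.100.50" }
        "vami.netmask0.*" { $property.Value = "255.255.255.0" }
        "vami.gateway.*"  { $property.Value = "192.168.100.1" }
        "vami.DNS.*"      { $property.Value = "192.168.100.1" }
        "vami.fqdn.*"     { $property.Value = "ppdm-01.home.lab" }
    }
}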
Now we need to import the OVA using govc import.ova with the settings we just created:
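(writing the modified spec to a json file and referencing it via -options; the VM name is an example)

$spec | ConvertTo-Json -Depth 10 | Out-File -Encoding ascii .\ppdm-options.json
govc import.ova -options=.\ppdm-options.json -name ppdm-01 $ova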
And change to the Correct “VM Network” for ethernet-0
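(the portgroup name "VM Network" is an example for your environment)

govc vm.network.change -vm ppdm-01 -net "VM Network" ethernet-0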
Now, we can Power on the vm using govc vm.power …
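govc vm.power -on ppdm-01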
… and wait for the Powerprotect Datamanager Services to be up and running.
In an Automated Scenario, one could query the URL http://fqdn.of.ppdm:443/#/fresh until receiving a 200 ok message from the Webserver ( see below script listing)
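A sketch of such a wait loop (PowerShell 7's -SkipCertificateCheck is used to ignore the appliance's self-signed certificate; the FQDN is an example):

do {
    Start-Sleep -Seconds 30
    try   { $response = Invoke-WebRequest -Uri "https://ppdm-01.home.lab/#/fresh" -UseBasicParsing -SkipCertificateCheck }
    catch { $response = $null }
} until ($response.StatusCode -eq 200)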
Step 3: Configure PPDM using PPDM-pwsh
If not already done, load the Modules by:
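Import-Module PPDM-pwsh
Get-Module PPDM-pwsh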
The first step is to connect to the PPDM API.
You will be asked for the username admin and the Password of the admin user.
We will retrieve a Bearer Token from the API, that will be used automatically for Subsequent requests in the Current Powershell Session.
PPDM-pwsh will also figure out your Powershell Version and therefore use different methods to accept non-trusted certificates ( -trustCert ).
Once connected, we need to Accept the EULA for PPDM by using
The next step is to configure PPDM. For that, we need to specify Timezone, NTP Server and the new Password(s)
to get a list of timezones, run
In our example, we use Europe/Berlin.
Configuring the PPDM does only require 3 Parameters:
Timezone
Initial Password(s)
a List of NTP Sever(s)
We can use a single Powershell Command to start the Configuration Process:
It will take up to 10 Minutes for PPDM to finish.
We can Monitor the Success Status with
In an Automation, we would wait for percentageCompleted -eq 100
You can now visit the PPDM Homepage from your Webbrowser to configure DataDomain, add a vCenter, add Kubernetes Clusters and more.
In my next Post we will do so as well from Powershell … stay tuned
This is an explanation of my ARM template for Harbor Container Registry. For details on Harbor, head over to Project Harbor.
The Template will deploy an Ubuntu 18.04 VM with Docker Engine and the official Harbor release from the GitHub Repo.
You can opt to have selfsigned certificates created automatically for you OR use custom Certificates from your CA.
Before we start the deployment, we need to check that we have
an ssh Public Key in place ( I default to ~/.ssh/id_rsa.pub in my code samples)
connection to AzureStack from AZ CLI
Ubuntu 18.04 LTS Marketplace Image on Azurestack
Custom Script Extension for Linux on Azurestack
internet connection to dockerhub, canonical repo´s and GitHub
In the following examples, I deploy 2 Registries: one called devregistry, with self-signed Certificates, and one called registry, which will become my Production Registry using Let´s encrypt Certificates.
Testing Deployment and Parameters
First we need to set a variable before we start or test the Deployment.
The Variable DNS_LABEL_PREFIX marks the external hostname for the VM and will be registered with Azurestack´s DNS, eg
DNS_LABEL_PREFIX.location.cloudapp.dnsdomain
DNS_LABEL_PREFIX=devregistry # this should be the azurestack cloudapp dns name , e.g. Harbor, Mandatory
The name will also be used in the Generated Certificate for Self Signed Certs
If you are deploying using your own Certificates, you will also have to provide the external hostname the Harbor Registry will use and that you created your Certificate for ( I am using a wildcard Cert for my domain here ):
EXTERNAL_HOSTNAME=registry.home.labbuildr.com #external dns name
You can validate your deployment with:
for Self Signed
DNS_LABEL_PREFIX=devregistry # this should be the azurestack cloudapp dns name , e.g. Harbor, Mandatory
az group create --name ${DNS_LABEL_PREFIX:?variable is empty} --location local
az deployment group validate --resource-group ${DNS_LABEL_PREFIX:?variable is empty} \
  --template-uri "https://raw.githubusercontent.com/bottkars/201-azurestack-harbor-registry/master/azuredeploy.json" \
  --parameters \
  sshKeyData="$(cat ~/.ssh/id_rsa.pub)" \
  HostDNSLabelPrefix=${DNS_LABEL_PREFIX:?variable is empty}
Note: I am using an inline variable check with :? to validate that the variables are set. This is one of my best practices to not pass empty values to Parameters that are not validated / are allowed to be empty.
for user provided Certificates
For user provided Certificates, you also need to provide your
hostCert, the Certificate content of your Host or Domain Wildcard Cert
certKey, the content of the matching Key for the above Certificate
and, if your Certificate is not issued by one of the Mozilla trusted CAs,
caCert, the Certificate content of your root CA for the docker engine
In my Example, I use Let´s encrypt acme Certs and pass them via bash cat inline. Make sure to use quotes, as the Certificates are multiline values:
DNS_LABEL_PREFIX=registry # dns host label prefix
EXTERNAL_HOSTNAME=registry.home.labbuildr.com # external dns name
az group create --name ${DNS_LABEL_PREFIX:?variable is empty} --location local
az deployment group validate --resource-group ${DNS_LABEL_PREFIX:?variable is empty} \
  --template-uri "https://raw.githubusercontent.com/bottkars/201-azurestack-harbor-registry/master/azuredeploy.json" \
  --parameters \
  sshKeyData="$(cat ~/.ssh/id_rsa.pub)" \
  HostDNSLabelPrefix=${DNS_LABEL_PREFIX:?variable is empty} \
  caCert="$(cat ~/workspace/.acme.sh/home.labbuildr.com/ca.cer)" \
  hostCert="$(cat ~/workspace/.acme.sh/home.labbuildr.com/home.labbuildr.com.cer)" \
  certKey="$(cat ~/workspace/.acme.sh/home.labbuildr.com/home.labbuildr.com.key)" \
  externalHostname=${EXTERNAL_HOSTNAME:?variable is empty}
If there are no errors from above commands, we should be ready to start the deployment
starting the Deployment
start deployment for selfsigned registry
az deployment group create --resource-group ${DNS_LABEL_PREFIX:?variable is empty} \
  --template-uri "https://raw.githubusercontent.com/bottkars/201-azurestack-harbor-registry/master/azuredeploy.json" \
  --parameters \
  sshKeyData="$(cat ~/.ssh/id_rsa.pub)" \
  HostDNSLabelPrefix=${DNS_LABEL_PREFIX:?variable is empty}
start the deployment for registry using your own CA Certs:
az deployment group create --resource-group ${DNS_LABEL_PREFIX:?variable is empty} \
  --template-uri "https://raw.githubusercontent.com/bottkars/201-azurestack-harbor-registry/master/azuredeploy.json" \
  --parameters \
  sshKeyData="$(cat ~/.ssh/id_rsa.pub)" \
  HostDNSLabelPrefix=${DNS_LABEL_PREFIX:?variable is empty} \
  caCert="$(cat ~/workspace/.acme.sh/home.labbuildr.com/ca.cer)" \
  hostCert="$(cat ~/workspace/.acme.sh/home.labbuildr.com/home.labbuildr.com.cer)" \
  certKey="$(cat ~/workspace/.acme.sh/home.labbuildr.com/home.labbuildr.com.key)" \
  externalHostname=${EXTERNAL_HOSTNAME:?variable is empty}
validation / monitoring the installation
You can monitor the deployment in the Azurestack User Portal. The Resource group will be the name of the DNS_LABEL_PREFIX
once the Public IP is online, you can also ssh into the Harbor host to monitor the Custom Script execution:
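For example (the admin user name is whatever you set in the template parameters, and the FQDN follows the DNS_LABEL_PREFIX pattern above; the second command follows the deployment log described below):

ssh <admin-user>@devregistry.local.cloudapp.azurestack.external
tail -f ~/conductor/logs/deploy_harbor.sh.*.log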
there are 2 logs on the Harbor host that you may want to examine
install.log, the log file of the custom script installer
~/conductor/logs/deploy_harbor.sh.*.log, the log file of my harbor deployment
the installation should be successful once you see
✔ ----Harbor has been installed and started successfully.----
Testing the Registry
Logging into UI
First we log in to our Registry. For the DevRegistry, use your Browser and just browse to https://devregistry.local.cloudapp.azurestack.external (replace with your Azurestack region and Domain).
Chrome Users: as we use a selfsigned cert, you might need to type thisisunsafe in the Browser window.
The Login for the registry is :
username: admin ( if not changed in the deployment Parameter)
password: Harbor12345 ( I recommend changing the password, as it is stored in cleartext in the Harbor installation template)
If you are using your own CA and specified a different EXTERNAL_HOSTNAME, you might need to create a DNS A Record pointing to your Harbor's external IP Address.
logging in and pushing an image from docker cli
To log in from the docker CLI it might be necessary to put the root CA in Docker's /etc/docker/certs.d directory.
On the Harbor Host, my custom installer has already done this for you:
ls /etc/docker/certs.d/registry.home.labbuildr.com/ca.crt
for Kubernetes Clusters, the same rule applies. I have created a DaemonSet for my Kubernetes Deployments, more on that in my next post.
you can test the login with
docker login registry.home.labbuildr.com -u admin -p Harbor12345
once logged in, we can try to tag one of the local docker images for our registry:
docker images
docker tag goharbor/harbor-core:v1.10.1 registry.home.labbuildr.com/library/harbor-core:v1.10.1
docker push registry.home.labbuildr.com/library/harbor-core:v1.10.1
Note: the default Project on our Harbor registry is called library; you can create Projects for your needs using the Harbor UI or API.
You can verify the Image Push Operation by Browsing to the Library from the UI:
The template is currently available on my Git Repo: bottkars GiT
this is a short run through my cf-for-k8s deployment on azurestack on AKS.
it will be updated continuously. to understand the scripts I use, see the included links :-)
Before getting started:
cf-for-k8s installation is pretty straightforward. In this example I am using Concourse CI, and the deployment scripts are custom tasks based out of my github repo azs-concourse, where the pipeline used is platform automation.
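The call looks roughly like this (the Concourse target name and the pipeline name cf-for-k8s are my choices):

fly -t <your-target> set-pipeline -p cf-for-k8s \
  -c ${AKS_PIPELINE} \
  -l ${PLATFORM_VARS} \
  -l ${AKS_VARS}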
in the above call, the following aliases / variables are used:
AKS_PIPELINE: is the pipeline file
PLATFORM_VARS: Variables containing essential, pipeline-independent Environment Variables, e.g. AzureStack Endpoints ( starting with AZURE_ ) and general vars
this must resolve:
AKS_VARS : essentially, vars to control the AKS Engine ( cluster, size etc.. )
Example AKS_VARS:
azs_concourse_branch: tanzu
aks:
  team: aks
  bucket: aks
  resource_group: cf-for-k8s
  orchestrator_release: 1.15
  orchestrator_version: 1.15.4
  orchestrator_version_update: 1.16.1
  engine_tagfilter: "v0.43.([1])"
  master:
    dns_prefix: cf-for-k8s
    vmsize: Standard_D2_v2
    node_count: 3 # 1, 3 or 5, at least 3 for upgradeable
    distro: aks-ubuntu-16.04
  agent:
    0:
      vmsize: Standard_D3_v2
      node_count: 3
      distro: aks-ubuntu-16.04
      new_node_count: 6
      pool_name: linuxpool
      ostype: Linux
  ssh_public_key: ssh-rsa AAAAB...
we should now have a Paused cf-for-k8s Pipeline in our UI :
the Pipeline has the following Task Flow:
deploy-aks-cluster ( ca. 12 Minutes )
validate-aks-cluster ( sonobuoy basic validation, ca. 5 minutes)
install kubeapps ( default in my clusters, bitnami catalog for HELM Repo´s)
scale-aks-clusters ( for cf-for-k8s, we need to add some more nodes :-) )
deploy-cf-for-k8s
While the first 4 Tasks are default for my AKS Deployments, we will focus on the deploy-cf-for-k8s task
( note i will write about the AKS Engine Pipeline soon !)
deploy-cf-for-k8s
the deploy-cf-for-k8s task requires the following resources from either github or local storage:
azs-concourse (required tasks scripts)
bosh-cli-release ( latest version of bosh cli)
cf-for-k8s-master ( cf-for-k8s master branch )
yml2json-release ( yml to json converter)
platform-automation-image (base image to run scripts)
also, the following variables need to be passed:
<<: *azure_env                 # your azure-stack environment
DNS_DOMAIN: ((cf_k8s_domain))  # the cf domain
GCR_CRED: ((gcr_cred))         # credentials for gcr
where GCR_CRED contains the credentials to your Google Container Registry. You can provide them either from a secure store like credhub ( preferred way ); simply load the credentials JSON file obtained when creating the secret with:
credhub set -n /concourse/<main or team>/gcr_cred -t json -v "$(cat ../aks/your-project-storage-creds.json)"
or load the variable into the pipeline from a YAML file ( in this example, gcr.yaml ):
cf-for-k8s will be deployed using k14tools for an easy composable deployment.
the pipeline does that during the install
the pipeline may succeed with cf-for-k8s not yet finished deploying: the deployment time varies with multiple factors, including internet speed, so the kapp deployment may still be ongoing when the pipeline is finished.
to monitor the deployment on your machine, you can install the k14s tools following the instructions on their site.
kapp requires a kubeconfig file to access your cluster.
copy your kubeconfig file ( the deploy-aks task stores it on your s3 store after deployment)
i have an alias that copies my latest kubeconfig file:
get-kubeconfig
to monitor / inspect the deployment, run
kapp inspect -a cf
in short, once all pods in namespace cf-system are running, the system should be ready ( be aware, as there are daily changes, a deployment might fail)
k9s can give you a great overview of the running pods
connect to your cloudfoundry environment
cf-for-k8s deploys a Service of type LoadBalancer by default.
the Pipeline creates a DNS A record for the cf domain you specified for the pipeline.
the admin password is autogenerated and stored in the cf-values.yml that gets stored on your s3 location.
in my (direnv) environment, i receive the file / the credentials with
get-cfvalues
get-cfadmin
connecting to cf api and logging in
get-cfvalues
cf api api.cf.local.azurestack.external --skip-ssl-validation
cf auth admin $(get-cfadmin)
there are no orgs and spaces defined per default, so we are going to create:
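for example ( org and space names are arbitrary ):

cf create-org demo
cf create-space test -o demo
cf target -o demo -s test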
cf-for-k8s utilizes Cloud Native Buildpacks using kpack.
In essence, a watcher is running in the kpack namespace to monitor new build requests from the cloud controller.
a “build pod” will run the build process of detecting, analyzing, building and exporting the image.
once a new request is detected, the clusterbuilder will create an image and ship it to the (gcr) registry.
from there, you can always view and monitor the image build process by viewing the image and using the log_tail utility to view the builder logs:
kubectl get images --namespace cf-system
logs --image caf42222-dc42-45ce-b11e-7f81ae511e06 --namespace cf-system