<h1 id="7-career-skills-build-your-network">7 career skills: Build your network</h1>
<p><em>Carlos Camacho, 2023-10-03</em></p>
<p>I’m writing this blog post as a partial output of the work done with James Mernim in Red Hat’s mentoring program. We reviewed a framework called the “7 career skills”, and focused on one of them specifically: “building your network”.</p>
<h2 id="careers-skills-build-your-network">Careers skills: Build your network</h2>
<p>The “7 career skills” is a framework that is part of the official career coaching program at Red Hat. Its main goal is to provide a growth mindset and the tools to move forward in the direction the coachee decides to go.</p>
<p>The reasons for writing this post are many. Among them, I would like to introduce the “7 career skills” framework, starting with a description of the exercise called “building your network”, and to highlight the importance of having a set of well-defined career tools as part of an officially supported career coaching program.</p>
<p>The content described here is a written exercise designed to help individuals reflect on their career, gain insights, and develop a growth mindset. The exercise maps a set of relationships with individuals (personal, informational, or structural) that play a vital role in career development, and turns them into concrete “opportunities for growth”. This post aims to educate readers about the significance of these relationships and how they can be leveraged.</p>
<p>At Red Hat, a coaching framework called the “7 career skills” is commonly used. These skills are: Build your network, Stretch yourself, Adapt to change, Reflect and plan, Know yourself, Spot opportunities, and Build your brand.</p>
<p>Today I’m going to describe “Building your network”, and we will end with an exercise to assess how big and healthy your network is.</p>
<p>A career tool is a written exercise for a mentee or coachee to complete, assisting their thinking, insights, and reflection. These tools help build the mindset to move onwards by providing structure, focus, and, if possible, some lightbulb moments.</p>
<h2 id="mapping-your-network-of-career-supporters">Mapping your network of career supporters</h2>
<p>When we think about our career, people are a key differentiator: the right person in the right place will boost your motivation and help you thrive. Chances are that when you reflect on your career, you’ll identify particular people who have played a major role, in either a positive or a negative way. It can be family, friends, or co-workers who support us. Often, it’s a great manager who gives us a chance. We succeed through our relationships with others: “It’s usually how we get access to opportunities for career development and learning”.</p>
<h2 id="spoiler-alert">Spoiler alert!!!</h2>
<p>Did you watch the movie “Oppenheimer”? (If not, let me spoil the first 5 minutes for you…)</p>
<p>Robert Oppenheimer was a theoretical physicist studying at Cambridge’s Cavendish Laboratory
under a professor named Blackett. He didn’t seem happy working there “in the lab”, to the point
of adding cyanide to his professor’s apple. Then someone in his network, Niels Bohr, said to him:
“You don’t seem happy here”… “Get out of Cambridge and go somewhere they let you think”… “Where to go?”…
“Go with Born (University of Göttingen)”… From there, the movie builds its arguments and you
see Oppenheimer’s evolution over time.</p>
<hr />
<ul>
<li>The main goal of this exercise is to help you “find your Niels Bohr”.</li>
</ul>
<hr />
<p>There is also some interesting research and academic theory around how these networks function. Sociologist Mark Granovetter developed the idea of ‘strong’ and ‘weak’ ties within networks. Weak ties are relationships outside of your immediate circle, and they are more important for spotting opportunities and accessing new information. Granovetter also found that a person’s number of connections was less important than their diversity. This activity is designed to help you think about and understand your network. It draws on theory about the kinds of relationships we need to build to have the right kind of career support in place.</p>
<h2 id="relationships">Relationships</h2>
<p>There are three categories of relationships which can be helpful:</p>
<ul>
<li>Personal: Those who believe in you, listen to you and whom you trust. You can show ‘vulnerability’ and they can provide reassurance. Surround yourself with positive people. Those in your emotional support network will:
<ul>
<li>Be willing to listen to you.</li>
<li>Provide reassurance.</li>
<li>Be dependable.</li>
<li>Be a person to go to for advice.</li>
<li>Be a person you can trust and who trusts you.</li>
<li>Encourage you to be your best and have faith in your abilities.</li>
<li>Be concerned that you reach your goals.</li>
<li>Provide a positive but honest voice.</li>
</ul>
</li>
<li>
<p>Informational: This type of connection is about know-how. These are the people who keep you in the loop about what is going on, tell you about the unwritten rules and share their knowledge and experience with you. Those in your informational support network will:</p>
<ul>
<li>Share their know-how, knowledge and experience with you.</li>
<li>Keep you in touch with what is going on.</li>
<li>Decode unwritten laws.</li>
<li>Inform you of company policies and procedures that may affect you.</li>
</ul>
</li>
<li>
<p>Structural: These are the connectors to other functions. These people will endorse you, champion you, and help you be more visible. Find people who will mentor or champion you. Those in your structural support network will:</p>
<ul>
<li>Connect you to other functions.</li>
<li>Endorse you.</li>
<li>Champion you.</li>
<li>Help you be more visible.</li>
<li>Help you maximize your exposure.</li>
</ul>
</li>
</ul>
<h3 id="example-of-the-exercise-build-your-network">Example of the exercise “Build your network”</h3>
<p>Here is an example network map. The three types of
relationship are represented as a Venn diagram;
you’ll notice that there may be some overlap,
depending on how many people you are able to allocate
in the diagram.</p>
<p><img src="/static/build_network/network_map.png" alt="" /></p>
<p>Once you finish allocating all the people in the diagram
(write their names by hand where it says “Insert text”),
it should make an abstract idea of your support
network more concrete and understandable.</p>
<p>There are many ways to expand your network. Some
practical steps you can take to improve it include
contributing to other organizations, attending
conferences, and taking on stronger communication roles within
your own organization; internally, we also have the technical gigs program.
All of this is aimed at giving you practical steps
to grow your network.</p>
<p>In summary, this post aims to educate and guide readers on the importance of building and maintaining a supportive network in their careers. It introduces the “7 career skills” framework with an emphasis on building your network, along with a glimpse of theoretical and practical advice to help individuals strengthen their professional connections and ultimately advance in their careers.</p>
<h1 id="kubernetes-mlops-101-deployment-and-usage">Kubernetes MLOps 101: Deployment and usage</h1>
<p><em>Carlos Camacho, 2023-07-17</em></p>
<p><a href="https://www.databricks.com/glossary/mlops">MLOps</a> stands
for Machine Learning Operations. MLOps is a core
function of Machine Learning engineering, focused on streamlining the
process of taking machine learning models to production and then maintaining
and monitoring them. MLOps is a collaborative function, often comprising
data scientists, DevOps engineers, and IT.</p>
<p>Kubeflow is an open-source platform built on Kubernetes
designed to simplify the deployment and management of
machine learning (ML) workflows. It provides a set of
tools and frameworks that enable data scientists and ML engineers
to build, deploy, and scale ML models efficiently.</p>
<p>Kubeflow’s goal is to facilitate the adoption of machine learning
best practices and enable reproducibility, scalability, and
collaboration in ML workflows.</p>
<h2 id="tldr">TL;DR</h2>
<p>This post is an initial approach to deploying
some MLOps tools on top of OpenShift 4.12,
among them Kubeflow and the OpenDataHub project.
The goal of this tutorial is to play with the technology
and get a learning environment up as fast as possible.</p>
<h3 id="references-for-future-steps-workshops-and-activities">References for future steps, workshops, and activities</h3>
<ol>
<li><a href="https://developers.redhat.com/developer-sandbox/activities/use-rhods-to-master-nlp">https://developers.redhat.com/developer-sandbox/activities/use-rhods-to-master-nlp</a></li>
<li><a href="https://demo.redhat.com/">https://demo.redhat.com/</a> “MLOps with Red Hat OpenShift Data Science: Retail Coupon App Workshop”</li>
</ol>
<h2 id="1-deploying-the-infrastructure">1. Deploying the infrastructure</h2>
<h3 id="requirements-installation">Requirements installation</h3>
<p>KubeInit is a project that aims to simplify
the deployment of Kubernetes distributions.
It provides a set of Ansible playbooks and collections
to automate the installation and configuration of
Kubernetes clusters.</p>
<p>Install the required dependencies by running the following commands:</p>
<pre><code class="language-bash"># Install the requirements assuming python3/pip3 is installed
python3 -m pip install --upgrade pip \
shyaml \
ansible \
netaddr
# Clone the KubeInit repository and navigate to the project's directory
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit
# Install the Ansible collection requirements
ansible-galaxy collection install --force --requirements-file kubeinit/requirements.yml
# Build and install the collection
rm -rf ~/.ansible/collections/ansible_collections/kubeinit/kubeinit
ansible-galaxy collection build kubeinit --verbose --force --output-path releases/
ansible-galaxy collection install --force --force-with-deps releases/kubeinit-kubeinit-`cat kubeinit/galaxy.yml | shyaml get-value version`.tar.gz
</code></pre>
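The last <code>ansible-galaxy install</code> command above embeds the collection version read from <code>kubeinit/galaxy.yml</code>. As a stand-alone sketch of that filename construction (the <code>sample-galaxy.yml</code> file and its version number below are made up for illustration), the same value can be extracted with plain awk instead of <code>shyaml</code>:

```shell
# A sample galaxy.yml (made-up version number), standing in for kubeinit/galaxy.yml.
cat > sample-galaxy.yml << 'EOF'
namespace: kubeinit
name: kubeinit
version: 9.9.9
EOF
# Pure-awk equivalent of `shyaml get-value version`, avoiding the extra dependency.
version=$(awk -F': ' '$1 == "version" {print $2}' sample-galaxy.yml)
echo "releases/kubeinit-kubeinit-${version}.tar.gz"
```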
<h3 id="deploy-a-single-node-412-okd-cluster-as-our-development-environment">Deploy a single node 4.12 OKD cluster as our development environment</h3>
<p>This step will get us a single-node OpenShift 4.12 development environment.
From the hypervisor run:</p>
<pre><code class="language-bash"># Run the playbook
ansible-playbook \
-v --user root \
-e kubeinit_spec=okd-libvirt-1-0-1 \
-e hypervisor_hosts_spec='[{"ansible_host":"nyctea"},{"ansible_host":"tyto"}]' \
-e controller_node_disk_size='300G' \
-e controller_node_ram_size='88080384' \
./kubeinit/playbook.yml
</code></pre>
<p>To clean up the environment, include the <code>-e kubeinit_stop_after_task=task-cleanup-hypervisors</code> variable.</p>
<p>Depending on the value of <code>kubeinit_spec</code> we can choose between multiple K8s distributions,
determine how many controller or compute nodes to deploy, and decide how many hypervisors we would like
to spread the cluster guests across. For more information go to the
<a href="https://github.com/kubeinit/kubeinit">KubeInit GitHub</a> repository or the
<a href="https://docs.kubeinit.org/">docs website</a>.</p>
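As a sketch of how the spec string is structured, here is a hypothetical three-controller, two-compute variant of the earlier single-node spec, decomposed field by field. The field order (distro, driver, controllers, computes, hypervisors) is an assumption inferred from the <code>okd-libvirt-1-0-1</code> example above; check the KubeInit docs for the authoritative definition.

```shell
# Hypothetical spec: OKD on libvirt, 3 controllers, 2 computes, 1 hypervisor.
spec="okd-libvirt-3-2-1"
# Split the dash-separated fields into named variables.
IFS='-' read -r distro driver controllers computes hypervisors <<< "$spec"
echo "distro=$distro driver=$driver controllers=$controllers computes=$computes hypervisors=$hypervisors"
```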
<h3 id="configuring-the-storage-pv-backend">Configuring the storage PV backend</h3>
<p>From the hypervisor connect to the service guest by running:</p>
<pre><code class="language-bash">ssh -i ~/.ssh/okdcluster_id_rsa root@10.0.0.253
# Install some tools
yum groupinstall 'Development Tools' -y
yum install git -y
</code></pre>
<p>To support the PersistentVolumeClaims (PVCs) from the Kubeflow deployment,
a storage PV backend needs to be set up.
We will configure a basic default storage class for this purpose.
Follow the steps below to configure the backend:</p>
<pre><code class="language-bash"># Create a new namespace for the NFS provisioner
oc new-project nfsprovisioner-operator
sleep 30;
# Deploy NFS Provisioner operator in the terminal
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfs-provisioner-operator
  namespace: openshift-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: nfs-provisioner-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
# We assume this is a single node deployment, we will get the first worker node
export target_node=$(oc get nodes | grep worker | head -1 | cut -d' ' -f1)
# We assign the NFS provisioner role to our first worker node
oc label node/${target_node} app=nfs-provisioner
</code></pre>
<p>Now, we need to configure the worker node filesystem to
support the location where the PVs will be stored.</p>
<pre><code class="language-bash"># Open a debug shell on the node (oc debug, not ssh)
oc debug node/${target_node}
# Configure the local folder for the PVs
chroot /host
mkdir -p /home/core/nfs
chcon -Rvt svirt_sandbox_file_t /home/core/nfs
exit; exit
</code></pre>
<h3 id="setting-up-the-container-registries-credentials">Setting up the container registries credentials</h3>
<p>Connect to the worker node to configure the OpenShift registry token:
get the OpenShift registry pull secret
from <a href="https://cloud.redhat.com/openshift/install/pull-secret">cloud.redhat.com</a>
and store it locally as <code>/root/config.json</code>.</p>
<pre><code class="language-bash"># There is a local pull-secret for pulling from the internal cluster container registry
# TODO: Make sure we have the local registry and the RHT credentials together
# By default there is a local container registry in this single node cluster
# and those credentials are deployed in the OCP cluster.
# Merge the local rendered registry-auths.json from the services guest
# With the token downloaded from cloud.redhat.com
# store it as rht-registry-auths.json and merge them with:
cd
jq -s '.[0] * .[1]' registry-auths.json rht-registry-auths.json > full-registry-auths.json
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=full-registry-auths.json
# oc create secret generic pull-secret -n openshift-config --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=/root/downloaded_token.json
# oc secrets link default pull-secret -n openshift-config --for=pull
# Refer to: https://access.redhat.com/solutions/4902871 for further information
</code></pre>
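As a self-contained sketch of what that <code>jq</code> merge does, the snippet below creates two made-up stand-ins for <code>registry-auths.json</code> and <code>rht-registry-auths.json</code> (the registry names and auth strings are fake, not real credentials) and shows how entries from both files end up under a single <code>auths</code> key:

```shell
# Stand-in pull-secret files; the auth values are fake base64 strings.
cat > registry-auths.json << 'EOF'
{"auths": {"registry.okdcluster.kubeinit.local:5000": {"auth": "bG9jYWw="}}}
EOF
cat > rht-registry-auths.json << 'EOF'
{"auths": {"registry.redhat.io": {"auth": "cmh0"}}}
EOF
# jq -s slurps both files into an array; '.[0] * .[1]' deep-merges the objects,
# so the "auths" entries of both files are preserved in the result.
jq -s '.[0] * .[1]' registry-auths.json rht-registry-auths.json > full-registry-auths.json
jq -r '.auths | keys[]' full-registry-auths.json
```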
<p>Make sure the registry secret is correct by printing its value.</p>
<pre><code class="language-bash">oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}'
</code></pre>
<p>Alternatively, configure it per node (not required, as it was already done in the previous step):</p>
<pre><code class="language-bash">ssh core@10.0.0.1 # (First controller node, in this case, a single node cluster)
podman login registry.redhat.io
podman login registry.access.redhat.com
# Username: ***@redhat.com
# Password:
# Login Succeeded!
</code></pre>
<p>Create the NFSProvisioner Custom Resource:</p>
<pre><code class="language-bash">cat << EOF | oc apply -f -
apiVersion: cache.jhouse.com/v1alpha1
kind: NFSProvisioner
metadata:
  name: nfsprovisioner-sample
  namespace: nfsprovisioner-operator
spec:
  nodeSelector:
    app: nfs-provisioner
  hostPathDir: "/home/core/nfs"
EOF
sleep 30;
# Check if NFS Server is running
oc get pod
# Update annotation of the NFS StorageClass
oc patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# Check the default next to nfs StorageClass
oc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs (default) example.com/nfs Delete Immediate false 4m29s
</code></pre>
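The <code>oc patch</code> above applies a merge patch to the StorageClass. As an offline illustration of its effect (the minimal stand-in StorageClass object below is made up for the example), jq's object-multiplication operator models the same deep merge:

```shell
# Minimal stand-in for the "nfs" StorageClass object.
sc='{"metadata":{"name":"nfs","annotations":{}}}'
patch='{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# '. * $p' deep-merges the patch into the object, mirroring what a merge
# patch does: only the annotations map gains the new key.
echo "$sc" | jq --argjson p "$patch" '. * $p | .metadata.annotations'
```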
<p>Create a test PVC to verify that claims can be fulfilled correctly:</p>
<pre><code class="language-bash"># Create a test PVC
cat << EOF | oc apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-example
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs
EOF
# Check the test PV/PVC
oc get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-e30ba0c8-4a41-4fa0-bc2c-999190fd0282 1Mi RWX Delete Bound nfsprovisioner-operator/nfs-pvc-example nfs 5s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nfs-pvc-example Bound pvc-e30ba0c8-4a41-4fa0-bc2c-999190fd0282 1Mi RWX nfs 5s
</code></pre>
<p>The output shown here indicates that the NFS server,
NFS provisioner, and NFS StorageClass are all working fine.
You can use the NFS StorageClass for any test scenarios that need PVC.</p>
<p>The following snippets let you play with the applications
deployed to the single-node cluster while providing an easy revert option.</p>
<blockquote>
<p><strong><em>Note:</em></strong> To roll back the environment and try
new things, instead of redeploying (~30 minutes),
try restoring the snapshots (~1 minute).</p>
</blockquote>
<pre><code class="language-bash">########
# BACKUP
vms=( $(virsh list --all | grep running | awk '{print $2}') )
# Create an initial snapshot for each VM
for i in "${vms[@]}"; \
do \
echo "virsh snapshot-create-as --domain $i --name $i-fresh-install --description $i-fresh-install --atomic"; \
virsh snapshot-create-as --domain "$i" --name "$i"-fresh-install --description "$i"-fresh-install --atomic; \
done
# List current snapshots (they should already be created)
for i in "${vms[@]}"; \
do \
virsh snapshot-list --domain "$i"; \
done
########
#########
# RESTORE
vms=( $(virsh list --all | grep running | awk '{print $2}') )
for i in "${vms[@]}"; \
do \
virsh shutdown "$i";
virsh snapshot-revert --domain "$i" --snapshotname "$i"-fresh-install --running;
virsh list --all;
done
#########
#########
# DELETE
vms=( $(virsh list --all | grep -E 'running|shut' | awk '{print $2}') )
for i in "${vms[@]}"; \
do \
virsh snapshot-delete --domain "$i" --snapshotname "$i"-fresh-install;
done
#########
</code></pre>
<h2 id="2-deploying-the-mlops-applications">2. Deploying the MLOps applications</h2>
<p>This section will explore the installation of
different MLOps components in an OCP 4.12 cluster.</p>
<p>Choose exactly one of the following subsections (2.1, 2.2, or 2.3); do not run all of them.</p>
<p>From the services pod execute:</p>
<h3 id="21-deploying-the-kubeflow-pipelines-component">2.1 Deploying the Kubeflow pipelines component</h3>
<pre><code class="language-bash">###################################
# Installing the Kubeflow templates
# https://www.kubeflow.org/docs/components/pipelines/v1/installation/localcluster-deployment/#deploying-kubeflow-pipelines
#
###############################
# Kubeflow pipelines standalone
# We will deploy kubeflow 2.0.0
#
cd
export PIPELINE_VERSION=2.0.0
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION"
kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io
kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/platform-agnostic-pns?ref=$PIPELINE_VERSION"
sleep 30
oc get pods -n kubeflow
NAME READY STATUS RESTARTS AGE
cache-deployer-deployment-76f8bc8897-t48vs 1/1 Running 0 3m
cache-server-65fc86f747-2rg7t 1/1 Running 0 3m
metadata-envoy-deployment-5bf6bbb856-tqw85 1/1 Running 0 3m
metadata-grpc-deployment-784b8b5fb4-l94tm 1/1 Running 3 (52s ago) 3m
metadata-writer-647bfd9f77-m5c8w 1/1 Running 0 3m
minio-65dff76b66-vstbk 1/1 Running 0 3m
ml-pipeline-86965f8976-qbgqs 1/1 Running 0 3m
ml-pipeline-persistenceagent-dbc9d95b6-g7nsb 1/1 Running 0 3m
ml-pipeline-scheduledworkflow-6fbf57b54d-446f5 1/1 Running 0 2m59s
ml-pipeline-ui-5b99c79fc8-2vbcp 1/1 Running 0 2m59s
ml-pipeline-viewer-crd-5fdb467bb5-rktvs 1/1 Running 0 2m59s
ml-pipeline-visualizationserver-6cf48684f5-b929v 1/1 Running 0 2m59s
mysql-c999c6c8-jzths 1/1 Running 0 2m59s
workflow-controller-6c85bc4f95-lmkrg 1/1 Running 0 2m59s
##########################
</code></pre>
<p>To access the Kubeflow pipelines UI, follow these steps:</p>
<pre><code class="language-bash"># Create an initial SSH hop from your machine to the hypervisor
ssh -L 38080:localhost:38080 root@labserver
# A second hop will connect you to the services guest
ssh -L 38080:localhost:8080 -i ~/.ssh/okdcluster_id_rsa root@10.0.0.253
# Once we are in a network segment with access to the Kubeflow services
# we can forward the traffic to the ml-pipeline-ui pod
kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80
</code></pre>
<p>After running the hop/forwarding commands, you can access the Kubeflow
pipelines UI by opening your browser and visiting <code>http://localhost:38080</code>.</p>
<p><img src="/static/kubeflow/kubeflow_ui.png" alt="" /></p>
<p>Once all the pods are running, the UI should work as expected without reporting any issues.</p>
<h3 id="22-deploying-the-modelmesh-service-operator">2.2 Deploying the modelmesh service operator</h3>
<pre><code class="language-bash">cd
git clone https://github.com/opendatahub-io/modelmesh-serving.git --branch release-v0.11.0-alpha
cd modelmesh-serving
cd opendatahub/quickstart/basic/
./deploy.sh
</code></pre>
<p>Make sure the PVC, ReplicaSet, pod, and service are all running before continuing.</p>
<pre><code class="language-bash"># Check the OpenDataHub ModelServing's inference service
oc get isvc -n modelmesh-serving
# NAME URL READY PREV LATEST PREVROLLEDOUTREVISION LATESTREADYREVISION AGE
# example-onnx-mnist grpc://modelmesh-serving.modelmesh-serving:8033 True 42m
</code></pre>
<pre><code class="language-bash"># Check the URL for the deployed model
oc get routes
# NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
# example-onnx-mnist example-onnx-mnist-modelmesh-serving.apps.okdcluster.kubeinit.local /v2/models/example-onnx-mnist modelmesh-serving 8008 edge/Redirect None
</code></pre>
<p>Let’s test a model from the manifests folder.</p>
<pre><code class="language-bash">export HOST_URL=$(oc get route example-onnx-mnist -ojsonpath='{.spec.host}' -n modelmesh-serving)
export HOST_PATH=$(oc get route example-onnx-mnist -ojsonpath='{.spec.path}' -n modelmesh-serving)
export COMMON_MANIFESTS_DIR='/root/modelmesh-serving/opendatahub/quickstart/common_manifests'
curl --silent --location --fail --show-error --insecure https://${HOST_URL}${HOST_PATH}/infer -d @${COMMON_MANIFESTS_DIR}/input-onnx.json
# This is the expected output
# {"model_name":"example-onnx-mnist__isvc-b29c3d91f3","model_version":"1","outputs":[{"name":"Plus214_Output_0","datatype":"FP32","shape":[1,10],"data":[-8.233053,-7.7497034,-3.4236815,12.3630295,-12.079103,17.266596,-10.570976,0.7130762,3.321715,1.3621228]}]}
</code></pre>
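The sample response above can be picked apart with jq. Using that response verbatim as input, the predicted digit is the index of the highest score in the output vector (the MNIST model emits one score per digit, 0 through 9):

```shell
# Sample inference response copied from the expected output above.
response='{"model_name":"example-onnx-mnist__isvc-b29c3d91f3","model_version":"1","outputs":[{"name":"Plus214_Output_0","datatype":"FP32","shape":[1,10],"data":[-8.233053,-7.7497034,-3.4236815,12.3630295,-12.079103,17.266596,-10.570976,0.7130762,3.321715,1.3621228]}]}'
# The predicted digit is the index of the maximum score in the data array.
echo "$response" | jq '.outputs[0].data | index(max)'
# prints: 5
```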
<h3 id="23-deploying-all-kubeflow-components-wip-pending-failing-resources">2.3 Deploying all Kubeflow components (WIP pending failing resources)</h3>
<pre><code class="language-bash">##########################
# Complete install
# From: https://github.com/kubeflow/manifests#installation
cd
git clone https://github.com/kubeflow/manifests kubeflow_manifests
cd kubeflow_manifests
while ! kustomize build example | awk '!/well-defined/' | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
##########################
# WIP checking Security Context Constraints by executing:
# We deploy all the Security Context Constraints
# cd
# git clone https://github.com/opendatahub-io/manifests.git ocp_manifests
# cd ocp_manifests
# while ! kustomize build openshift/openshiftstack/application/openshift/openshift-scc/base | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
# while ! kustomize build openshift/openshiftstack/application/openshift/openshift-scc/overlays/istio | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
# while ! kustomize build openshift/openshiftstack/application/openshift/openshift-scc/overlays/servicemesh | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
</code></pre>
<h3 id="checking-that-all-the-services-are-running">Checking that all the services are running</h3>
<p>To ensure that all the deployed services are
running, check the pods, services, replicasets,
deployments, and PVCs.</p>
<h2 id="3-running-kubeflow-creating-experiments-pipelines-and-executions">3. Running Kubeflow (Creating experiments, pipelines, and executions)</h2>
<p>Kubeflow simplifies the development and deployment of machine learning
pipelines by providing a higher level of abstraction over Kubernetes.
It offers a resilient framework for distributed computing,
allowing ML pipelines to be scalable and production-ready.</p>
<div style="float: left; width: 400px; background: white;">
<img src="/static/kubeflow/kubeflow_components.webp" alt="" style="border:15px solid #FFF" />
</div>
<p>In this section, we will explore the process of creating a machine learning
pipeline using Kubeflow, covering various components and their integration
throughout the ML solution lifecycle: creating the experiment, the pipeline, and the run.</p>
<p>The following components are the main organizational structure within Kubeflow.</p>
<ul>
<li>
<p>A Kubeflow Experiment is a logical grouping of machine learning runs or trials.
It provides a way to organize and track multiple iterations of training or evaluation experiments.
Experiments help in managing different versions of models, hyperparameters, and data configurations.</p>
</li>
<li>
<p>A Kubeflow Pipeline is a workflow that defines a series of interconnected steps or components for an end-to-end machine learning process.
It allows for the orchestration and automation of complex ML workflows, including data preprocessing, model training, evaluation, and deployment.
Pipelines enable reproducibility, scalability, and collaboration in ML development by providing a visual representation of the workflow and its dependencies.</p>
</li>
<li>
<p>A Kubeflow Run refers to the execution of a pipeline or an individual component within a pipeline.
It represents a specific instance of running a pipeline or a component with specific inputs and outputs.
Runs can be triggered manually or automatically based on predefined conditions or events.
Each run captures metadata and logs, allowing for easy tracking, monitoring, and troubleshooting of the pipeline’s execution.</p>
</li>
</ul>
<h3 id="running-our-first-experiment">Running our first experiment</h3>
<p>Text</p>
<h3 id="creating-an-experiment">Creating an experiment</h3>
<p>Text</p>
<h3 id="creating-the-pipeline">Creating the pipeline</h3>
<p>Text</p>
<h3 id="running-the-pipeline">Running the pipeline</h3>
<p>Text</p>
<h2 id="conclusions">Conclusions</h2>
<p>Deploying Kubeflow using KubeInit simplifies the process of setting up a
scalable and reproducible ML workflow platform. With KubeInit’s Ansible
playbooks and collections, you can automate the deployment of Kubernetes
and easily configure the necessary components for Kubeflow.
By leveraging Kubeflow’s templates and services, data scientists and ML
engineers can accelerate the development and deployment of machine learning models.</p>
<h2 id="interesting-errors">Interesting errors</h2>
<pre><code class="language-bash"> Warning Failed 8m1s (x6 over 11m) kubelet Error: ImagePullBackOff
Warning Failed 6m16s (x5 over 11m) kubelet Failed to pull image "mysql:8.0.29": rpc error: code = Unknown desc = reading manifest 8.0.29 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal BackOff 103s (x28 over 11m) kubelet Back-off pulling image "mysql:8.0.29"
</code></pre>
<h2 id="the-end">The end</h2>
<p>If you like this post, please try the code, raise issues, and ask for more details, features, or
anything you find interesting. Also, it would be awesome if you became a stargazer to keep up with
updates and new features.</p>
<p>This is the <a href="https://github.com/kubeinit/kubeinit">main repository</a> for the infrastructure automation
based on Ansible.</p>
<p>Happy Kubeflow’ing & Kubeinit’ing!</p>
<h1 id="the-homelab-project">The homelab project</h1>
<p><em>Carlos Camacho, 2023-05-26</em></p>
<p>A homelab, also known as a home data center, is a personal setup created by
technology enthusiasts or professionals in their homes for learning, experimentation,
or production purposes. It typically consists of a collection of computer hardware,
networking equipment, and software that simulates or replicates a professional IT environment.</p>
<p>Homelabs serve various purposes, including:</p>
<ul>
<li>
<p>Learning and Skill Development: Many individuals use homelabs to enhance their
knowledge and skills in areas such as system administration, networking, virtualization,
storage, and cybersecurity. It provides a hands-on environment to experiment with different
technologies, configurations, and software without the constraints of a production environment.</p>
</li>
<li>
<p>Testing and Prototyping: Homelabs offer a safe and controlled environment to test new software,
applications, or hardware configurations. It allows individuals to evaluate their performance,
compatibility, and feasibility before deploying them in a production environment.</p>
</li>
<li>
<p>Personal Projects and Services: Some people build homelabs to host personal projects, services,
or applications. For example, they may set up media servers, game servers, file sharing platforms,
or websites to meet their specific needs or interests.</p>
</li>
<li>
<p>Home Automation and Internet of Things (IoT): Homelabs can be utilized for setting up smart home
automation systems and experimenting with IoT devices. This enables individuals to control and
manage various aspects of their home, such as lighting, temperature, security, and entertainment systems.</p>
</li>
<li>
<p>Data Storage and Backup: Homelabs often include storage solutions like Network-Attached Storage
(NAS) or storage area network (SAN) devices, which allow users to store and backup their data locally.</p>
</li>
</ul>
<p>Homelabs can range from a single server or a few network devices to a comprehensive setup with
multiple servers, switches, routers, and other equipment. They can be built using off-the-shelf
components or repurposed enterprise-grade hardware, depending on the user’s requirements, budget,
and level of expertise.</p>
<p><img src="/static/homelab/00_intro/00_homelab.jpg" alt="" /></p>
<p>While building and maintaining a homelab can be challenging, it can also be a rewarding and educational experience.
The availability of upstream communities and resources can help you overcome difficulties and find support from
other homelab enthusiasts.
Start with a clear plan, research thoroughly, and take it step by step, adjusting your goals and complexity as you
gain experience.</p>
<h2 id="origins">Origins</h2>
<div style="float: left; width: 230px; background: white;"><img src="/static/homelab/00_intro/02_humble_home_lab.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Datacenters are large-scale facilities designed to house and manage extensive computing infrastructure
for businesses or organizations. They have high-end server racks, robust networking equipment, redundant
power and cooling systems, and advanced security measures. Datacenters offer scalability, reliability,
and support for critical operations, often involving multiple clients or users. In contrast, homelabs
are personal setups in individuals’ homes used for learning, experimentation, or small-scale production.
They typically consist of a collection of hardware, networking devices, and software. Homelabs provide a
controlled environment for skill development, testing, and hosting personal projects, but on a smaller
scale with limited resources and fewer users.</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/homelab/00_intro/03_ideal_rack.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>The ideal homelab architecture typically includes a combination of components to create a versatile and
powerful setup. It starts with robust server hardware, whether it’s rack-mounted servers or repurposed
desktop machines, equipped with ample processing power, memory, and storage capacity. Networking equipment
such as routers, switches, and firewalls play a crucial role in connecting devices and enabling communication
within the homelab. Virtualization software like VMware or Hyper-V allows for the creation and management of
virtual machines, enabling efficient resource utilization. Storage solutions, including NAS
(Network Attached Storage) or SAN (Storage Area Network), provide ample storage capacity for data and backups.
Additionally, monitoring and management tools help ensure the stability and performance of the homelab environment.
The ideal architecture emphasizes flexibility, scalability, and the ability to experiment with various technologies
and configurations to meet specific learning or project requirements.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/homelab/00_intro/04_networking.png" alt="" style="border:15px solid #FFF" /></div>
<p>Networking is a crucial component in an ideal homelab architecture, enabling devices to communicate and share resources
effectively. It starts with a reliable and feature-rich router that provides internet access and manages IP addresses.
A network switch connects multiple devices within the homelab, allowing seamless communication. Managed switches offer
advanced features for improved network performance and segmentation. Implementing a firewall ensures the security of the
homelab, protecting against unauthorized access. Additionally, utilizing technologies like VLANs and QoS can enhance
network efficiency and prioritize traffic. Proper networking setup enables connectivity, facilitates the sharing of
services and resources, and creates a robust foundation for various homelab activities and projects.</p>
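When planning VLANs like the ones mentioned above, it helps to carve out subnets systematically instead of picking them ad hoc. Below is a minimal sketch of that idea using Python's standard `ipaddress` module; the VLAN names and the `10.0.0.0/16` supernet are illustrative placeholders, not recommendations.

```python
# Sketch of a per-VLAN addressing plan: carve one /24 per VLAN
# out of a larger supernet. Names and ranges are placeholders.
import ipaddress

def plan_vlans(supernet: str, vlans: list[str]) -> dict[str, str]:
    """Assign one /24 per VLAN out of a larger supernet, in order."""
    subnets = ipaddress.ip_network(supernet).subnets(new_prefix=24)
    return {name: str(net) for name, net in zip(vlans, subnets)}

plan = plan_vlans("10.0.0.0/16", ["management", "servers", "iot", "guest"])
for name, subnet in plan.items():
    print(f"{name:<12} {subnet}")  # e.g. "management   10.0.0.0/24"
```

Keeping the plan in a small script like this doubles as documentation: rerun it whenever you add a VLAN and the next free /24 is assigned automatically.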
<div style="float: right; width: 230px; background: white;"><img src="/static/homelab/00_intro/05_storage.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Storage is a critical component in an ideal homelab architecture, providing ample space to store data, virtual machine
images, and backups. Network Attached Storage (NAS) or Storage Area Network (SAN) solutions offer centralized storage with
high capacity and performance. NAS devices are easy to set up and provide shared file access over the network, making them
suitable for storing media, documents, and backups. SANs, on the other hand, deliver fast and reliable storage for virtual machine environments,
supporting features like RAID, snapshotting, and replication. Additionally, leveraging cloud storage services can provide off-site backups and enhance
data accessibility. By incorporating a robust storage solution into the homelab architecture, users can ensure data integrity, scalability, and
efficient management of their digital assets.</p>
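When sizing a NAS for RAID, the usable capacity depends heavily on the level chosen. This back-of-the-envelope calculator uses the standard formulas for each level; it ignores filesystem and metadata overhead, and the 4x4TB example is just an illustration.

```python
# Rough usable capacity for common RAID levels (ignores filesystem overhead).
def usable_tb(level: str, disks: int, size_tb: float) -> float:
    formulas = {
        "raid0": disks * size_tb,        # striping, no redundancy
        "raid1": size_tb,                # full mirror: one disk's capacity
        "raid5": (disks - 1) * size_tb,  # single parity disk's worth
        "raid6": (disks - 2) * size_tb,  # double parity
        "raid10": disks // 2 * size_tb,  # striped mirrors
    }
    return formulas[level]

for level in ("raid0", "raid5", "raid6", "raid10"):
    print(level, usable_tb(level, disks=4, size_tb=4.0), "TB usable")
```

Running it for four 4TB disks makes the trade-off concrete: RAID5 keeps 12TB usable with one-disk fault tolerance, while RAID6 and RAID10 both drop to 8TB for extra protection.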
<div style="float: left; width: 230px; background: white;"><img src="/static/homelab/00_intro/06_compute.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Compute is a fundamental aspect of an ideal homelab architecture, empowering users to run diverse workloads and applications. It typically revolves
around powerful server hardware, which can include rack-mounted servers or repurposed desktop machines with ample processing power, memory, and storage
capacity. Virtualization technologies like VMware or Hyper-V allow for the creation and management of virtual machines, enabling efficient utilization of
resources and the ability to run multiple operating systems and applications simultaneously. Additionally, containerization platforms such as Docker and
Kubernetes offer lightweight and scalable environments for deploying and managing containerized applications. The compute component of a homelab provides
the necessary horsepower to support a wide range of experiments, projects, and learning opportunities, empowering users to explore various technologies
and configurations.</p>
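A practical question when packing VMs onto homelab hardware is how far you are overcommitting the physical cores. This tiny check is a sketch; the host size and per-guest vCPU counts are made-up examples.

```python
# Back-of-the-envelope CPU overcommit check for a virtualization host.
# Host core count and guest sizes below are illustrative, not prescriptive.
def overcommit_ratio(host_cores: int, guest_vcpus: list[int]) -> float:
    """Ratio of allocated vCPUs to physical cores (>1.0 means overcommitted)."""
    return sum(guest_vcpus) / host_cores

guests = [4, 2, 2, 8]  # vCPUs assigned to each VM
ratio = overcommit_ratio(host_cores=8, guest_vcpus=guests)
print(f"{sum(guests)} vCPUs on 8 cores -> {ratio:.1f}x overcommit")
```

Modest overcommit (2-3x) is usually fine for lab workloads that idle most of the time; latency-sensitive guests deserve a ratio closer to 1.0.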
<div style="float: right; width: 230px; background: white;"><img src="/static/homelab/00_intro/07_nice_homelab.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>A cool homelab is a sight to behold, with a sleek equipment rack housing powerful servers and illuminated by LED lights. Neatly managed cables run along
cable management arms and trays, with color-coded sleeves adding style and clarity. High-performance networking switches, routers, and firewalls display
blinking status lights, creating an entrancing visual display. Multiple monitors on a spacious desk provide a command center, surrounded by shelves and
storage units for additional hardware. The room is optimized for airflow and ventilation, ensuring the equipment remains cool and efficient. Whiteboards
or smart boards adorn the walls, inspiring creativity and organization. This cool homelab is a blend of functionality, organization, and aesthetic
appeal, reflecting a passion for technology and a commitment to continuous learning and exploration.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/homelab/00_intro/08_reality_mess.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Keeping things organized in a homelab can be a real challenge, but it’s crucial for maintaining efficiency and ease of use. One area that often requires
attention is cable management. With numerous devices, power cords, and networking cables, it’s easy for things to become a tangled mess. Techniques such as
cable ties, Velcro straps, or cable management trays help keep cables tidy and under control.
Additionally, labeling cables and ports makes it easier to trace connections and troubleshoot any issues that arise.
Regular maintenance and periodic cable cleanups go a long way toward a tidy and well-organized homelab.</p>
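Labels pay off most when they follow one consistent scheme. As a sketch, the helper below generates labels from a rack/unit/device/port convention; the naming format is just one possibility, so adapt it to your own setup.

```python
# Generate consistent cable labels from a rack/unit/device/port scheme.
# The format "R1-U12-SW1-P03" is an example convention, not a standard.
def cable_label(rack: str, unit: int, device: str, port: int) -> str:
    return f"{rack}-U{unit:02d}-{device}-P{port:02d}"

runs = [("R1", 12, "SW1", 3), ("R1", 10, "NAS", 1)]
for run in runs:
    print(cable_label(*run))  # both ends of each cable get the same label
```

Printing the same label for both ends of every run, and keeping the list of runs in version control, makes tracing a cable a ten-second job instead of a rack archaeology session.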
<div style="float: right; width: 230px; background: white;"><img src="/static/homelab/00_intro/09_reality_mess.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Aside from cables, proper equipment and component organization also contribute to an organized homelab. Utilizing equipment racks or shelves can provide
a designated space for servers, switches, and other hardware. Grouping similar components together and using clear labeling or color-coding techniques
can simplify identification and access. It’s also helpful to have a central documentation system to keep track of configurations, IP addresses, and
system changes. By investing time and effort into organizing the physical and virtual aspects of the homelab, users can save valuable time during
troubleshooting, upgrades, and expansions, ensuring a smoother and more efficient homelab experience overall.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/homelab/00_intro/10_reality_cabling.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Cabling is a crucial aspect of homelab organization, and a few extra tips can help maintain a tidy setup. Planning the cabling layout in advance,
considering cable lengths and future expansions, reduces clutter. Utilizing color-coded cables or cable sleeves simplifies identification. Within server
racks or cabinets, employing cable management tools like arms, managers, and routing channels guides cables neatly and improves airflow. Regular cable
audits and cleanups remove unused cables and ensure secure connections. Documenting the cabling infrastructure with diagrams or management software aids
in troubleshooting and modifications, preventing confusion and saving time. By implementing these strategies, the homelab maintains an organized and
efficient cabling system.</p>
<h2 id="before">Before</h2>
<p>Building a homelab is all about trial and error, and things can get a bit messy and disorganized at times. It’s totally normal!
When you start setting up your homelab, you might run into issues and face configuration problems. But don’t worry, that’s how you
learn! Embrace the challenges and see them as opportunities to grow your skills. Your homelab might end up looking like a tangled
web of cables, but that’s part of the fun.</p>
<div class="col">
<h4 class="block-title">Before homelab gallery</h4>
<div class="block-body">
<ul class="item-list-round" data-magnific="gallery">
<li><a href="/static/homelab/02_building/before/00.jpg" style="background-image: url('/static/homelab/02_building/before/00.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/01.jpg" style="background-image: url('/static/homelab/02_building/before/01.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/02.jpg" style="background-image: url('/static/homelab/02_building/before/02.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/03.jpg" style="background-image: url('/static/homelab/02_building/before/03.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/04.jpg" style="background-image: url('/static/homelab/02_building/before/04.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/05.jpg" style="background-image: url('/static/homelab/02_building/before/05.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/06.jpg" style="background-image: url('/static/homelab/02_building/before/06.jpg');"></a></li>
<li><a href="/static/homelab/02_building/before/07.jpg" style="background-image: url('/static/homelab/02_building/before/07.jpg');"></a></li>
</ul>
</div>
</div>
<hr style="clear:both; border:0px solid #fff;" />
<p>Take the time to label and document everything; it’ll save you headaches down the road.
And remember, there’s a whole <a href="https://www.reddit.com/r/homelab">homelab</a> community out there
ready to help you out when things get rough!</p>
<h2 id="bom">BoM</h2>
<p>Next are the details of my enclosure homelab’s bill of materials (BoM).
Picture this: a comprehensive list of all the hardware and equipment required to build this box of awesomeness.
The BoM is like a treasure map, guiding me through the vast landscape of parts
into the heart and soul of my homelab setup.
Several parts are not in this BoM because they were not used in the final layout.</p>
<table>
<thead>
<tr>
<th>Item</th>
<th>Quantity</th>
<th>Price</th>
<th>Total</th>
<th>Image</th>
</tr>
</thead>
<tbody>
<tr>
<td>Countertop (240cm, cut in 2x70cm &amp; 2x50cm)</td>
<td>1</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/01_tablero.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>T-slot 2020 profiles</td>
<td>12</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/02_t-slot.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>3 way connectors</td>
<td>8</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/03_3_way_connector.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>M4 hexagonal head screws (5mm and 25mm)</td>
<td>50/50</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/04_m3_4mmlong_screw.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>T-slot 2020 corner connectors</td>
<td>8</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/06_90_degree_connector.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>T-slot 90 degrees connectors</td>
<td>50</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/07_90_degree_connector.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>T-slot M4 nut</td>
<td>50</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/08_t-slot_nut.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Spax wood screws 2mmx25mm</td>
<td>50</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/09_spax_2mm_screw.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>L (90 degrees) bracket 30mmx80mmx55mmx2mm</td>
<td>2</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/12_L_bracket_30X80X55X2.5MM_SIMPSON.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Heavy-duty 90 degree connectors</td>
<td>50</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/13_corner.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Heavy-duty floor feet</td>
<td>4</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/16_feet.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Upper door shock absorber</td>
<td>1</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/17_shock.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Nylon wheels (1 inch), M8</td>
<td>4</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/18_wheels_1inch_nylon.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>M8 hexagon connector</td>
<td>4</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/19_m10_hexagon_connector.png" width="50" height="50" /></td>
</tr>
<tr>
<td>Washers</td>
<td>50</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/21_washers.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Hinges</td>
<td>2</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/22_hinge.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>Glass doors (255mmx610mm)</td>
<td>2</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/23_glass_doors_5mm.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>19” rack profile (2x12U, 4x2U, 2x4U)</td>
<td>8</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/81_Adam_Hall_19inch_61535.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>19” rack shelf (250mm)</td>
<td>1</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/91_shelve.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>19” rack shelf (150mm)</td>
<td>1</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/92_shelve_150mm.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>19” rack power distributor</td>
<td>1</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/93_power.jpg" width="50" height="50" /></td>
</tr>
<tr>
<td>19” rack rail depth adapter kit</td>
<td>2</td>
<td>TBD</td>
<td>TBD</td>
<td><img src="/static/homelab/01_materials/95_RDA2U.jpg" width="50" height="50" /></td>
</tr>
</tbody>
</table>
<div class="col">
<h4 class="block-title">Materials gallery</h4>
<div class="block-body">
<ul class="item-list-round" data-magnific="gallery">
<li><a href="/static/homelab/01_materials/01_tablero.jpg" style="background-image: url('/static/homelab/01_materials/01_tablero.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/02_t-slot.jpg" style="background-image: url('/static/homelab/01_materials/02_t-slot.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/03_3_way_connector.jpg" style="background-image: url('/static/homelab/01_materials/03_3_way_connector.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/04_m3_4mmlong_screw.jpg" style="background-image: url('/static/homelab/01_materials/04_m3_4mmlong_screw.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/06_90_degree_connector.jpg" style="background-image: url('/static/homelab/01_materials/06_90_degree_connector.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/07_90_degree_connector.jpg" style="background-image: url('/static/homelab/01_materials/07_90_degree_connector.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/08_t-slot_nut.jpg" style="background-image: url('/static/homelab/01_materials/08_t-slot_nut.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/09_spax_2mm_screw.jpg" style="background-image: url('/static/homelab/01_materials/09_spax_2mm_screw.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/12_L_bracket_30X80X55X2.5MM_SIMPSON.jpg" style="background-image: url('/static/homelab/01_materials/12_L_bracket_30X80X55X2.5MM_SIMPSON.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/13_corner.jpg" style="background-image: url('/static/homelab/01_materials/13_corner.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/16_feet.jpg" style="background-image: url('/static/homelab/01_materials/16_feet.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/17_shock.jpg" style="background-image: url('/static/homelab/01_materials/17_shock.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/18_wheels_1inch_nylon.jpg" style="background-image: url('/static/homelab/01_materials/18_wheels_1inch_nylon.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/19_m10_hexagon_connector.png" style="background-image: url('/static/homelab/01_materials/19_m10_hexagon_connector.png');"></a></li>
<li><a href="/static/homelab/01_materials/21_washers.jpg" style="background-image: url('/static/homelab/01_materials/21_washers.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/22_hinge.jpg" style="background-image: url('/static/homelab/01_materials/22_hinge.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/23_glass_doors_5mm.jpg" style="background-image: url('/static/homelab/01_materials/23_glass_doors_5mm.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/81_Adam_Hall_19inch_61535.jpg" style="background-image: url('/static/homelab/01_materials/81_Adam_Hall_19inch_61535.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/91_shelve.jpg" style="background-image: url('/static/homelab/01_materials/91_shelve.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/92_shelve_150mm.jpg" style="background-image: url('/static/homelab/01_materials/92_shelve_150mm.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/93_power.jpg" style="background-image: url('/static/homelab/01_materials/93_power.jpg');"></a></li>
<li><a href="/static/homelab/01_materials/95_RDA2U.jpg" style="background-image: url('/static/homelab/01_materials/95_RDA2U.jpg');"></a></li>
</ul>
</div>
</div>
<hr style="clear:both; border:0px solid #fff;" />
<p>In conclusion, the bill of materials (BoM) serves as the backbone of my homelab’s main rack enclosure, providing a
detailed roadmap to its inner workings. Through extensive research, careful consideration, and a touch of geeky enthusiasm, I’ve curated a collection of hardware and equipment that forms its foundation.</p>
<h2 id="after-building-the-homelab">After building the homelab</h2>
<p>I’d like to share the journey I embarked on to build my homelab.
It was no walk in the park, but oh, the end result is worth every bit of the sweat and tears.
First, I dove headfirst into the technical realm, researching hardware options like a mad scientist on a caffeine-fueled mission.
Then came the fun part: trial and error galore! But hey, that’s how we learn, right? Finally,
I had my homelab up and running, ready to tackle any tech challenge thrown my way.
So, buckle up, folks, and get ready to witness the fruits of my homelab labor: let the geeky adventures begin!</p>
<div class="col">
<h4 class="block-title">After homelab gallery</h4>
<div class="block-body">
<ul class="item-list-round" data-magnific="gallery">
<li><a href="/static/homelab/02_building/after/00.jpg" style="background-image: url('/static/homelab/02_building/after/00.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/01.jpg" style="background-image: url('/static/homelab/02_building/after/01.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/02.jpg" style="background-image: url('/static/homelab/02_building/after/02.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/03.jpg" style="background-image: url('/static/homelab/02_building/after/03.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/04.jpg" style="background-image: url('/static/homelab/02_building/after/04.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/05.jpg" style="background-image: url('/static/homelab/02_building/after/05.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/06.jpg" style="background-image: url('/static/homelab/02_building/after/06.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/07.jpg" style="background-image: url('/static/homelab/02_building/after/07.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/08.jpg" style="background-image: url('/static/homelab/02_building/after/08.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/09.jpg" style="background-image: url('/static/homelab/02_building/after/09.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/10.jpg" style="background-image: url('/static/homelab/02_building/after/10.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/11.jpg" style="background-image: url('/static/homelab/02_building/after/11.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/12.jpg" style="background-image: url('/static/homelab/02_building/after/12.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/13.jpg" style="background-image: url('/static/homelab/02_building/after/13.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/14.jpg" style="background-image: url('/static/homelab/02_building/after/14.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/15.jpg" style="background-image: url('/static/homelab/02_building/after/15.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/16.jpg" style="background-image: url('/static/homelab/02_building/after/16.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/17.jpg" style="background-image: url('/static/homelab/02_building/after/17.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/20.jpg" style="background-image: url('/static/homelab/02_building/after/20.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/21.jpg" style="background-image: url('/static/homelab/02_building/after/21.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/22.jpg" style="background-image: url('/static/homelab/02_building/after/22.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/23.jpg" style="background-image: url('/static/homelab/02_building/after/23.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/24.jpg" style="background-image: url('/static/homelab/02_building/after/24.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/25.jpg" style="background-image: url('/static/homelab/02_building/after/25.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/26.jpg" style="background-image: url('/static/homelab/02_building/after/26.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/27.jpg" style="background-image: url('/static/homelab/02_building/after/27.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/28.jpg" style="background-image: url('/static/homelab/02_building/after/28.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/29.jpg" style="background-image: url('/static/homelab/02_building/after/29.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/30.jpg" style="background-image: url('/static/homelab/02_building/after/30.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/31.jpg" style="background-image: url('/static/homelab/02_building/after/31.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/32.jpg" style="background-image: url('/static/homelab/02_building/after/32.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/33.jpg" style="background-image: url('/static/homelab/02_building/after/33.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/34.jpg" style="background-image: url('/static/homelab/02_building/after/34.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/35.jpg" style="background-image: url('/static/homelab/02_building/after/35.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/36.jpg" style="background-image: url('/static/homelab/02_building/after/36.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/40.jpg" style="background-image: url('/static/homelab/02_building/after/40.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/41.jpg" style="background-image: url('/static/homelab/02_building/after/41.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/42.jpg" style="background-image: url('/static/homelab/02_building/after/42.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/50.jpg" style="background-image: url('/static/homelab/02_building/after/50.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/51.jpg" style="background-image: url('/static/homelab/02_building/after/51.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/52.jpg" style="background-image: url('/static/homelab/02_building/after/52.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/53.jpg" style="background-image: url('/static/homelab/02_building/after/53.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/54.jpg" style="background-image: url('/static/homelab/02_building/after/54.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/55.jpg" style="background-image: url('/static/homelab/02_building/after/55.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/60.jpg" style="background-image: url('/static/homelab/02_building/after/60.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/61.jpg" style="background-image: url('/static/homelab/02_building/after/61.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/80.jpg" style="background-image: url('/static/homelab/02_building/after/80.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/81.jpg" style="background-image: url('/static/homelab/02_building/after/81.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/82.jpg" style="background-image: url('/static/homelab/02_building/after/82.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/83.jpg" style="background-image: url('/static/homelab/02_building/after/83.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/84.jpg" style="background-image: url('/static/homelab/02_building/after/84.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/85.jpg" style="background-image: url('/static/homelab/02_building/after/85.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/86.jpg" style="background-image: url('/static/homelab/02_building/after/86.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/87.jpg" style="background-image: url('/static/homelab/02_building/after/87.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/88.jpg" style="background-image: url('/static/homelab/02_building/after/88.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/89.jpg" style="background-image: url('/static/homelab/02_building/after/89.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/90.jpg" style="background-image: url('/static/homelab/02_building/after/90.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/91.jpg" style="background-image: url('/static/homelab/02_building/after/91.jpg');"></a></li>
<li><a href="/static/homelab/02_building/after/92.jpg" style="background-image: url('/static/homelab/02_building/after/92.jpg');"></a></li>
</ul>
</div>
</div>
<hr style="clear:both; border:0px solid #fff;" />
<p>My homelab build process was an exhilarating roller coaster ride filled with learning,
challenges, and moments of pure triumph.</p>
<h2 id="showcase">Showcase</h2>
<p>After a lot of trial and error and plenty of messy cables, I’ve finally created a tech wonderland right in the comfort of my own home.
Picture this: a sleek rack filled with powerful custom machines and blinking lights. I’ve got my own mini data center going on!
With my homelab, I can experiment with cutting-edge technologies, host my own services, and learn like a pro.
It’s my own little tech playground, and boy, it feels good to have accomplished this.
Welcome to my homelab, where the possibilities are endless.
By the way, all this stuff was 100% sponsored by ME.</p>
<div class="col">
<h4 class="block-title">Showcase gallery</h4>
<div class="block-body">
<ul class="item-list-round" data-magnific="gallery">
<li><a href="/static/homelab/03_showcase/00.jpg" style="background-image: url('/static/homelab/03_showcase/00.jpg');"></a></li>
<li><a href="/static/homelab/03_showcase/01.jpg" style="background-image: url('/static/homelab/03_showcase/01.jpg');"></a></li>
<li><a href="/static/homelab/03_showcase/02.jpg" style="background-image: url('/static/homelab/03_showcase/02.jpg');"></a></li>
<li><a href="/static/homelab/03_showcase/03.jpg" style="background-image: url('/static/homelab/03_showcase/03.jpg');"></a></li>
<li><a href="/static/homelab/03_showcase/04.jpg" style="background-image: url('/static/homelab/03_showcase/04.jpg');"></a></li>
<li><a href="/static/homelab/03_showcase/05.jpg" style="background-image: url('/static/homelab/03_showcase/05.jpg');"></a></li>
</ul>
</div>
</div>
<hr style="clear:both; border:0px solid #fff;" />
<div class="center">
<video width="480" height="320" controls="controls">
<source src="/static/homelab/02_building/after/93.mp4" type="video/mp4" />
</video>
</div>
<div class="center">
<video width="480" height="320" controls="controls">
<source src="/static/homelab/02_building/after/94.mp4" type="video/mp4" />
</video>
</div>
<h2 id="status">Status</h2>
<p>For now, here are some pictures of the build…</p>
<div class="col">
<h4 class="block-title">Showcase gallery</h4>
<div class="block-body">
<ul class="item-list-round" data-magnific="gallery">
<li><a href="/static/homelab/04_state/00_homelab_status_2023_08.jpg" style="background-image: url('/static/homelab/04_state/00_homelab_status_2023_08.jpg');"></a></li>
</ul>
</div>
</div>
<hr style="clear:both; border:0px solid #fff;" />
<h2 id="deploying">Deploying</h2>
<p>To deploy workloads here I’m using Ansible and tools like Kubeinit, which are designed to simplify the
process of setting up and managing complex infrastructure, such as Kubernetes clusters, within a homelab
environment.</p>
<p>Let’s talk about deployment tools for homelabs, like Kubeinit. These tools are super handy when you want
to set up and manage complex stuff like Kubernetes clusters in your homelab. Kubeinit, for example, is an
awesome open-source tool that makes deploying Kubernetes a breeze. It takes care of all the nitty-gritty
installation and configuration details, so you don’t have to stress about it. Plus, it lets you customize
your cluster to fit your needs. And the best part? There’s a friendly community around tools like Kubeinit
that’s always ready to lend a hand and share their wisdom.</p>
<p>Kubeinit is an open-source deployment tool specifically tailored for setting up Kubernetes clusters.
It aims to provide an easy and reproducible way to deploy Kubernetes in different configurations,
such as single-node or multi-node clusters. Kubeinit automates the installation process, handles configuration
management, and assists in deploying additional services and tools commonly used with Kubernetes.</p>
<p>Deployment tools like Kubeinit can be highly beneficial for homelab enthusiasts looking to leverage Kubernetes
and containerization technologies. They simplify the setup process, reduce manual work, and provide a standardized
approach to deploying and managing Kubernetes clusters within a homelab environment.</p>
<p><img src="/static/homelab/00_intro/deploy.jpg" alt="" /></p>
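<p>As a sketch of what that looks like in practice, a Kubeinit-based deployment boils down to cloning the repo and running a single Ansible playbook. The <code>kubeinit_spec</code> value and inventory path below are illustrative and may differ between Kubeinit releases, so check the Kubeinit docs for your version:</p>
<pre><code class="language-shell"># Hypothetical one-command homelab cluster deployment with Kubeinit.
# (Spec string and paths are illustrative, not guaranteed for every release.)
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit

# Spec format: &lt;distro&gt;-&lt;driver&gt;-&lt;controllers&gt;-&lt;workers&gt;-&lt;hypervisors&gt;
KUBEINIT_SPEC="okd-libvirt-1-1-1"

ansible-playbook \
    --user root \
    -e kubeinit_spec="${KUBEINIT_SPEC}" \
    -i ./kubeinit/inventory.yml \
    ./kubeinit/playbook.yml
</code></pre>
<p>Everything else (guest provisioning, CNI setup, service configuration) is driven by the playbook, which is what makes the deployment reproducible.</p>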
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2023/05/26:</strong> Initial version.</p>
<p><strong>2023/10/04:</strong> Status gallery.</p>
</blockquote>
</div>
<h1 id="agile-101--jira">Agile 101 + Jira</h1>
<p><em>2023-03-23, Carlos Camacho, <a href="https://www.pubstack.com/blog/2023/03/23/Agile-101">https://www.pubstack.com/blog/2023/03/23/Agile-101</a></em></p>
<p>Agile methodologies are a set of iterative and incremental approaches to software development that prioritize collaboration, flexibility, and rapid response to change.</p>
<blockquote>
<p><strong><em>NOTE:</em></strong> WIP document subject to changes.</p>
</blockquote>
<h2 id="agile-101---dfgupgrades-migrations-adoption-and-backup-and-recovery">Agile 101 - DFG:Upgrades (migrations, adoption, and backup and recovery)</h2>
<h3 id="benefits">Benefits</h3>
<p>Some benefits include:</p>
<ul>
<li>
<p>Increased flexibility: Agile methodologies enable teams to respond quickly to changes in requirements, timelines, or resources. Teams can adjust their approach based on feedback from stakeholders or changes in the market.</p>
</li>
<li>
<p>Better collaboration: Agile methodologies emphasize communication and collaboration between team members, stakeholders, and customers. This leads to better alignment and understanding of project goals and expectations.</p>
</li>
<li>
<p>Improved quality: Agile methodologies prioritize continuous testing and feedback, leading to higher-quality software products.</p>
</li>
<li>
<p>Faster time-to-market: Agile methodologies help teams deliver software products faster and more frequently. This enables teams to respond quickly to changing market conditions and customer needs.</p>
</li>
<li>
<p>Increased customer satisfaction: By prioritizing collaboration, feedback, and responsiveness, agile methodologies help teams deliver products that better meet customer needs and expectations.</p>
</li>
</ul>
<p>Also, Agile methodologies improve team performance and vision by promoting:</p>
<ul>
<li>
<p>Transparency: Agile methodologies encourage teams to be transparent about their progress, challenges, and goals. This helps teams stay aligned and focused on the most important priorities.</p>
</li>
<li>
<p>Continuous improvement: Agile methodologies encourage teams to reflect on their processes and outcomes and make changes based on feedback. This helps teams continually improve and innovate.</p>
</li>
<li>
<p>Empowerment: Agile methodologies empower team members to take ownership of their work, make decisions, and collaborate with others. This leads to higher levels of engagement, creativity, and job satisfaction.</p>
</li>
</ul>
<p>Agile methodologies are a powerful approach to software development that promote collaboration, flexibility, and continuous improvement. They can help teams deliver higher-quality software products faster, while also increasing customer satisfaction and team morale.</p>
<h3 id="core-values">Core values</h3>
<p>The Agile Manifesto outlines four core values that underpin all agile methodologies:</p>
<ol>
<li>
<p>Individuals and interactions over processes and tools: This value emphasizes the importance of people and collaboration in the software development process. Agile methodologies prioritize face-to-face communication, feedback, and collaboration among team members and stakeholders over following rigid processes or relying on tools.</p>
</li>
<li>
<p>Working software over comprehensive documentation: This value prioritizes delivering working software over spending time and resources on extensive documentation. Agile methodologies emphasize the importance of frequent releases and testing to ensure that software is functional and meets user needs.</p>
</li>
<li>
<p>Customer collaboration over contract negotiation: This value emphasizes the importance of working closely with customers and stakeholders throughout the software development process. Agile methodologies prioritize customer feedback and input to ensure that the final product meets their needs and expectations.</p>
</li>
<li>
<p>Responding to change over following a plan: This value recognizes that the software development process is inherently unpredictable and that plans may need to change. Agile methodologies prioritize flexibility and the ability to respond quickly to changes in requirements, timelines, or resources.</p>
</li>
</ol>
<p>These core values guide the behavior and decision-making of agile teams and help them deliver high-quality software products that meet customer needs and expectations.</p>
<h3 id="principles">Principles</h3>
<ol>
<li>
<p>Customer satisfaction: Delivering value to the customer is the highest priority.</p>
</li>
<li>
<p>Changing requirements: Agile processes are flexible and can adapt to changing requirements, even in the later stages of development.</p>
</li>
<li>
<p>Delivering frequently: Frequent delivery of working software builds trust, encourages feedback, and enables the customer to realize benefits earlier.</p>
</li>
<li>
<p>Collaboration: Agile processes emphasize collaboration between customers, developers, and stakeholders to ensure the best outcome.</p>
</li>
<li>
<p>Motivated individuals: Teams should be composed of self-motivated individuals who are empowered to make decisions and work collaboratively.</p>
</li>
<li>
<p>Face-to-face communication: Communication is key, and face-to-face communication is the most effective way to convey information.</p>
</li>
<li>
<p>Working software: Working software is the primary measure of progress.</p>
</li>
<li>
<p>Sustainable development: Agile processes promote sustainable development, with a focus on maintaining a steady pace and avoiding burnout.</p>
</li>
<li>
<p>Technical excellence: A strong focus on technical excellence is necessary to maintain quality and enable agility.</p>
</li>
<li>
<p>Simplicity: Simplicity is a key aspect of agile development, with a focus on delivering the simplest possible solution that meets customer needs.</p>
</li>
<li>
<p>Self-organizing teams: Teams should be self-organizing, with the ability to adapt to changing requirements and make decisions.</p>
</li>
<li>
<p>Reflection and adaptation: Agile processes promote reflection and adaptation, with a focus on continuous improvement and learning from experience.</p>
</li>
</ol>
<h3 id="roles">Roles</h3>
<p>The main roles in agile are:</p>
<ul>
<li>
<p>Product Owner: The product owner is responsible for defining and prioritizing the features of the product or service being developed. They are the primary point of contact for the development team and are responsible for communicating the vision and goals of the product to the team.</p>
</li>
<li>
<p>Scrum Master: The scrum master is responsible for ensuring that the development team follows the agile process and for facilitating communication and collaboration within the team. They help remove obstacles and ensure that the team is working efficiently and effectively.</p>
</li>
<li>
<p>Development Team: The development team is responsible for designing, developing, testing, and delivering the product or service being developed. They work together to complete the tasks necessary to meet the project goals.</p>
</li>
<li>
<p>Stakeholders: Stakeholders are individuals or groups with an interest in the project, such as customers, users, or managers. They provide feedback and input to the product owner and development team to ensure that the product meets their needs and expectations.</p>
</li>
<li>
<p>Agile Coach: An agile coach is a mentor who helps teams adopt and implement agile methodologies. They provide guidance and support to the team and help them identify areas for improvement.</p>
</li>
<li>
<p>Business Analyst: A business analyst is responsible for understanding and documenting the requirements of the product or service being developed. They work closely with the product owner to ensure that the development team understands the project goals and requirements.</p>
</li>
<li>
<p>Quality Assurance (QA) Engineer: A QA engineer is responsible for testing the product or service to ensure that it meets quality standards and user requirements. They work closely with the development team to identify and resolve defects.</p>
</li>
</ul>
<p>The specific roles and responsibilities may vary depending on the particular agile methodology being used and the needs of the project.</p>
<h3 id="agile-components">Agile components</h3>
<p>The six key components that make up the Agile methodology:</p>
<ul>
<li>
<p>User Stories: User stories are brief, non-technical descriptions of a feature or requirement from the user’s perspective. They are used to capture user requirements and help the development team understand what the user wants.</p>
</li>
<li>
<p>Backlog: The product backlog is a prioritized list of user stories or features that need to be developed. It is managed by the product owner and is used to guide the development team’s work.</p>
</li>
<li>
<p>Sprints: Sprints are short, time-boxed iterations in which the development team works to deliver a set of features or user stories. Sprints typically last 1-4 weeks, and at the end of each sprint, the team delivers a potentially shippable product increment.</p>
</li>
<li>
<p>Daily Stand-ups: Daily stand-ups are brief, daily meetings in which the development team discusses progress, identifies obstacles, and plans the day’s work. They are intended to keep the team informed and ensure that everyone is working towards the same goals.</p>
</li>
<li>
<p>Sprint Review: The sprint review is a meeting held at the end of each sprint to demonstrate the work that has been completed to the stakeholders. It is an opportunity for the team to get feedback on the product and make any necessary adjustments.</p>
</li>
<li>
<p>Sprint Retrospective: The sprint retrospective is a meeting held at the end of each sprint to review the team’s process and identify areas for improvement. The team reflects on what went well and what didn’t, and then makes adjustments for the next sprint.</p>
</li>
</ul>
<p>These six components work together to guide the agile process, from capturing user requirements through to delivering a high-quality product increment at the end of each sprint.</p>
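<p>As a toy illustration of the components above (invented names, not Jira’s data model), user stories, sprints, and the usual “completed points” progress measure can be modeled as simple data structures:</p>
<pre><code class="language-python"># Toy model of agile components: user stories grouped into a sprint.
# Names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str          # "As a <user> I want <goal> so that <benefit>"
    points: int         # estimated effort
    done: bool = False

@dataclass
class Sprint:
    number: int
    stories: list = field(default_factory=list)

    def velocity(self):
        """Completed story points: a common measure of sprint progress."""
        return sum(s.points for s in self.stories if s.done)

sprint = Sprint(1, [UserStory("As an operator I want backups", 5, done=True),
                    UserStory("As a user I want faster upgrades", 8)])
print(sprint.velocity())  # 5
</code></pre>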
<h3 id="agile-phases">Agile Phases</h3>
<p>While there are different variations of the Agile methodology, most include the following phases:</p>
<ul>
<li>
<p>Project Initiation: In this phase, the team identifies the scope of the project, the stakeholders involved, and the objectives and goals of the project. The team also establishes the project’s vision and creates a product roadmap to guide the development process.</p>
</li>
<li>
<p>Planning and Requirements Analysis: In this phase, the team identifies the user requirements and translates them into user stories or features. The product backlog is created and prioritized based on customer needs and business objectives. The team also creates a project plan, which outlines the timeline, resources, and budget required to complete the project.</p>
</li>
<li>
<p>Design and Prototyping: In this phase, the team designs the architecture of the system and creates a prototype of the product. The design and prototype are reviewed and tested to ensure they meet the user requirements and project goals.</p>
</li>
<li>
<p>Development and Iterations: In this phase, the team begins to develop the product incrementally through a series of iterations or sprints. Each iteration typically lasts 1-4 weeks and results in a working product increment. The team conducts regular sprint reviews and retrospectives to evaluate progress and identify areas for improvement.</p>
</li>
<li>
<p>Testing: In this phase, the team tests the product to ensure that it meets the user requirements and quality standards. Testing is conducted throughout the development process and includes unit testing, integration testing, and acceptance testing.</p>
</li>
<li>
<p>Deployment and Maintenance: In this phase, the team deploys the product to the production environment and provides ongoing maintenance and support. The team continues to refine the product based on user feedback and implements updates and improvements as necessary.</p>
</li>
</ul>
<p>It’s important to note that Agile methodology is iterative and flexible, and these phases are not necessarily sequential. The team can move back and forth between phases as needed to respond to changes and adjust the product as it evolves.</p>
<h2 id="delivery-focused-group-dfg-internal-mechanics">Delivery Focused Group (DFG) internal mechanics</h2>
<p>A delivery focused group is responsible end-to-end for the delivery of the stories
(includes improvements that can’t be characterized as features, delivering new upstream
releases, bug fixing, errata, test coverage improvements, working with support and the field, etc…) in its functional scope. The group is cross-functional, regrouping product managers, engineers and quality engineers to achieve the delivery.</p>
<p>Groups will cross organizational boundaries, but also technology component boundaries. For example, a group in charge of developing a feature will also take care of the installation, update and upgrade of this feature using the framework provided by the deployment tooling group. The group is responsible for the product implementation: its members determine the technical solution in their area of concern. Each group maintains its own backlog. Delivery focused groups should be small (5 to 9 people), and the goal is self-organization. You can only be on one team at a time.</p>
<h3 id="organization-of-the-dfgupgrades-squads">Organization of the DFG:Upgrades (squads)</h3>
<p>The DFG:Upgrades provides and maintains a framework for migrating, updating or upgrading from an earlier Red Hat OpenStack (OSP) version to an equal or later one.</p>
<p>There are three squads in this DFG, each with a specific core mission.</p>
<table>
<thead>
<tr>
<th>Updates</th>
<th>Upgrades</th>
<th>Migrations, adoption, and backup and recovery</th>
</tr>
</thead>
<tbody>
<tr>
<td>…</td>
<td>…</td>
<td>OpenStack migration tooling (OS migrate)</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
<td>Next-gen adoption framework</td>
</tr>
<tr>
<td>…</td>
<td>…</td>
<td>Backup and recovery strategies for updates and upgrades</td>
</tr>
</tbody>
</table>
<h3 id="agile-tooling-for-the-squad">Agile tooling for the squad</h3>
<h4 id="main-jira-plan">Main Jira plan</h4>
<p>A Jira plan is a high-level view of a project’s roadmap, timelines, and milestones. It is a visual representation of the project plan that allows teams to see the big picture and understand how different tasks and issues fit together. Jira plans can include details such as start and end dates, priorities, dependencies, and progress indicators. They are often used to communicate project status and progress to stakeholders, and can help teams stay aligned and focused on project goals. Jira plans can be created and managed within the Jira software, and are often used in conjunction with other Jira tools such as boards and dashboards. It allows:</p>
<ul>
<li>
<p>Resource allocation: Jira Plan allows teams to allocate resources effectively by identifying tasks and assigning them to specific team members.</p>
</li>
<li>
<p>Time management: Jira Plan helps teams manage their time more efficiently by setting deadlines and prioritizing tasks.</p>
</li>
<li>
<p>Budgeting: Jira Plan can be used to monitor project budgets and ensure that resources are being used effectively.</p>
</li>
</ul>
<p><img src="/static/agile_101/plan.png" alt="" /></p>
<p>The image above shows the <a href="https://issues.redhat.com/secure/PortfolioPlanView.jspa?id=1988&sid=1996#plan/backlog">Jira plan</a> for OSP, including a time-based view of the roadmap, timelines, and milestones.</p>
<h4 id="main-jira-board">Main Jira board</h4>
<p>A Jira board is a visual representation of a team’s work that allows team members to track and manage their tasks and issues in real-time. Jira boards can be customized to reflect a team’s unique workflow and can include different views, such as a Kanban board or a Scrum board.</p>
<p>In a Kanban board, tasks are represented as cards that move across different columns that reflect different stages of the workflow, such as “To Do,” “In Progress,” and “Done.” The columns can be customized to match the team’s specific workflow, and team members can easily see the status of each task and identify any bottlenecks or areas for improvement.</p>
<p>In a Scrum board, tasks are represented as cards that are grouped into sprints, which are typically two-week time periods in which the team works to complete specific goals. The Scrum board includes columns for the different stages of the sprint, such as “To Do,” “In Progress,” “Code Review,” and “Done,” and team members can track the progress of each task and see how it fits into the larger sprint goal.</p>
<p>Jira boards can be used by development teams, project managers, and other stakeholders to manage and track work in real-time, and to ensure that everyone is aligned and focused on the team’s goals.</p>
<ul>
<li>
<p>Visualization: Jira Board provides a visual representation of tasks and project progress, making it easier for team members to see what’s been done and what still needs to be done.</p>
</li>
<li>
<p>Collaboration: Jira Board facilitates collaboration and communication between team members, allowing them to easily see what others are working on and identify areas where they can help.</p>
</li>
<li>
<p>Prioritization: Jira Board helps teams prioritize tasks and focus on what’s most important, ensuring that the team is working on the right things at the right time.</p>
</li>
</ul>
<p><img src="/static/agile_101/board_main.png" alt="" /></p>
<p>The previous image, from the <a href="https://issues.redhat.com/secure/RapidBoard.jspa?rapidView=16007&view=reporting&chart=burndownChart&sprint=49387">Jira DFG:Upgrades board</a>, shows the work of the whole team.</p>
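<p>To make the board idea concrete, here is a small sketch of how issues group into board columns. The payload below is made up, but its shape loosely mirrors the JSON that Jira’s REST API returns for issues:</p>
<pre><code class="language-python"># Summarize sprint issues per workflow column, as a board would display them.
# The sample payload is invented; only its shape resembles Jira's REST output.
from collections import Counter

sample_issues = [
    {"key": "UPG-10", "fields": {"status": {"name": "To Do"}}},
    {"key": "UPG-11", "fields": {"status": {"name": "In Progress"}}},
    {"key": "UPG-12", "fields": {"status": {"name": "Done"}}},
    {"key": "UPG-13", "fields": {"status": {"name": "Done"}}},
]

def sprint_summary(issues):
    """Count issues per status name (one count per board column)."""
    return Counter(i["fields"]["status"]["name"] for i in issues)

print(sprint_summary(sample_issues))
# Counter({'Done': 2, 'To Do': 1, 'In Progress': 1})
</code></pre>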
<h4 id="squad-board">Squad board</h4>
<p>The squad board is represented by a subset of the filters used for the main DFG Jira board. This can help different squads part of the main DFG to focus on specific subsets of work, and to avoid being overwhelmed by too much information.</p>
<p>For example, a DFG might have a main Jira board that includes all of their projects and issues, but they may also have sub-boards for each squad that focus on specific projects or workflows. These sub-boards would only include the filters and columns that are relevant to that specific project or workflow, allowing team members to focus on the work that is most important to them.</p>
<p>Sub-boards can also be used to create more detailed views of specific issues or sets of issues. For example, a team might create a sub-board that shows all of the issues related to a specific feature or bug fix, allowing them to easily track progress and collaborate on solutions.</p>
<p>Sub-boards can be a useful tool for organizing and managing work in Jira, and can help teams stay focused and productive.</p>
<p><img src="/static/agile_101/board_secondary.png" alt="" /></p>
<p>The previous image, from the <a href="https://issues.redhat.com/secure/RapidBoard.jspa?rapidView=16774&view=reporting&chart=burndownChart&sprint=49387">Jira DFG:Upgrades squad sub-board</a>, shows a subset of all the tasks for the team, so the squad can focus on its specific planned work. Notably, this view makes it possible to compare the overall team’s progress with that of each squad.</p>
<h4 id="squad-dashboard">Squad dashboard</h4>
<p>A Jira dashboard is a customizable, visual display of key metrics and information from Jira projects and issues. It is designed to provide users with a high-level overview of project status, progress, and key performance indicators (KPIs).</p>
<p>Jira dashboards can be configured to display different types of data, such as progress of individual projects, workload of team members, and number of open and closed issues. They can also include custom widgets and gadgets, such as charts, graphs, and filters, that allow users to drill down into specific data and see details about individual issues or projects.</p>
<p>Jira dashboards can be personalized to meet the needs of individual users, teams, or departments, and can be shared across the organization to ensure everyone is aligned and working towards the same goals. They are a powerful tool for project managers and team members to stay on top of their work and make informed decisions based on real-time data. In particular, they allow:</p>
<ul>
<li>
<p>Performance tracking: Jira Dashboards provide real-time insights into project performance, enabling teams to identify areas for improvement and track progress against goals.</p>
</li>
<li>
<p>Customization: Jira Dashboards can be customized to display the information that’s most important to each team member, helping to streamline workflows and improve productivity.</p>
</li>
<li>
<p>Decision making: Jira Dashboards provide the information needed to make informed decisions, helping teams to adjust course and make changes as needed.</p>
</li>
</ul>
<p><img src="/static/agile_101/dashboard.png" alt="" /></p>
<p>The previous image shows the <a href="https://issues.redhat.com/secure/Dashboard.jspa?selectPageId=12349106">Jira DFG:Upgrades migrations dashboard</a>, which displays:</p>
<ul>
<li>The remaining days of the sprint.</li>
<li>The people involved in the squad tasks.</li>
<li>The effort allocation for the remaining tasks planned for the sprint.</li>
<li>The current open tasks for the sprint.</li>
<li>The current closed tasks for the sprint.</li>
<li>The already groomed list of tasks that will be planned for the next sprint.</li>
<li>Both burn-down charts for the whole team and the squad.</li>
<li>The list of epics that should be worked on.</li>
</ul>
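<p>As a minimal illustration of the burn-down charts mentioned above, the “ideal” line is just the remaining work decreasing linearly to zero over the sprint (the numbers here are invented):</p>
<pre><code class="language-python"># Ideal burn-down line: remaining story points at the end of each day,
# decreasing linearly from the sprint's total to zero. Numbers are invented.
def ideal_burndown(total_points, days):
    """Ideal remaining work after each of the sprint's `days` days."""
    return [round(total_points * (1 - d / days), 1) for d in range(days + 1)]

print(ideal_burndown(30, 5))  # [30.0, 24.0, 18.0, 12.0, 6.0, 0.0]
</code></pre>
<p>The actual burn-down plotted on the dashboard is the real remaining-points series; comparing it against this ideal line is what shows whether the sprint is ahead of or behind schedule.</p>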
<h4 id="sprint-timeline">Sprint Timeline</h4>
<p>As the sprint moves forward there are a set of activities that are handled by all the team members.</p>
<table>
<thead>
<tr>
<th>Sprint Planning</th>
<th>Sprint Execution</th>
<th>Sprint Review</th>
<th>Sprint Retrospective</th>
</tr>
</thead>
<tbody>
<tr>
<td>Determine scope and objectives of the sprint.</td>
<td>Develop and test user stories.</td>
<td>Demo the product to stakeholders and gather feedback.</td>
<td>Reflect on the sprint and identify areas for improvement.</td>
</tr>
<tr>
<td>Create a sprint backlog.</td>
<td>Conduct daily stand-up meetings.</td>
<td>Review and prioritize backlog for next sprint.</td>
<td>Discuss what went well, what could have been better, and how to improve.</td>
</tr>
<tr>
<td>Define user stories and acceptance criteria.</td>
<td>Collaborate and communicate with team members.</td>
<td>Analyze metrics and data to identify areas for improvement.</td>
<td>Assign action items and determine a plan for implementing changes.</td>
</tr>
<tr>
<td>Estimate the effort required for each user story.</td>
<td>Conduct user testing and integrate feedback.</td>
<td>Celebrate successes and recognize team members for their contributions.</td>
<td>Plan for the next sprint and make adjustments as needed.</td>
</tr>
</tbody>
</table>
<h4 id="constant-grooming-and-backlog-ranking">Constant grooming and backlog ranking</h4>
<p>Constant grooming and backlog ranking are generally considered good practices in agile development.</p>
<p>Grooming, also known as backlog refinement, is the process of reviewing and updating the product backlog to ensure that it is up-to-date and prioritized. This involves reviewing user stories, breaking them down into smaller tasks, and estimating the effort required to complete each task. By doing this regularly, the team can ensure that the backlog is accurate and up-to-date, and that everyone understands the scope of the work that needs to be done.</p>
<p>Backlog ranking is the process of prioritizing user stories and tasks based on their importance and value to the product. This involves considering factors such as customer needs, business goals, and technical dependencies, and assigning each item in the backlog a priority level. By doing this regularly, the team can ensure that they are always working on the most important and valuable items, and that they are delivering value to the customer in a timely manner.</p>
<p><img src="/static/agile_101/grooming.png" alt="" /></p>
<p>The previous image shows how, within each sprint, the backlog is constantly revised and the tasks re-ranked.</p>
<p>By doing constant grooming and backlog ranking, the team can stay focused on the most important work, and can ensure that they are delivering value to the customer in a timely manner. This helps to ensure that the project stays on track, and that the team is able to deliver a high-quality product that meets the needs of the customer.</p>
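<p>As a rough sketch of what backlog ranking computes, here is a WSJF-style value-per-effort heuristic. The field names and scores are invented for illustration; this is not something Jira calculates for you:</p>
<pre><code class="language-python"># Illustrative backlog ranking: score = value / effort (a WSJF-style
# heuristic). Items and field names are hypothetical, not a Jira API.
def rank_backlog(items):
    """Return backlog items sorted by descending value-per-effort score."""
    return sorted(items, key=lambda i: i["value"] / i["effort"], reverse=True)

backlog = [
    {"key": "UPG-1", "value": 8, "effort": 5},
    {"key": "UPG-2", "value": 8, "effort": 2},
    {"key": "UPG-3", "value": 3, "effort": 3},
]
for item in rank_backlog(backlog):
    print(item["key"], round(item["value"] / item["effort"], 2))
# UPG-2 4.0
# UPG-1 1.6
# UPG-3 1.0
</code></pre>
<p>High-value, low-effort items bubble to the top, which is exactly the intuition behind ranking the backlog before each sprint planning.</p>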
<h2 id="some-final-conclusions">Some final conclusions</h2>
<p>Agile is an iterative and flexible approach to software development that emphasizes collaboration, customer satisfaction, and continuous improvement. It consists of a set of core values, such as valuing individuals and interactions over processes and tools, and 12 principles, such as delivering working software frequently and embracing change.</p>
<p>Agile methodology is made up of six key components, including user stories, backlog, sprints, daily stand-ups, sprint review, and sprint retrospective. These components work together to guide the agile process, from capturing user requirements through to delivering a high-quality product increment at the end of each sprint.</p>
<p>By embracing agile methodologies, teams can achieve a variety of benefits, such as increased productivity, better collaboration, improved quality, faster time-to-market, and greater customer satisfaction. By focusing on continuous improvement and adapting to change, agile teams can remain responsive to customer needs and competitive in an ever-changing market.</p>
<p>Jira is a popular project management tool that can be used to support agile development. It includes a number of features, such as Jira plans, boards, and dashboards, that allow teams to plan, track, and manage their work in a flexible and collaborative way.</p>
<p>Jira plans are high-level roadmaps that help teams to visualize their work over time, and to track progress towards their goals. Jira boards are visual representations of the work that needs to be done, and can be customized to reflect different workflows and priorities. Jira dashboards are customizable views that provide users with a high-level overview of project status and progress, and can be personalized to meet the needs of individual users or teams.</p>
<p>Some of the benefits of using Jira for agile development include improved collaboration, increased transparency, and better visibility into project status and progress. By using Jira, teams can stay organized, focused, and productive, and can deliver high-quality software products that meet the needs of their customers.</p>
<p>Finally, constant grooming and backlog ranking are important practices in agile development that help teams to stay focused on the most important work, and to deliver value to the customer in a timely manner. By regularly reviewing and updating the backlog, teams can ensure that they are working on the most important and valuable items, and can adjust their plans as needed to stay on track.</p>
<h1 id="chatgpt-a-world-breaker-technology">ChatGPT: a world-breaker technology</h1>
<p><em>2022-12-10, Carlos Camacho, <a href="https://www.pubstack.com/blog/2022/12/10/chatgpt">https://www.pubstack.com/blog/2022/12/10/chatgpt</a></em></p>
<p>ChatGPT is a large language model developed by OpenAI.
It is a variant of the GPT (Generative Pre-training Transformer) model,
which is trained on a massive amount of text data and is capable of
generating human-like text.</p>
<h2 id="benefits-of-chatgpt">Benefits of ChatGPT</h2>
<ul>
<li>ChatGPT’s ability to generate human-like text makes
it useful in a variety of applications such as chatbots,
language translation, and text summarization.</li>
<li>ChatGPT can be fine-tuned on specific tasks, such as
answering questions, providing information, or composing
coherent and grammatically correct text.</li>
<li>ChatGPT’s ability to understand and respond to context
allows it to generate text that is relevant to the input it receives.</li>
</ul>
<h2 id="potential-usage-of-chatgpt">Potential usage of ChatGPT</h2>
<ul>
<li>Chatbots for customer service, virtual assistance, and e-commerce.</li>
<li>Generating text for social media, email, and messaging platforms.</li>
<li>Improving language translation and summarization systems.</li>
<li>Helping with content creation, such as composing articles, stories, or poetry.</li>
<li>Generating responses in virtual reality and gaming
and many more.</li>
</ul>
<h2 id="technical-details">Technical details</h2>
<ul>
<li>ChatGPT is based on the transformer architecture.</li>
<li>It is trained on a massive amount of text data, which allows it to generate human-like text.</li>
<li>The cost to run ChatGPT will depend on the specific use case and the hardware it is running on.</li>
<li>The cost can be reduced by using the model in a “serverless” fashion, where the model is hosted by the provider and you only pay for the computation you use.</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>ChatGPT is a powerful and versatile language model that can be used to improve a wide range of natural language processing tasks. Its ability to generate human-like text, understand context, and perform various language-based tasks makes it an attractive option for developers and researchers looking to build advanced language-based applications.</p>
<p>This article was generated with ChatGPT.</p>
<h1 id="deploying-a-kubernetes-cluster-with-windows-containers-support">Deploying a Kubernetes cluster with Windows containers support</h1>
<p><em>2022-06-30, Carlos Camacho, <a href="https://www.pubstack.com/blog/2022/06/30/Kubernetes-cluster-with-Windows-containers-support">https://www.pubstack.com/blog/2022/06/30/Kubernetes-cluster-with-Windows-containers-support</a></em></p>
<p>Workloads running on top of Windows-based infrastructure still
represent a huge opportunity for ‘the cloud’, specific applications
like video-games development (Unity) rely heavily on the Microsoft Windows ecosystem
to work and be used among developers and customers. Hence the need
to provide a consistent cloud infrastructure for such Windows based software.</p>
<h1 id="tldr">TL;DR</h1>
<p>This post will show you how to deploy a Kubernetes cluster with
Windows containers support, and what to expect from it.</p>
<h2 id="the-good-the-bad-and-the-ugly">The good, the bad and the ugly</h2>
<p>From what is available in the documentation, support for Windows workloads
started in Kubernetes 1.5 (2017, ‘alpha’) and became stable in K8s 1.14 (2019),
but even at the time of writing this post, the documentation, the
container runtime support, and the CNI connectivity supporting this type of
workload are fairly limited. Before showing the actual 1-command deployment
magic, this post will go through a brief review of what was needed, and what was
done to get this ‘working’.</p>
<p>The following sections are an initial assessment of what is working, and of how easily these
functional components can be integrated in an automated fashion using Kubeinit.</p>
<h3 id="the-good">The good</h3>
<p>It works! With some specific limitations on the container runtimes that are
supported, and on the CNI plugins that support topologies like VXLAN tunnels,
you can have something working once you know what is currently supported for your distro.</p>
<p>There are a lot of resources spread across the Internet that will give you an idea of what
should work, and in some cases how to deploy it.</p>
<h4 id="useful-links">Useful links:</h4>
<p>This is a partial list of the resources checked to finish the integration of Windows workloads
in Kubeinit:</p>
<p>Blog posts and documentation:</p>
<ul>
<li>https://www.jamessturtevant.com/posts/Windows-Containers-on-Windows-10-without-Docker-using-Containerd/</li>
<li>https://github.com/lippertmarkus/vagrant-k8s-win-hostprocess</li>
<li>https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/common-problems</li>
<li>https://techcommunity.microsoft.com/t5/networking-blog/introducing-kubernetes-overlay-networking-for-windows/ba-p/363082</li>
<li>https://deepkb.com/CO_000014/en/kb/IMPORT-4bb99d54-1582-32fa-b130-b496089f7678/guide-for-adding-windows-nodes-in-kubernetes</li>
</ul>
<p>Official code from Kubernetes:</p>
<ul>
<li>https://github.com/kubernetes/kubernetes/issues/94924</li>
<li>https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/windows/k8s-node-setup.psm1</li>
</ul>
<p>Code from the CNI Microsoft team:</p>
<ul>
<li>https://github.com/Microsoft/SDN/blob/master/Kubernetes/windows/start-kubelet.ps1</li>
<li>https://github.com/microsoft/SDN/blob/master/Kubernetes/windows/helper.psm1</li>
<li>https://github.com/microsoft/SDN/tree/master/Kubernetes/flannel/overlay</li>
<li>https://github.com/microsoft/SDN/blob/master/Kubernetes/flannel/register-svc.ps1</li>
</ul>
<p>Code from the Windows K8s sig:</p>
<ul>
<li>https://github.com/kubernetes-sigs/sig-windows-dev-tools</li>
<li>https://github.com/kubernetes-sigs/sig-windows-tools/issues/128</li>
</ul>
<p>Code from the CNI dev team:</p>
<ul>
<li>https://github.com/containernetworking/plugins/blob/main/plugins/main/windows/win-overlay/sample-v2.conf</li>
</ul>
<h3 id="the-bad">The bad</h3>
<p>It is pretty cumbersome to work out all the steps, their ordering, and the combinations of services
that actually work. Also, depending on the distribution, you might be
forced to use, e.g., container runtimes that at the moment are simply not supported.</p>
<h3 id="the-ugly">The ugly</h3>
<p>While reading about what to install and how to configure it, I ended up in situations like the following.
The difference between sdnoverlay from
<a href="https://github.com/microsoft/windows-container-networking/releases/download/v0.3.0/windows-container-networking-cni-amd64-v0.3.0.zip">microsoft/windows-container-networking</a>
and winoverlay from
<a href="https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-windows-amd64-v1.1.1.tgz">containernetworking/plugins</a>
remains unknown to me (mostly because of my limited time to dig into it in more detail). They are supposed
to do the same thing, but they are plugins maintained by different teams under different names,
which created a fair amount of confusion for me during this initial integration.</p>
<p>We assume by default that components consumed from their “stable” releases will just work.
An example of this “not happening”: by default, I expected that the winoverlay CNI plugin would
work “as-is” on vanilla Kubernetes, because this same configuration is supported and working on OpenShift and OKD.
That assumption turned out to be false: at the time of writing this post (July 1st, 2022), the support for containerd and Windows compute nodes in this
<a href="https://github.com/containernetworking/plugins/pull/725">GitHub PR</a> is merged, but not released upstream (v1.1.1 does not have this change),
so whatever you pull from the released versions simply won’t work…</p>
<h2 id="architectural-considerations-and-deployment">Architectural considerations and deployment</h2>
<p>Before going ahead, this section will briefly introduce Kubeinit’s architectural reference,
so you know in advance what is deployed, where, and how.</p>
<p><img src="/static/kubeinit/arch/kubeinit_network_legend.png" alt="" />
The picture above shows the legend of the main functional components of the platform.</p>
<p>The left icon represents the services pod, which runs all the infrastructure services required to have the cluster up and running: services like HAProxy, Bind, and a local container registry, among others, that support the cluster’s externally required services.</p>
<p>The right icon represents the virtual machine instances that host each node of the cluster. These nodes can be control-plane nodes or worker nodes. The control-plane nodes run Linux and, depending on the Kubernetes distribution, have CentOS Stream, Debian, Fedora CoreOS, or CoreOS installed. The worker nodes run the same Linux distribution as the control-plane nodes, with the addition of the recently added Windows worker nodes.</p>
<p><img src="/static/kubeinit/arch/kubeinit_network_physical.png" alt="" /></p>
<p>Now, we have a representation of the physical topology of a deployed cluster. The requirement is to have a set of hypervisors where the cluster nodes (guests) will be deployed; these hypervisors must be connected in a way that they can all reach each other (for example, by connecting them to the same L2 segment). Another assumed-by-default requirement is passwordless
SSH access to the hypervisors from the node where you run Ansible.</p>
<p><img src="/static/kubeinit/arch/kubeinit_network_logical.png" alt="" /></p>
<p>The logical topology gives a more detailed view of how the components are actually ‘connected’,
allowing all the guests in the cluster (including the services pod) to be reachable regardless
of what is deployed where within the cluster.</p>
<p>In this particular case, we have installed OVS in each hypervisor and by using OVN we create an internal overlay
network to provide a consistent and uniform way to access any cluster resource.</p>
<h3 id="latest-kubeinits-support-for-windows-workloads">Latest Kubeinit’s support for Windows workloads</h3>
<p>With some context from the previous sections about what will be deployed, let’s go ahead
and test this awesome feature with a magical 1-command deployment.</p>
<blockquote>
<p>NOTE: Please check the complete instructions on the main
<a href="https://github.com/Kubeinit/kubeinit#readme">README</a> page.
Also, a good reference for the hypervisor requirements is the
<a href="https://github.com/Kubeinit/kubeinit/blob/main/ci/install_gitlab_node.sh">CI install script</a>;
there you will find what is required to set up the hypervisors on Debian/Ubuntu/CentOS Stream/Fedora.</p>
</blockquote>
<h3 id="deploying">Deploying</h3>
<pre><code class="language-bash"># Install the requirements assuming python3/pip3 is installed
pip3 install \
--upgrade \
pip \
shyaml \
ansible \
netaddr
# Get the project's source code
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit
# Install the Ansible collection requirements
ansible-galaxy collection install --force --requirements-file kubeinit/requirements.yml
# Build and install the collection
rm -rf ~/.ansible/collections/ansible_collections/kubeinit/kubeinit
ansible-galaxy collection build kubeinit --verbose --force --output-path releases/
ansible-galaxy collection install --force --force-with-deps releases/kubeinit-kubeinit-`cat kubeinit/galaxy.yml | shyaml get-value version`.tar.gz
# Run the deployment
ansible-playbook \
--user root \
-v \
-e kubeinit_spec="k8s-libvirt-1-1-1" \
-e kubeinit_libvirt_cloud_user_create=true \
-e hypervisor_hosts_spec='[[ansible_host=nyctea],[ansible_host=tyto]]' \
-e cluster_nodes_spec='[[when_group=compute_nodes,os=windows]]' \
-e compute_node_ram_size=16777216 \
./kubeinit/playbook.yml
</code></pre>
<p>If you would like to clean up your environment after using Kubeinit, just run the same
deployment command appending <code>-e kubeinit_stop_after_task=task-cleanup-hypervisors</code>;
that will clean up all the resources deployed by the installer.</p>
<blockquote>
<p>NOTE: If you hit an error when deploying the services pod like
‘TASK [kubeinit.kubeinit.kubeinit_services : Install python3] … unreachable Could not resolve host: mirrorlist.centos.org’,
make sure your DNS server is reachable. By default the 1.1.1.1
DNS server from Cloudflare is used, and it might be blocked in your internal network; run
<code>export KUBEINIT_COMMON_DNS_PUBLIC=&lt;your valid DNS&gt;</code> and
then run the deployment as usual.</p>
</blockquote>
<p>The ‘new’ parameter present when running the Ansible playbook is called
<code>cluster_nodes_spec</code>; this parameter determines
the OS (Operating System) of the compute nodes. Some features are still in progress
to fully allow customizing the setup and deciding how many nodes will be Windows based and how many will
have Linux installed.
The only supported version is Windows Server 2022 Datacenter Edition.
This Windows Server installation is based on the actual .ISO installer, so if users want
to use this in a more stable scenario they will need to register the cluster’s nodes.</p>
<p>Once the deployment finishes (with Windows compute nodes it takes around 50 minutes,
at least the first time, because all the .ISO images need to be downloaded),
you can VNC into your Windows compute nodes by first forwarding port 5900 like:</p>
<pre><code class="language-bash">[ccamacho@laptop]$ ssh root@nyctea -L 5900:127.0.0.1:5900
</code></pre>
<p>Then, from your workstation start a VNC session to 127.0.0.1:5900.</p>
<p>Or even better, you can SSH directly into your Windows computes like:</p>
<pre><code class="language-bash"># This will get you into the first controller node.
[ccamacho@nyctea]$ ssh -i .ssh/k8scluster_id_rsa root@10.0.0.1
# This will get you to your first compute node.
[ccamacho@nyctea]$ ssh -i .ssh/k8scluster_id_rsa root@10.0.0.2
# This will get you to the services pod.
[ccamacho@nyctea]$ ssh -i .ssh/k8scluster_id_rsa root@10.0.0.253
</code></pre>
<p>The default IP segment assigned to the cluster nodes is the 10.0.0.0/24 network,
where the IPs are assigned in order: first the controller nodes, then the computes, and the last
IP for the services pod. In this example, given that the kubeinit_spec is <code>k8s-libvirt-1-1-1</code>,
we will deploy a vanilla Kubernetes cluster on top of libvirt with one controller, one compute,
and a single hypervisor (listed in the same order as the spec content).</p>
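<p>To illustrate the convention (this little snippet is not part of Kubeinit, just a sketch of the naming scheme described above), the spec string can be decoded mechanically into its five fields:</p>

```shell
# Illustrative only: split a kubeinit_spec value of the form
# distro-driver-controllers-computes-hypervisors into its fields.
spec="k8s-libvirt-1-1-1"
old_ifs=$IFS; IFS=-
set -- $spec
IFS=$old_ifs
distro=$1; driver=$2; controllers=$3; computes=$4; hypervisors=$5
echo "distro=$distro driver=$driver"
echo "controllers=$controllers computes=$computes hypervisors=$hypervisors"
```

Swapping the first field for <code>okd</code>, or the counts for larger numbers, follows the same pattern used elsewhere in this post.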
<p>This same configuration is deployed periodically; you can check the status of the execution on the
<a href="https://ci.kubeinit.org/file/kubeinit-ci/jobs/okd-libvirt-1-1-1-h-periodic-pid-weekly-u/index.html">periodic job page</a>.</p>
<h3 id="considerations">Considerations</h3>
<p>After the cluster is deployed, one consideration is that from now on your
application deployments must specify the OS of the guests where the workloads will run.</p>
<p>The behavior I observed is that Linux deployments were scheduled onto the
Windows compute nodes, where they eventually just timed out.</p>
<p>So make sure your deployments use a nodeSelector like:</p>
<pre><code class="language-yaml">nodeSelector:
  kubernetes.io/os: linux
</code></pre>
<p>or</p>
<pre><code class="language-yaml">nodeSelector:
  kubernetes.io/os: windows
</code></pre>
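<p>For context, here is a minimal, illustrative Deployment manifest showing where the nodeSelector sits in a full spec; the name and image below are placeholders, not something Kubeinit deploys:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-linux-app        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-linux-app
  template:
    metadata:
      labels:
        app: sample-linux-app
    spec:
      # Pin this workload to Linux nodes so the scheduler never
      # tries to place it on the Windows compute nodes.
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: app
        image: nginx:alpine     # placeholder image
```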
<h2 id="conclusions">Conclusions</h2>
<p>While it was hard to make it work at first, having
Windows workloads integrated into some of the use cases
proves how easy it was to extend Kubeinit’s core architectural
structure with new types of nodes and new types of workloads.</p>
<p>This new use case enables other types of workloads that might
benefit usages of ‘the cloud’ completely different from the ones we were used to seeing.</p>
<p>If you are interested in checking the PR’s code, they are
<a href="https://github.com/Kubeinit/kubeinit/pull/664">664</a>,
<a href="https://github.com/Kubeinit/kubeinit/pull/668">668</a>,
<a href="https://github.com/Kubeinit/kubeinit/pull/669">669</a>,
<a href="https://github.com/Kubeinit/kubeinit/pull/672">672</a>, and
<a href="https://github.com/Kubeinit/kubeinit/pull/676">676</a>.</p>
<p>There might still be features not working properly, or connectivity issues between pods, as that is something I haven’t had time to properly
test so far.
The architectural <a href="https://drive.google.com/file/d/1l9AHJ_60SNWYFVPWgO4C__U0bwPJvVIT/view?usp=sharing">diagrams</a> are available
online for further edits and references.</p>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2022/07/01:</strong> Initial version and minor edits.</p>
</blockquote>
</div>
<h2 id="the-end">The end</h2>
<p>If you like this post, please try the code, raise issues, and ask for more details, features, or
anything you are interested in. Also, it would be awesome if you became a stargazer to catch up
on updates and new features.</p>
<p>This is the project’s main <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
Learning how to deploy OpenShift with KubeInitKubeInit is an Ansible collection to ease the deployment of multiple Kubernetes distributions. This post will show you how to use it to deploy OpenShift in your infrastructure. Note 2021/10/13: DEPRECATED - This tutorial only works with kubeinit 1.0.2 make...2021-03-12T00:00:00+00:00https://www.pubstack.com/blog/2021/03/12/Learning-how-to-deploy-OpenShift-with-KubeInitCarlos Camacho<p>KubeInit is an Ansible collection to ease
the deployment of multiple Kubernetes distributions.
This post will show you how to use it to deploy
OpenShift in your infrastructure.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a>; make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<h1 id="tldr">TL;DR;</h1>
<p>This post will show you the command and the parameters
that need to be configured to deploy OpenShift (4.7).</p>
<h3 id="prerequirements">Prerequisites</h3>
<p>Adjust your inventory file according to what you would like to deploy.
Please make sure you read the older posts to understand KubeInit’s deployment
workflow, or give the docs a try
at <a href="https://docs.kubeinit.com/">https://docs.kubeinit.com/</a>.</p>
<h3 id="openshift-registry-token">OpenShift registry token</h3>
<p>The next step is to fetch a valid list of registry tokens (pull secrets) from
<a href="https://cloud.redhat.com/openshift/install/pull-secret">https://cloud.redhat.com/openshift/install/pull-secret</a>.</p>
<p>You should get a long JSON object (a dictionary with the credential details)
that we need in order to adjust our deployment’s pull secrets, so we are able to “fetch”
the images accordingly.</p>
<p>The pull secret syntax should look like:</p>
<pre><code class="language-json">{
  "auths":{
    "cloud.openshift.com":{"auth":"TOKEN1_GOES_HERE","email":"email@example.com"},
    "quay.io":{"auth":"TOKEN2_GOES_HERE","email":"email@example.com"},
    "registry.connect.redhat.com":{"auth":"TOKEN3_GOES_HERE","email":"email@example.com"},
    "registry.redhat.io":{"auth":"TOKEN4_GOES_HERE","email":"email@example.com"}
  }
}
</code></pre>
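<p>Before splitting the tokens into the playbook variables, it can help to sanity-check the downloaded file. The snippet below is only an illustration (the file name <code>pull-secret.json</code> and the inlined dummy content are assumptions, not part of KubeInit):</p>

```shell
# Illustrative sanity check: list which registries a pull secret covers.
# A dummy pull secret is written here so the example is self-contained;
# with the real file, skip the cat and point python3 at your download.
cat > pull-secret.json <<'EOF'
{"auths":{"cloud.openshift.com":{"auth":"TOKEN1_GOES_HERE","email":"email@example.com"},"quay.io":{"auth":"TOKEN2_GOES_HERE","email":"email@example.com"}}}
EOF
python3 -c 'import json; print("\n".join(sorted(json.load(open("pull-secret.json"))["auths"])))'
```

If a registry you expect is missing from the output, the corresponding token variable in the deployment command will be empty.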
<h3 id="deploying">Deploying</h3>
<p>The deployment procedure is the same
as it is for all the other Kubernetes distributions that can be
deployed with KubeInit.</p>
<p>Please follow the <a href="http://docs.kubeinit.com/usage.html">usage documentation</a>
to understand the system requirements and the supported host
Linux distributions.</p>
<p>At the moment we will deploy OpenShift 4.7.0 (the latest release available);
if you need to deploy other releases, adjust the value of the
<code>kubeinit_okd_registry_release_tag</code> variable.</p>
<pre><code class="language-bash"># Choose the distro
distro=okd
# Run the deployment command
ansible-playbook \
-v \
--user root \
-i ./hosts/$distro/inventory \
--become \
--become-user root \
-e kubeinit_okd_openshift_deploy=True \
-e kubeinit_okd_openshift_registry_token_cloud_openshift_com="TOKEN1_GOES_HERE" \
-e kubeinit_okd_openshift_registry_token_quay_io="TOKEN2_GOES_HERE" \
-e kubeinit_okd_openshift_registry_token_registry_connect_redhat_com="TOKEN3_GOES_HERE" \
-e kubeinit_okd_openshift_registry_token_registry_redhat_io="TOKEN4_GOES_HERE" \
-e kubeinit_okd_openshift_registry_token_email="email@example.com" \
./playbooks/$distro.yml
</code></pre>
<p>Note: The variables required to override an
OpenShift deployment are
<code>kubeinit_okd_openshift_deploy</code>,
<code>kubeinit_okd_openshift_registry_token_cloud_openshift_com</code>,
<code>kubeinit_okd_openshift_registry_token_quay_io</code>,
<code>kubeinit_okd_openshift_registry_token_registry_connect_redhat_com</code>,
<code>kubeinit_okd_openshift_registry_token_registry_redhat_io</code>, and
<code>kubeinit_okd_openshift_registry_token_email</code>.</p>
<h3 id="conclusions">Conclusions</h3>
<p>Also deploying OpenShift demonstrates how
flexible KubeInit can be.
With a few changes we can also deploy downstream Kubernetes
distributions ready to be used for production-grade deployments.</p>
<p>Once <a href="https://github.com/Kubeinit/kubeinit/pull/219/files">#219</a>
is merged you should be able to run this.</p>
<h3 id="the-end">The end</h3>
<p>If you like this post, please try the code, raise issues, and ask for more details, features, or
anything you are interested in. Also, it would be awesome if you became a stargazer to catch up
on updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
Multihost deployments with KubeinitUntil now there was possible to deploy Kubeinit in a single host configuration, this means to deploy all the guest VMs in the same hypervisor. Now, we can decouple the deployment architecture in multiple hosts. Note 2021/10/13: DEPRECATED - This...2021-02-20T00:00:00+00:00https://www.pubstack.com/blog/2021/02/20/Multihost-deployment-with-kubeinitCarlos Camacho<p>Until now it was only possible to deploy Kubeinit in
a single host configuration, which means deploying
all the guest VMs on the same hypervisor.
Now, we can decouple the deployment architecture across multiple
hosts.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a>; make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<h1 id="tldr">TL;DR;</h1>
<p>This post will show how to adjust the inventory files to
deploy Kubeinit in multiple hosts.</p>
<h3 id="network-architecture">Network architecture</h3>
<p>From the official docs:
OVN (Open Virtual Network) is a series of daemons for Open vSwitch that translate
virtual network configurations into OpenFlow.
OVN is licensed under the open source Apache 2 license.</p>
<p>OVN provides a higher-layer of abstraction than Open vSwitch,
working with logical routers and logical switches, rather than flows.
OVN is intended to be used by cloud management software (CMS).</p>
<p>Open vSwitch is a free and open source multi-layer software switch,
which is used to manage the traffic between virtual machines and
physical or logical networks.</p>
<p>The following is the current network architecture of a Kubeinit deployment.</p>
<p><img src="/static/kubeinit/net/ovn.png" alt="" /></p>
<h3 id="adjusting-the-inventory-files">Adjusting the inventory files</h3>
<p>To deploy Kubeinit across multiple hosts, the only
changes that need to be made are in the inventory file.</p>
<p>In this case, by default there is only one hypervisor enabled (nyctea).</p>
<pre><code class="language-bash">[hypervisor_nodes]
hypervisor-01 ansible_host=nyctea
# hypervisor-02 ansible_host=tyto
# hypervisor-03 ansible_host=strix
# hypervisor-04 ansible_host=otus
</code></pre>
<p>The only action that needs to be taken is to uncomment the lines that
enable the extra hosts required.</p>
<p>The next step is to determine where to deploy each guest.
In this example all guests are deployed to hypervisor-01;
make adjustments as required.</p>
<pre><code class="language-bash">[okd_master_nodes]
okd-master-01 ansible_host=10.0.0.1 mac=52:54:00:34:84:26 interfaceid=47f2be09-9cde-49d5-bc7b-76189dfcb8a9 target=hypervisor-01 type=virtual
okd-master-02 ansible_host=10.0.0.2 mac=52:54:00:53:75:61 interfaceid=fb2028cf-dfb9-4d17-827d-3fae36cb3e98 target=hypervisor-01 type=virtual
okd-master-03 ansible_host=10.0.0.3 mac=52:54:00:96:67:20 interfaceid=d43b705e-86ce-4955-bbf4-3888210af82e target=hypervisor-01 type=virtual
</code></pre>
<p>For example, let’s deploy each master node in a different hypervisor:</p>
<pre><code class="language-bash">[okd_master_nodes]
okd-master-01 ansible_host=10.0.0.1 mac=52:54:00:34:84:26 interfaceid=47f2be09-9cde-49d5-bc7b-76189dfcb8a9 target=hypervisor-01 type=virtual
okd-master-02 ansible_host=10.0.0.2 mac=52:54:00:53:75:61 interfaceid=fb2028cf-dfb9-4d17-827d-3fae36cb3e98 target=hypervisor-02 type=virtual
okd-master-03 ansible_host=10.0.0.3 mac=52:54:00:96:67:20 interfaceid=d43b705e-86ce-4955-bbf4-3888210af82e target=hypervisor-03 type=virtual
</code></pre>
<p>Now, okd-master-01 will be deployed to hypervisor-01, okd-master-02 will be deployed to hypervisor-02,
and okd-master-03 will be deployed to hypervisor-03.</p>
<h3 id="requirements">Requirements</h3>
<p>To deploy Kubeinit in multiple hypervisors, the only requirement is
to have passwordless root access to the hosts from the machine executing <code>ansible-playbook</code>.</p>
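<p>As a sketch of how to set that up (the hostname nyctea comes from this post’s examples, and the key path is arbitrary), you can generate a dedicated key pair and copy it to each hypervisor:</p>

```shell
# Start from a clean slate so key generation does not prompt.
rm -f /tmp/kubeinit_id_rsa /tmp/kubeinit_id_rsa.pub
# Generate a dedicated key pair for the deployment (the path is illustrative).
ssh-keygen -t rsa -N '' -f /tmp/kubeinit_id_rsa
# Then, for each hypervisor in the inventory, e.g.:
#   ssh-copy-id -i /tmp/kubeinit_id_rsa.pub root@nyctea
# And verify passwordless access before running the playbook:
#   ssh -i /tmp/kubeinit_id_rsa root@nyctea true
ls /tmp/kubeinit_id_rsa.pub
```

Any existing key works just as well; the point is only that <code>ansible-playbook</code> must reach every hypervisor as root without a password prompt.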
<h3 id="deploying">Deploying</h3>
<p>The deployment procedure is the same
as it is for all the other Kubernetes distributions that can be
deployed with KubeInit.</p>
<p>Please follow the <a href="http://docs.kubeinit.com/usage.html">usage documentation</a>
to understand the system requirements and the supported host
Linux distributions.</p>
<pre><code class="language-bash"># Choose the distro
distro=okd
# Run the deployment command
git clone https://github.com/kubeinit/kubeinit.git
cd kubeinit
ansible-playbook \
--user root \
-v -i ./hosts/$distro/inventory \
--become \
--become-user root \
./playbooks/$distro.yml
</code></pre>
<h3 id="conclusions">Conclusions</h3>
<p>Being able to decouple the cluster across multiple hosts allows us
to scale and deploy production-ready environments for different use cases.
The overlay network deployed with OVN across the hosts provides a simple abstraction
layer that connects all the guests in the cluster.</p>
<p>Special thanks to
<a href="http://dani.foroselectronica.es/">@dalvarez</a>
and
<a href="https://github.com/danielmellado">@dmellado</a>
for their help and insights.</p>
<h3 id="the-end">The end</h3>
<p>If you like this post, please try the code, raise issues, and ask for more details, features, or
anything you are interested in. Also, it would be awesome if you became a stargazer to catch up
on updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
How to deploy Amazon EKS-D on top of a Libvirt host with KubeInit in 15 minutesAnd there is a new distro in town, today we will speak about Amazon EKS Distro (EKS-D), a Kubernetes distribution based on Amazon Elastic Kubernetes Service (Amazon EKS) and how to deploy it in a Libvirt host with almost or...2020-12-07T00:00:00+00:00https://www.pubstack.com/blog/2020/12/07/How-to-deploy-Amazon-EKS-D-in-a-Libvirt-hostCarlos Camacho<p>And there is a new distro in town, today we will speak about
Amazon EKS Distro (EKS-D), a Kubernetes distribution based on
Amazon Elastic Kubernetes Service (Amazon EKS),
and how to deploy it on a Libvirt host with almost zero
effort in a few minutes.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a>; make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<h1 id="tldr">TL;DR;</h1>
<p>We will use KubeInit to deploy a Kubernetes cluster based on Amazon’s EKS distribution.
<strong>Disclaimer:</strong> This is not completely implemented, as there are still some EKS-D images
to be added to the deployment.</p>
<h3 id="components">Components</h3>
<p>Here is a list of the components that are currently deployed:</p>
<ul>
<li>Guest OS: CentOS 8 (8.2.2004)</li>
<li>Kubernetes distribution: EKS-D</li>
<li>Infrastructure provider: Libvirt</li>
<li>A service machine with the following services:
<ul>
<li>HAProxy: 1.8.23 2019/11/25</li>
<li>Apache: 2.4.37</li>
<li>NFS (nfs-utils): 2.3.3</li>
<li>DNS (bind9): 9.11.13</li>
<li>Disconnected docker registry: v2</li>
<li>Skopeo: 0.1.40</li>
</ul>
</li>
<li>Control plane services:
<ul>
<li>Kubelet 1.18.4</li>
<li>CRI-O: 1.18.4</li>
<li>Podman: 1.6.4</li>
</ul>
</li>
<li>Controller nodes: 3</li>
<li>Worker nodes: 1</li>
</ul>
<h3 id="deploying">Deploying</h3>
<p>The deployment procedure is the same
as it is for all the other Kubernetes distributions that can be
deployed with KubeInit.</p>
<p><strong>Note:</strong> Make sure you can connect to your hypervisor (called nyctea)
with passwordless access.</p>
<p>Please follow the <a href="http://docs.kubeinit.com/usage.html">usage documentation</a>
to understand the system requirements and the supported host
Linux distributions.</p>
<pre><code class="language-bash"># Choose the distro
distro=eks
# Run the deployment command
git clone https://github.com/kubeinit/kubeinit.git
cd kubeinit
ansible-playbook \
--user root \
-v -i ./hosts/$distro/inventory \
--become \
--become-user root \
./playbooks/$distro.yml
</code></pre>
<p>You can also <a href="https://www.pubstack.com/blog/2020/09/11/Deploying-KubeInit-from-a-container.html">run it from a container</a>
to avoid compatibility issues between your set up and the required libraries.</p>
<p>By default, this will deploy a cluster with 3 controllers and 1 compute node.</p>
<p>The deployment time was fairly quick (around 15 minutes):</p>
<pre><code class="language-bash">.
.
.
" description: snapshot-validation-webhook container image",
" image:",
" uri: public.ecr.aws/eks-distro/kubernetes-csi/external-snapshotter/snapshot-validation-webhook:v3.0.2-eks-1-18-1",
" name: snapshot-validation-webhook-image",
" os: linux",
" type: Image",
" gitTag: v3.0.2",
" name: external-snapshotter",
" date: \"2020-12-01T00:05:35Z\""
]
}
META: ran handlers
META: ran handlers
PLAY RECAP *****************************************************************************************************************
hypervisor-01 : ok=188 changed=93 unreachable=0 failed=0 skipped=43 rescued=0 ignored=4
real 17m12.889s
user 1m24.846s
sys 0m24.366s
</code></pre>
<p>Let’s run some commands in the cluster.</p>
<pre><code class="language-bash">[root@eks-service-01 ~]# curl --user registryusername:registrypassword https://eks-service-01.clustername0.kubeinit.local:5000/v2/_catalog
{
"repositories":[
"aws-iam-authenticator",
"coredns",
"csi-snapshotter",
"eks-distro/coredns/coredns",
"eks-distro/etcd-io/etcd",
"eks-distro/kubernetes/go-runner",
"eks-distro/kubernetes/kube-apiserver",
"eks-distro/kubernetes/kube-controller-manager",
"eks-distro/kubernetes/kube-proxy",
"eks-distro/kubernetes/kube-proxy-base",
"eks-distro/kubernetes/kube-scheduler",
"eks-distro/kubernetes/pause",
"eks-distro/kubernetes-csi/external-attacher",
"eks-distro/kubernetes-csi/external-provisioner",
"eks-distro/kubernetes-csi/external-resizer",
"eks-distro/kubernetes-csi/external-snapshotter/csi-snapshotter",
"eks-distro/kubernetes-csi/external-snapshotter/snapshot-controller",
"eks-distro/kubernetes-csi/external-snapshotter/snapshot-validation-webhook",
"eks-distro/kubernetes-csi/livenessprobe",
"eks-distro/kubernetes-csi/node-driver-registrar",
"eks-distro/kubernetes-sigs/aws-iam-authenticator",
"eks-distro/kubernetes-sigs/metrics-server",
"etcd",
"external-attacher",
"external-provisioner",
"external-resizer",
"go-runner",
"kube-apiserver",
"kube-controller-manager",
"kube-proxy",
"kube-proxy-base",
"kube-scheduler",
"livenessprobe",
"metrics-server",
"node-driver-registrar",
"pause",
"snapshot-controller",
"snapshot-validation-webhook"
]
}
</code></pre>
<p>And check some of the deployed resources.</p>
<pre><code class="language-bash">[root@eks-service-01 ~]# kubectl describe pods etcd-eks-master-01.kubeinit.local -n kube-system
Name: etcd-eks-master-01.kubeinit.local
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: eks-master-01.kubeinit.local/10.0.0.1
Start Time: Sun, 06 Dec 2020 19:51:25 +0000
Labels: component=etcd
tier=control-plane
Annotations: kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.0.0.1:2379
kubernetes.io/config.hash: 3be258678a84985dbdb9ae7cb90c6a97
kubernetes.io/config.mirror: 3be258678a84985dbdb9ae7cb90c6a97
kubernetes.io/config.seen: 2020-12-06T19:51:18.652592779Z
kubernetes.io/config.source: file
Status: Running
IP: 10.0.0.1
IPs:
IP: 10.0.0.1
Controlled By: Node/eks-master-01.kubeinit.local
Containers:
etcd:
Container ID: cri-o://7a52bd0b80feb8c861c502add4c252e83c7e4a1f904a376108e3f6f787fd342c
Image: eks-service-01.clustername0.kubeinit.local:5000/etcd:v3.4.14-eks-1-18-1
</code></pre>
<h3 id="conclusions">Conclusions</h3>
<p>Whether or not this new distribution is useful for your use cases, there is
certainly value in having the same architecture and service consistency
both in the cloud and on-premise.</p>
<h3 id="the-end">The end</h3>
<p>If you like this post, please try the code, raise issues, and ask for more details, features, or
anything you are interested in. Also, it would be awesome if you became a stargazer to catch up
on updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
KubeInit 4-in-1 - Deploying multiple Kubernetes distributions (K8S, OKD, RKE, and CDK) with the same platformOne of the KubeInit’s pillars is to define a common framework to deploy multiple Kubernetes distributions, once you finish the deployment you should be able to use the specific distro tooling to mange the lifecycle of your deployment (or multiple...2020-10-19T00:00:00+00:00https://www.pubstack.com/blog/2020/10/19/KubeInit-4-in-1-Deploying-multiple-Kubernetes-distributions-K8S-OKD-RKE-and-CDK-with-the-same-platformCarlos Camacho<p>One of the KubeInit’s pillars is to define a common framework to deploy
multiple Kubernetes distributions. Once you finish the deployment, you should
be able to use the specific distro tooling to manage the lifecycle of your deployment
(or multiple deployments).</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a>; make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<p>Using KubeInit you should be able to reuse a common set of third-party services
and infrastructure deployment assets with any already integrated distro.</p>
<p>The Kubernetes distributions that can currently be deployed are:</p>
<ul>
<li>Origin Kubernetes Distribution</li>
<li>Kubernetes</li>
<li>Rancher Kubernetes Distribution</li>
<li>Canonical Distribution of Kubernetes</li>
</ul>
<h1 id="tldr">TL;DR;</h1>
<p><strong>Disclaimer:</strong> This does not fully work yet.
Several scenarios might be broken: the DNS might not work properly
and the HAProxy service might be failing too, which is why this is not documented
in the <a href="https://docs.kubeinit.com">official docs</a>. Still, the deployment itself
should finish successfully. Any contribution is, and always will be, welcome.</p>
<h3 id="the-roles-and-playbooks-structure">The roles and playbooks structure</h3>
<p>Every supported distro has a role folder named <code>kubeinit_&lt;distro&gt;</code>, that is,
<code>kubeinit_okd</code>, <code>kubeinit_k8s</code>, <code>kubeinit_rke</code>, and <code>kubeinit_cdk</code>.</p>
<p>Then, there is a specific playbook for each distro, named after the distribution’s
initials: <code>okd</code>, <code>k8s</code>, <code>rke</code>, and <code>cdk</code>.</p>
<p>This means that the workflow to deploy any distro is the same for all of them.</p>
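<p>The naming convention can be sketched with a couple of shell variables (illustrative only; paths are relative to the repository root):</p>
<pre><code class="language-bash"># Derive the role directory and the playbook path from the distro short name.
distro=okd
role_dir="kubeinit_${distro}"
playbook="playbooks/${distro}.yml"
echo "$role_dir -> $playbook"
</code></pre>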
<h3 id="deploying">Deploying</h3>
<p>Choose the Kubernetes distribution you will use:</p>
<ul>
<li>Origin Kubernetes Distribution: okd</li>
<li>Kubernetes: k8s</li>
<li>Rancher Kubernetes Distribution: rke</li>
<li>Canonical Distribution of Kubernetes: cdk</li>
</ul>
<pre><code class="language-bash"># Choose the distro
# distro=k8s
# distro=rke
# distro=cdk
distro=okd
# Run the deployment command
git clone https://github.com/kubeinit/kubeinit.git
cd kubeinit
ansible-playbook \
--user root \
-v -i ./hosts/$distro/inventory \
--become \
--become-user root \
./playbooks/$distro.yml
</code></pre>
<h3 id="conclusions">Conclusions</h3>
<p>Being able to deploy multiple Kubernetes distributions in an easy, quick, and
reproducible way, through the same interface, lets users test and evaluate them
to see which one (or which ones) best fit their use cases.</p>
<h3 id="the-end">The end</h3>
<p>If you like this post, please try the code, raise issues, and ask for more details, features or
anything that you feel interested in. Also it would be awesome if you become a stargazer to catch up
updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
<hr />
<p><img src="/static/kubeinit/yaml.jpeg" alt="" /></p>
Deploying multiple KubeInit clusters in the same hypervisorThe Ansible inventory file defines the hosts and groups of hosts upon which commands, modules, and tasks in a playbook operate. This post explains how to update the inventory files to allow deploying multiple KubeInit...2020-10-04T00:00:00+00:00https://www.pubstack.com/blog/2020/10/04/Multiple-KubeInit-clusters-in-the-same-hypervisorCarlos Camacho<p>The Ansible inventory file defines the hosts and groups of hosts upon which commands,
modules, and tasks in a playbook operate.
This post explains how to update the inventory files to allow
deploying multiple KubeInit clusters in the same hypervisor.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a> make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<h1 id="tldr">TL;DR;</h1>
<p>We will show the required changes to the inventory file to
deploy more than one KubeInit cluster in the same host.</p>
<h3 id="basic-information">Basic information</h3>
<p>All the changes required to achieve the goal of this post are made in
a single file.</p>
<p>As an example, in this post we will use the <a href="https://github.com/Kubeinit/kubeinit/blob/master/kubeinit/hosts/okd/inventory">OKD inventory file</a>.</p>
<h3 id="steps">Steps</h3>
<p>The following steps are required to deploy a second KubeInit cluster (or as many as required) in the same host.</p>
<h4 id="1-duplicate-the-main-inventory-file">1. Duplicate the main inventory file.</h4>
<pre><code># new_id must be an integer
new_id=2
cp inventory inventory$new_id
</code></pre>
<h4 id="2-adjust-network-parameters">2. Adjust network parameters.</h4>
<p>The default internal network is 10.0.0.0/24,
so we need to change it to a new range.</p>
<p>We will change the prefix 10.0.0 to 10.0.2 (matching <em>new_id=2</em> from step 1).</p>
<pre><code>sed -i "s/10\.0\.0/10\.0\.$new_id/g" inventory$new_id
</code></pre>
<h4 id="3-adjust-the-network-and-bridges-names">3. Adjust the network and bridges names.</h4>
<p>We will create new bridges and networks for the new
deployment.</p>
<p>We will change, for example, kimgtnet0 to kimgtnet2 (matching <em>new_id=2</em> from step 1).</p>
<pre><code>sed -i "s/kimgtnet0/kimgtnet$new_id/g" inventory$new_id
sed -i "s/kimgtbr0/kimgtbr$new_id/g" inventory$new_id
sed -i "s/kiextbr0/kiextbr$new_id/g" inventory$new_id
</code></pre>
<h4 id="4-replace-the-hosts-mac-addresses-for-new-addresses">4. Replace the hosts’ MAC addresses with new addresses.</h4>
<p>We will randomly replace the MAC addresses in all
guest definitions. The following command shuffles
the MAC addresses in the file each time it is executed.
<em>Note:</em> awk does not support hexadecimal number operations
and cannot insert the colons directly, hence the temporary <code>,,,</code>
separator that sed converts to colons afterwards.</p>
<pre><code>awk -v seed="$RANDOM" '
BEGIN { srand(seed) }
{ while(sub(/52:54:00:([[:xdigit:]]{1,2}:){2}[[:xdigit:]]{1,2}/,
"52,,,54,,,00,,,"int(10+rand()*(99-10+1))",,,"int(10+rand()*(99-10+1))",,,"int(10+rand()*(99-10+1))));
print > "tmp"
}
END { print "MAC shuffled" }
' "inventory$new_id"
mv tmp inventory$new_id
sed -i "s/,,,/:/g" inventory$new_id
</code></pre>
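<p>As a side note, a single random locally-administered address in the same <code>52:54:00</code> prefix can also be generated with bash and <code>printf</code>, which does handle hexadecimal formatting (a sketch; the awk command above is still what rewrites the whole file):</p>
<pre><code class="language-bash"># Generate one random MAC address in the 52:54:00 prefix.
mac=$(printf '52:54:00:%02x:%02x:%02x' \
  $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "$mac"
</code></pre>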
<h4 id="5-change-the-guest-names">5. Change the guest names.</h4>
<p>Guest VMs are cleaned up every time the host is provisioned; if their names are
not updated, they will be removed on every run.</p>
<p>We will change the prefix okd- to okd2- (matching <em>new_id=2</em> from step 1).</p>
<pre><code>sed -i "s/okd-/okd$new_id-/g" inventory$new_id
sed -i "s/clustername0/clustername$new_id/g" inventory$new_id
</code></pre>
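<p>As a quick sanity check, the rewrites from steps 2 to 5 can be previewed against a single sample line (the line content below is illustrative, not taken from the real inventory):</p>
<pre><code class="language-bash">new_id=2
line="okd-master-01 ansible_host=10.0.0.1 mac=52:54:00:aa:6c:b1 net=kimgtnet0"
rewritten=$(echo "$line" \
  | sed "s/10\.0\.0/10\.0\.$new_id/g" \
  | sed "s/kimgtnet0/kimgtnet$new_id/g" \
  | sed "s/okd-/okd$new_id-/g")
echo "$rewritten"
# okd2-master-01 ansible_host=10.0.2.1 mac=52:54:00:aa:6c:b1 net=kimgtnet2
</code></pre>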
<h4 id="6-run-the-deployment-command-using-the-new-inventory-file">6. Run the deployment command using the new inventory file.</h4>
<p>The deployment command remains exactly as it was;
just update it to reference the new inventory file.</p>
<pre><code># Use the following inventory in your deployment command
-i ./hosts/okd/inventory$new_id
</code></pre>
<h4 id="7-cleaning-up-the-host">7. Cleaning up the host.</h4>
<p>Just in case you need to clean things up.</p>
<p><strong>Warning:</strong> This will destroy any guest in the
host, run with caution.</p>
<pre><code>for i in $(virsh -q list | awk '{ print $2 }'); do
virsh destroy $i;
virsh undefine $i --remove-all-storage;
done;
</code></pre>
<h3 id="conclusions">Conclusions</h3>
<p>Being able to deploy multiple clusters in the same hypervisor allows you
to test multiple architectures and scenarios without having to completely
re-provision the environment.</p>
<h3 id="the-end">The end</h3>
<p>If you like this post, please try the code, raise issues, and ask for more details, features or
anything that you feel interested in. Also it would be awesome if you become a stargazer to catch up
updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
Persistent volumes and claims in KubeInitPods are ephemeral and any information stored in them is not persistent: every time you restart or recreate pods from the same application, any internal data will be lost. Note 2021/10/13: DEPRECATED - This tutorial only works...2020-09-28T00:00:00+00:00https://www.pubstack.com/blog/2020/09/28/Persistent-volumes-and-claims-in-KubeInitCarlos Camacho<p>Pods are ephemeral, and any information stored in them
is not persistent: every time you restart or
recreate the pods of an application, any internal data is
lost.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a> make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<p>The solution is to use persistent volumes, so pods can keep
their data across restarts. A volume is YAKR (Yet Another Kubernetes Resource).</p>
<p><img src="/static/kubeinit/pv/stone_pv.png" alt="" /></p>
<h1 id="tldr">TL;DR;</h1>
<p>We will create a static PV/PVC to be used with an example application.</p>
<h3 id="basic-information">Basic information</h3>
<p>From [1], let’s define some basic concepts.</p>
<p><img src="/static/kubeinit/pv/cs_storage_pvc_pv.png" alt="" /></p>
<ul>
<li>
<p>Cluster: By default, every cluster is set up with a plug-in to provision file storage.
You can choose to install other add-ons, such as the one for block storage.
To use storage in a cluster, you must create a persistent volume claim, a persistent volume and a physical storage instance.
When you delete the cluster, you have the option to delete related storage instances.</p>
</li>
<li>
<p>App: To read from and write to your storage instance, you must mount the persistent volume claim (PVC) to your app.
Different storage types have different read-write rules. For example, you can mount multiple pods to the same PVC for file storage.
Block storage comes with a RWO (ReadWriteOnce) access mode so that you can mount the storage to one pod only.</p>
</li>
<li>
<p>Persistent volume claim (PVC): A PVC is the request to provision persistent storage with a specific type and configuration.
To specify the persistent storage flavor that you want, you use Kubernetes storage classes.
The cluster admin can define storage classes.
When you create a PVC, the request is sent to the storage provider.
If the requested configuration does not exist, the storage is not created.</p>
</li>
<li>
<p>Persistent volume (PV): A PV is a virtual storage instance that is added as a volume to the cluster.
The PV points to a physical storage device in your account and abstracts the API that is used to communicate with the storage device.
To mount a PV to an app, you must have a matching PVC. Mounted PVs appear as a folder inside the container’s file system.</p>
</li>
<li>
<p>Physical storage: A physical storage instance that you can use to persist your data.
Examples of physical storage in Kubernetes clusters include File Storage, Block Storage, Object Storage, and local worker node storage.
However, data that is stored on a physical storage instance is not backed up automatically.
Depending on the type of storage that you use, different methods exist to set up backup and restore solutions.</p>
</li>
</ul>
<h3 id="static-provisioning">Static provisioning</h3>
<p>If you have an existing persistent storage device,
you can use static provisioning to make the storage instance available to your cluster.</p>
<h4 id="how-does-it-work">How does it work?</h4>
<p>Static provisioning is a feature that is native to Kubernetes and that allows cluster administrators
to make existing storage devices available to a cluster. As a cluster administrator, you must know
the details of the storage device, its supported configurations, and mount options.
To make existing storage available to a cluster user, you must manually create the storage device, a PV, and a PVC.</p>
<h4 id="setting-up-a-static-pvpvc-in-a-kubeinits-nfs-share">Setting up a static PV/PVC in a KubeInit’s NFS share</h4>
<p>Execute all the following steps from the service machine.</p>
<pre><code># We create a new folder in the main NFS share
mkdir -p /var/nfsshare/test-nfs
chmod -R 777 /var/nfsshare/test-nfs
chown -R nobody:nobody /var/nfsshare/test-nfs
# We define the resources for the PV, PVC, and an example pod to use them
cat << EOF > ~/test_nfs_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: test-nfs-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
storageClassName: nfs01testnfs
persistentVolumeReclaimPolicy: Retain
nfs:
path: /var/nfsshare/test-nfs
server: 10.0.0.100
EOF
cat << EOF > ~/test_nfs_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-nfs-pvc
spec:
volumeName: test-nfs-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: nfs01testnfs
volumeMode: Filesystem
EOF
cat <<EOF > ~/pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nfs-test
labels:
name: frontendhttp
spec:
containers:
- name: nginx
image: nginx:1.7.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
name: http-server
volumeMounts:
- mountPath: /var/nfsshare/testmount
name: pvol
volumes:
- name: pvol
persistentVolumeClaim:
claimName: test-nfs-pvc
EOF
# We create those resources
export KUBECONFIG=~/install_dir/auth/kubeconfig
oc create -f ~/test_nfs_pv.yaml
oc create -f ~/test_nfs_pvc.yaml
oc create -f ~/pod.yaml
# We test the PV is bound to the PVC
oc get pv
oc get pvc
kubectl get pods
showmount -e 10.0.0.100
# Now we check that our test application is mounting the volume correctly
# As you can see we have the NFS PV mounted in /var/nfsshare/testmount
# we connect to the container and put something in the PV
kubectl exec --stdin --tty nfs-test -- /bin/bash
echo "hello world" > /var/nfsshare/testmount/asdf
exit
cat /var/nfsshare/test-nfs/asdf
# hello world
</code></pre>
<p>This shows how easy it is to create persistent volumes and claims to be used in KubeInit.</p>
<h3 id="next-steps">Next steps</h3>
<p>Dynamic provisioning is a feature that is native to Kubernetes and that allows a cluster developer
to order storage with a pre-defined type and configuration without knowing all the details about how
to provision the physical storage device. To abstract the details for the specific storage type, the
cluster admin must create storage classes that the developer can use.</p>
<p>Ideally, persistent volumes should be assigned dynamically. In the future
this should natively be part of KubeInit.</p>
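<p>As an illustration only, dynamic provisioning replaces the hand-made PV with a storage class that a provisioner fulfills on demand. The provisioner name below is a placeholder; an external provisioner (for example an NFS one) would have to be installed separately, as KubeInit does not ship one:</p>
<pre><code class="language-bash"># Hypothetical StorageClass for dynamic provisioning; 'example.com/nfs'
# is a placeholder provisioner name, not something KubeInit provides.
cat << EOF > ./test_nfs_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs
reclaimPolicy: Retain
EOF
# A PVC would then set storageClassName: nfs-dynamic and omit volumeName,
# letting the provisioner create the matching PV automatically.
</code></pre>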
<h3 id="references">References</h3>
<ol>
<li><a href="https://cloud.ibm.com/docs/containers?topic=containers-kube_concepts">https://cloud.ibm.com/docs/containers?topic=containers-kube_concepts</a></li>
</ol>
<h3 id="the-end">The end</h3>
<p>If you like this post, please try the code, raise issues, and ask for more details, features or
anything that you feel interested in. Also it would be awesome if you become a stargazer to catch up
updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
<hr />
<p><img src="/static/kubeinit/pv/meme.jpg" alt="" /></p>
Deploying KubeInit from a containerThere are some use cases where users have old library versions, old environments, or in general, difficulties executing the ansible-playbook command to deploy KubeInit, due to unrelated issues. The following steps will help users deploy KubeInit by launching...2020-09-11T00:00:00+00:00https://www.pubstack.com/blog/2020/09/11/Deploying-KubeInit-from-a-containerCarlos Camacho<p>In some use cases users have old library versions,
old environments, or, in general, difficulties executing the
ansible-playbook command to deploy KubeInit, due to unrelated
issues. The following steps will help such users deploy KubeInit by
launching the ansible-playbook command from a container.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a> make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<p><img src="/static/kubeinit/kubeinit_in_docker.png" alt="" /></p>
<h1 id="tldr">TL;DR;</h1>
<p>We will describe how to run KubeInit within a container.</p>
<h3 id="requirements">Requirements</h3>
<ul>
<li>Having docker or podman installed.</li>
<li>If you do not have podman (which is what the commands below use),
replace podman with docker in all of them.</li>
</ul>
<h3 id="deploying-kubeinit-within-a-container">Deploying KubeInit within a container</h3>
<h4 id="building-the-image">Building the image</h4>
<p>We will clone the repository as usual, and build the container image.</p>
<pre><code class="language-bash">git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit
podman build -t kubeinit/kubeinit .
</code></pre>
<h4 id="run-the-deployment-command-from-the-recently-created-container">Run the deployment command from the recently created container</h4>
<p><strong><em>NOTE:</em></strong> Beware of the <a href="https://www.redhat.com/sysadmin/user-namespaces-selinux-rootless-containers">:z flag if you have SELinux enabled</a>:
you must use it, otherwise you will get a permission denied error, as the private key will not be mounted correctly.</p>
<p><strong><em>NOTE:</em></strong> Each time a change is included in any code inside
the repository <strong>BUILD THE IMAGE YOU MUST</strong>…</p>
<pre><code class="language-bash">podman run --rm -it \
-v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
-v /etc/hosts:/etc/hosts \
kubeinit/kubeinit \
--user root \
-v -i ./hosts/okd/inventory \
--become \
--become-user root \
./playbooks/okd.yml
</code></pre>
<p>As the previous command shows, from the 5th line onwards it is
exactly what we would pass to <code>ansible-playbook</code> when executing
it directly.</p>
<p>What we do is set <code>ansible-playbook</code> as the container’s
ENTRYPOINT, so you can append any argument that would normally be part
of the <code>ansible-playbook</code> command.</p>
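<p>Conceptually, the image behaves like the following wrapper function (a sketch; the real ENTRYPOINT is defined in the repository’s container image definition):</p>
<pre><code class="language-bash"># Every argument given to the container is passed straight through to
# ansible-playbook; here we only echo the command that would be run.
kubeinit_container() {
  echo "ansible-playbook $*"
}
kubeinit_container --user root -i ./hosts/okd/inventory ./playbooks/okd.yml
# ansible-playbook --user root -i ./hosts/okd/inventory ./playbooks/okd.yml
</code></pre>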
<p>Easy as always: with a single deployment command
you should have your cluster in approximately 30 minutes.</p>
<h3 id="pros-and-cons-of-executing-ansible-playbook-within-a-container">Pros and Cons of executing ansible-playbook within a container</h3>
<p>This is a very opinionated section: running from containers
adds extra steps that may or may not be worthwhile,
depending on your environment.</p>
<h4 id="pros">Pros</h4>
<ul>
<li>No dependencies on the host.</li>
<li>Easy to run if there is no other change to make inside the collection.</li>
</ul>
<h4 id="cons">Cons</h4>
<ul>
<li>Debugging will always be harder.</li>
<li>Another layer of abstraction that might be hard to understand.</li>
<li>More time spent building the image and deploying.</li>
<li>Each time you change any part of the code, you need to rebuild the image.
This can be really painful, as every small change forces you to spend
extra, possibly unneeded, time before you can run the code again.</li>
</ul>
<h4 id="the-end">The end</h4>
<p>If you like this post, please try the code, raise issues, and ask for more details, features or
anything that you feel interested in. Also it would be awesome if you become a stargazer to catch up
updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
<hr />
<p><img src="/static/pod/k8s.jpg" alt="" /></p>
KubeInit External access for OpenShift/OKD deployments with LibvirtThis post describes the basic network architecture when OKD is deployed using KubeInit in a KVM host. Note 2021/10/13: DEPRECATED - This tutorial only works with kubeinit 1.0.2 make sure you use this version of the...2020-08-25T00:00:00+00:00https://www.pubstack.com/blog/2020/08/25/KubeInit-External-access-for-OpenShift-OKD-deployments-with-LibvirtCarlos Camacho<p>This post describes the basic network architecture when OKD is
deployed using <a href="https://github.com/kubeinit/kubeinit">KubeInit</a> on a KVM host.</p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a> make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<h2 id="tldr">TL;DR;</h2>
<p>We will describe how to extend the basic network configuration to provide
external access to the cluster services by adding an external IP to the
service machine.</p>
<h3 id="what-is-kubeinit">What is KubeInit?</h3>
<p>KubeInit provides Ansible playbooks and roles for the deployment and
configuration of multiple Kubernetes distributions.</p>
<p>The main goal of KubeInit is to have a fully automated way to
deploy in a single command a curated list of prescribed architectures.</p>
<p><a href="https://github.com/kubeinit/kubeinit">KubeInit</a> is opensource, and licensed under
the Apache 2.0 license. The project’s source code is hosted
in <a href="https://github.com/kubeinit/kubeinit">GitHub</a>.</p>
<p><img src="/static/kubeinit/net/thumb.png" alt="" /></p>
<h3 id="initial-hypervisor-status">Initial hypervisor status</h3>
<p>We check both the routing table and the network connections
in the hypervisor host.</p>
<pre><code class="language-bash">[root@nyctea ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 100 0 0 eno1
10.19.41.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
</code></pre>
<pre><code class="language-bash">[root@nyctea ~]# nmcli con show
NAME UUID TYPE DEVICE
System eno1 162499bc-a6fa-45db-ba76-1b45f0be46e8 ethernet eno1
virbr0 4ba12c69-3a8b-42e8-a9dd-bc020fdc1a90 bridge virbr0
eno2 e19725f2-84f5-4f71-b300-469ffc99fd99 ethernet --
enp6s0f0 7348301f-8cae-4ab1-9061-97d7a344699c ethernet --
enp6s0f1 8a96c226-959a-4218-b9f7-c3ab6ee3d02b ethernet --
</code></pre>
<p>As you can see, there are two physical network interfaces (eno1 and eno2), of which
only one is actually connected.</p>
<h4 id="initial-network-architecture">Initial network architecture</h4>
<p>The following picture represents the default network layout for a usual deployment.</p>
<p><img src="/static/kubeinit/net/arch01.png" alt="" /></p>
<p>The default deployment installs a multi-master cluster with one worker node (up to 10 are supported).
From the figure above it is possible to see:</p>
<ul>
<li>
<p>All cluster nodes are connected to the 10.0.0.0/24 network. This is the
cluster management network, and the one we will use to access the nodes from the hypervisor.</p>
</li>
<li>
<p>The 10.0.0.0/24 network is defined as a Virtual Network Switch implementing
both NAT and DHCP for any interface connected to the <code>kimgtnet0</code> network.</p>
</li>
<li>
<p>All bootstrap, master, and worker nodes run Fedora CoreOS, as it is
the OS required for OKD > 4.</p>
</li>
<li>
<p>The services machine runs CentOS 8 with BIND, HAProxy, and NFS.</p>
</li>
<li>
<p>Using DHCP, we assign the following IP mapping based on the MAC address of each node (defined
in the Ansible inventory).</p>
</li>
</ul>
<pre><code class="language-bash"> # Master
okd-master-01 ansible_host=10.0.0.1 mac=52:54:00:aa:6c:b1
okd-master-02 ansible_host=10.0.0.2 mac=52:54:00:59:0e:e4
okd-master-03 ansible_host=10.0.0.3 mac=52:54:00:b4:39:45
# Worker
okd-worker-01 ansible_host=10.0.0.4 mac=52:54:00:61:22:5a
okd-worker-02 ansible_host=10.0.0.5 mac=52:54:00:21:fd:fd
okd-worker-03 ansible_host=10.0.0.6 mac=52:54:00:4c:0a:81
okd-worker-04 ansible_host=10.0.0.7 mac=52:54:00:54:ff:ac
okd-worker-05 ansible_host=10.0.0.8 mac=52:54:00:4a:6b:f6
okd-worker-06 ansible_host=10.0.0.9 mac=52:54:00:40:22:52
okd-worker-07 ansible_host=10.0.0.10 mac=52:54:00:6c:0a:03
okd-worker-08 ansible_host=10.0.0.11 mac=52:54:00:0b:14:f8
okd-worker-09 ansible_host=10.0.0.12 mac=52:54:00:f5:6e:e5
okd-worker-10 ansible_host=10.0.0.13 mac=52:54:00:5c:26:4f
# Service
okd-service-01 ansible_host=10.0.0.100 mac=52:54:00:f2:46:a7
# Bootstrap
okd-bootstrap-01 ansible_host=10.0.0.200 mac=52:54:00:6e:4d:a3
</code></pre>
<blockquote>
<p>The previous deployment can be used for any purpose, but it has one limitation:
the endpoints do not have external access.
This means that e.g. https://console-openshift-console.apps.watata.kubeinit.local
cannot be accessed from anywhere other than the hypervisor itself.</p>
</blockquote>
<h3 id="extending-the-basic-network-layout">Extending the basic network layout</h3>
<p>We will now describe a simple way to provide external access to the cluster’s public endpoints
published on the service machine.</p>
<h4 id="requirements">Requirements</h4>
<ul>
<li>An additional IP address to be mapped to the services machine from an external location.</li>
<li>A network bridge slaving the interface used for external access.</li>
</ul>
<p>One extra IP address (public or private) is enough to configure remote
access to the cluster endpoints.</p>
<p>As long as we have that extra IP, it does not matter how many physical interfaces we
have, since multiple IP addresses can be configured on a single physical NIC.</p>
<h4 id="new-network-layout">New network layout</h4>
<p>This is the resulting network architecture to access remotely our freshly installed OKD cluster.</p>
<p><img src="/static/kubeinit/net/arch02.png" alt="" /></p>
<p>As the figure above shows, there is an extra connection to the service machine,
attached directly to the virtual bridge slaving a physical interface.</p>
<blockquote>
<p>Our development environment has only one network card connected;
after we create the main switch and slave the
network device, the card loses its assigned IP automatically.
Do not try this over an SSH session, as your connection will be dropped.</p>
</blockquote>
<h4 id="how-to-enable-the-external-interface">How to enable the external interface</h4>
<p>To deploy this architecture please follow the next steps:</p>
<ol>
<li>Create a virtual bridge slaving the selected physical interface.</li>
<li>Adjust the deployment command.</li>
<li>Run <a href="https://github.com/kubeinit/kubeinit">KubeInit</a>.</li>
<li>Adjust your local Domain Name System (DNS) resolver.</li>
</ol>
<h5 id="step-1-creating-the-virtual-bridge">Step 1 (creating the virtual bridge)</h5>
<h6 id="using-centos8-cockpit">Using CentOS8 cockpit</h6>
<p>We create the initial bridge using the CentOS cockpit;
the IP is recovered/reconfigured automatically afterwards
(don’t try this from the CLI, as you will lose access).</p>
<p>In this case, create a bridge called <code>kiextbr0</code> connected to <code>eno1</code>:</p>
<p><img src="/static/kubeinit/net/cockpit_00.PNG" alt="" /></p>
<p>Click on: Networking -> Add Bridge</p>
<p>Then adjust the bridge configuration options (bridge name and the interface to slave).</p>
<p><img src="/static/kubeinit/net/cockpit_01.PNG" alt="" /></p>
<p>Write: <code>kiextbr0</code> as the bridge name, and select your network interface <code>eno1</code>.</p>
<p>Go to the dashboard and verify that everything is OK.</p>
<p><img src="/static/kubeinit/net/cockpit_02.PNG" alt="" /></p>
<p>Check that the bridge is created correctly and has the IP configured correctly.</p>
<h6 id="manual-bridge-creation">Manual bridge creation</h6>
<p>As an example you can run these steps by the CLI
adjusting your interface and bridge names accordingly.</p>
<pre><code class="language-bash">nmcli connection add ifname br0 type bridge con-name br0
nmcli connection add type bridge-slave ifname enp0s25 master br0
nmcli connection modify br0 bridge.stp no
nmcli connection delete enp0s25
nmcli connection up br0
</code></pre>
<blockquote>
<p><strong><em>NOTE:</em></strong> If you have only one interface,
the connection will be dropped and you will
lose connectivity.</p>
</blockquote>
<h6 id="checking-the-system-status">Checking the system status</h6>
<p>We check again the system status:</p>
<pre><code class="language-bash">[root@nyctea ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 425 0 0 kiextbr0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 kimgtbr0
10.19.41.0 0.0.0.0 255.255.255.0 U 425 0 0 kiextbr0
</code></pre>
<pre><code class="language-bash">[root@nyctea ~]# nmcli con show
NAME UUID TYPE DEVICE
kiextbr0 55d0a549-8123-488a-815b-5771b62644d2 bridge kiextbr0
kimgtbr0 3e73e0d9-28bd-4db7-8ccf-be11297e3300 bridge kimgtbr0
System eno1 3251ed0c-706a-463e-aeac-2a57782ce7c1 ethernet eno1
vnet0 4515a0b8-1a20-4414-86b2-2ff5545fcffa tun vnet0
vnet1 5f1b253f-9c38-4637-8a02-222aa5c51be3 tun vnet1
vnet2 e7d466d5-bc2b-47b0-a6ca-5a3825170501 tun vnet2
eno2 190c35fb-1ff0-41ff-b32e-c190f513b2a0 ethernet --
eno3 1b644415-0a91-44a9-bfd0-2279ddca0020 ethernet --
eno4 c99ba8a7-b62c-4b1f-b191-8798f0eff2ff ethernet --
enp6s0f0 11c63800-8cd9-4411-8854-43ced2a464f3 ethernet --
enp6s0f1 be01957b-2933-47df-9793-156fe3b1d767 ethernet --
</code></pre>
<p>We can see that the new bridge was created successfully and has its IP address
configured correctly.</p>
<h5 id="step-2-adjusting-the-deployment-command">Step 2 (adjusting the deployment command)</h5>
<p>A few variables need to be adjusted in order to
successfully configure the external interface.</p>
<p>These variables are defined in the libvirt role (their location may change,
but not their names).</p>
<p><img src="/static/kubeinit/net/config_vars.PNG" alt="" /></p>
<p>The variables have the following meanings:</p>
<ul>
<li>
<p>kubeinit_libvirt_external_service_interface_enabled: true - This will enable
the Ansible configuration of the external interface,
the BIND update, and the additional interface in the
service node.</p>
</li>
<li>
<p>kubeinit_libvirt_external_service_interface.attached: kiextbr0 - This is the
virtual bridge where we will plug the <code>eth1</code> interface of the services machine.
The bridge <code>MUST</code> be created first and slaving the physical interface we will use.</p>
</li>
<li>
<p>kubeinit_libvirt_external_service_interface.dev: eth1 - This is the name of the
external interface we will add to the services machine.</p>
</li>
<li>
<p>kubeinit_libvirt_external_service_interface.ip: 10.19.41.157 - The external IP address
of the services machine.</p>
</li>
<li>
<p>kubeinit_libvirt_external_service_interface.gateway: 10.19.41.254 - The gateway IP address
of the services machine.</p>
</li>
<li>
<p>kubeinit_libvirt_external_service_interface.netmask: 255.255.255.0 - The network mask of the external
interface of the services machine.</p>
</li>
</ul>
<p>After configuring the previous variables correctly, we can proceed to run the deployment command.</p>
<h5 id="step-3-run-the-deployment-command">Step 3 (run the deployment command)</h5>
<p>Now we deploy as usual <a href="https://github.com/kubeinit/kubeinit">KubeInit</a>:</p>
<blockquote>
<p>Remember that you can execute this deployment command before
creating the bridge with the CentOS cockpit; the bridge creation
has no impact on how we deploy KubeInit.</p>
</blockquote>
<pre><code class="language-bash">ansible-playbook \
-v \
--user root \
-i ./hosts/okd/inventory \
--become \
--become-user root \
-e "{ \
'kubeinit_libvirt_external_service_interface_enabled': 'true', \
'kubeinit_libvirt_external_service_interface': { \
'attached': 'kiextbr0', \
'dev': 'eth1', \
'ip': '10.19.41.157', \
'mac': '52:54:00:6a:39:ad', \
'gateway': '10.19.41.254', \
'netmask': '255.255.255.0' \
} \
}" \
./playbooks/okd.yml
</code></pre>
<h5 id="step-4-adjust-your-resolvconf">Step 4 (adjust your resolv.conf)</h5>
<p>You must be able to reach the cluster's external endpoints by DNS; this includes
the dashboard and any other application deployed
(you can add host entries pointing to the service machine instead, but this can be cumbersome).</p>
<p>For example, configure your local DNS resolver to point to <code>10.19.41.157</code>:</p>
<pre><code class="language-bash"> [ccamacho@localhost]$ cat /etc/resolv.conf
nameserver 10.19.41.157
nameserver 8.8.8.8
</code></pre>
<p>After that you should be able to access the cluster without any issue and use it for any purpose
through the following URL:
<a href="https://console-openshift-console.apps.clustername0.kubeinit.local/">https://console-openshift-console.apps.clustername0.kubeinit.local/</a>.</p>
<p>Voilà!</p>
<p><img src="/static/kubeinit/net/dashboard.PNG" alt="" /></p>
<h4 id="final-considerations">Final considerations</h4>
<p>Some of the most interesting changes in BIND relate to how we manage the external and internal views.</p>
<p><img src="/static/kubeinit/net/bind_views.png" alt="" /></p>
<p>In this case we have an <code>internal</code> and an <code>external</code> view that behave differently depending
on where the request originates.</p>
<p>If a DNS request comes in through the cluster’s external interface, the reply is
built from the external view. In this case we only answer with the external HAProxy endpoints
related to the services node; thus, we reply only with <code>10.19.41.157</code>, as it is the only address
that needs to be presented externally.</p>
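<p>For reference, a split-view <code>named.conf</code> follows this general shape (a hedged sketch only; the ACL and zone file paths below are assumptions for illustration, not the actual KubeInit templates):</p>
<pre><code>// Hypothetical split-view BIND configuration sketch.
acl internals { 10.0.0.0/24; localhost; };

view "internal" {
    match-clients { internals; };
    zone "kubeinit.local" {
        type master;
        // Full internal records for every cluster node.
        file "/var/named/internal/kubeinit.local.db";
    };
};

view "external" {
    match-clients { any; };
    zone "kubeinit.local" {
        type master;
        // Only the externally exposed HAProxy endpoint (10.19.41.157).
        file "/var/named/external/kubeinit.local.db";
    };
};
</code></pre>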
<h4 id="the-end">The end</h4>
<p>If you like this post, please try the code, raise issues, and ask for more details, features, or
anything else you find interesting. It would also be awesome if you become a stargazer to keep up
with updates and new features.</p>
<p>This is the main project <a href="https://github.com/kubeinit/kubeinit">repository</a>.</p>
<p>Happy KubeIniting!</p>
<blockquote>
<p><strong><em>Updated 2020/08/25:</em></strong> First version (draft).</p>
<p><strong><em>Updated 2020/08/26:</em></strong> Published.</p>
<p><strong><em>Updated 2020/10/06:</em></strong> Update in network details.</p>
</blockquote>
A review of the MachineConfig operatorThe latest versions of OpenShift rely on operators to completely manage the cluster and OS state, this state includes for instance, configuration changes and OS upgrades. For example, to install additional packages or changing any configuration file to execute whatever...2020-08-16T00:00:00+00:00https://www.pubstack.com/blog/2020/08/16/a-review-of-the-machineconfig-operatorCarlos Camacho<p>The latest versions of OpenShift rely on operators to completely manage the cluster and OS state,
this <strong>state</strong> includes, for instance, configuration changes and OS upgrades.
For example, to install additional packages or change any configuration file,
the MachineConfig operator should be the one in charge of applying these changes.</p>
<p><img src="/static/machineconfig/machineconfig.png" alt="" /></p>
<p>These configuration changes are executed by an instance of the
‘openshift-machine-config-operator’ pod; after the new state is reached,
the updated nodes are automatically restarted.</p>
<p>There are several mature, production-ready technologies for automating and applying
configuration changes to the underlying infrastructure nodes, such as Ansible,
Helm, Puppet, and Chef; yet the MachineConfig operator forces
users to adopt this new method and pretty much discard any previously developed
automation infrastructure.</p>
<h3 id="the-machineconfig-operator-is-more-than-installing-packages-and-updating-configuration-files">The MachineConfig operator is more than installing packages and updating configuration files</h3>
<p>I see the MachineConfig operator as a finite state machine that encodes
cluster-wide sequential logic to ensure that the cluster’s state is preserved and consistent.
This notion has several very powerful benefits, like making the cluster resilient to failures
due to unfulfilled conditions in any of the sub-stages of this finite state machine workflow.</p>
<p>The following example shows a practical and objective application
of the benefits of this approach.</p>
<p>We assume the following architecture reference:</p>
<ul>
<li>3 master nodes.</li>
<li>1 worker node.</li>
<li>The master nodes are not schedulable.</li>
</ul>
<p>This is a quite simple multi-master deployment with a single worker node
for development purposes. Before the cluster was deployed, the
master nodes were made unschedulable by running
<code>"sed -i 's/mastersSchedulable: true/mastersSchedulable: False/' install_dir/manifests/cluster-scheduler-02-config.yml"</code>.
Now, after deploying the cluster, a configuration change applied to the worker node will
fail; let’s investigate why.</p>
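<p>You can rehearse that <code>sed</code> edit on a scratch copy of the manifest to confirm exactly what it changes (the file below is a trimmed, hypothetical stand-in for the real <code>cluster-scheduler-02-config.yml</code>):</p>
<pre><code class="language-bash"># Rehearse the mastersSchedulable edit on a scratch manifest.
mkdir -p /tmp/install_dir/manifests
cat > /tmp/install_dir/manifests/cluster-scheduler-02-config.yml <<'EOF'
apiVersion: config.openshift.io/v1
kind: Scheduler
spec:
  mastersSchedulable: true
EOF
sed -i 's/mastersSchedulable: true/mastersSchedulable: False/' \
  /tmp/install_dir/manifests/cluster-scheduler-02-config.yml
grep mastersSchedulable /tmp/install_dir/manifests/cluster-scheduler-02-config.yml
</code></pre>
<p>The final <code>grep</code> prints the flipped value, <code>mastersSchedulable: False</code>.</p>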
<p>The following YAML file will be applied; its content is correct and it should work out of the box:</p>
<pre><code class="language-bash">cat << EOF > ~/99_kubeinit_extra_config_worker.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
creationTimestamp: null
labels:
machineconfiguration.openshift.io/role: worker
name: 99-kubeinit-extra-config-worker
spec:
osImageURL: ''
config:
ignition:
config:
replace:
verification: {}
security:
tls: {}
timeouts: {}
version: 2.2.0
networkd: {}
passwd: {}
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9iYXNoCnNldCAteAptYWluKCkgewpzdWRvIHJwbS1vc3RyZWUgaW5zdGFsbCBwb2xpY3ljb3JldXRpbHMtcHl0aG9uLXV0aWxzCnN1ZG8gc2VkIC1pICdzL2VuZm9yY2luZy9kaXNhYmxlZC9nJyAvZXRjL3NlbGludXgvY29uZmlnIC9ldGMvc2VsaW51eC9jb25maWcKfQptYWluCg==
verification: {}
filesystem: root
mode: 0755
path: /usr/local/bin/kubeinit_kubevirt_extra_config_script
EOF
oc apply -f ~/99_kubeinit_extra_config_worker.yaml
</code></pre>
<p>The MachineConfig object defined above will create a file at
<code>/usr/local/bin/kubeinit_kubevirt_extra_config_script</code> that, once executed, will
install a package and disable SELinux on the worker nodes.</p>
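<p>You do not have to trust the opaque data URL: the base64 payload can be decoded locally to review the embedded script before applying the object:</p>
<pre><code class="language-bash"># Decode the MachineConfig payload to review the embedded script.
echo 'IyEvdXNyL2Jpbi9iYXNoCnNldCAteAptYWluKCkgewpzdWRvIHJwbS1vc3RyZWUgaW5zdGFsbCBwb2xpY3ljb3JldXRpbHMtcHl0aG9uLXV0aWxzCnN1ZG8gc2VkIC1pICdzL2VuZm9yY2luZy9kaXNhYmxlZC9nJyAvZXRjL3NlbGludXgvY29uZmlnIC9ldGMvc2VsaW51eC9jb25maWcKfQptYWluCg==' | base64 -d
</code></pre>
<p>The decoded output is a small bash script that runs <code>rpm-ostree install policycoreutils-python-utils</code> and flips <code>/etc/selinux/config</code> from enforcing to disabled.</p>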
<p>Now, let’s check the state of the worker machine config pool.</p>
<pre><code class="language-bash">oc get machineconfigpool/worker
</code></pre>
<p>This is the result:</p>
<pre><code>NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
worker rendered-worker-a9.. False True True 1 0 0 1 12h
</code></pre>
<p>The worker pool is degraded, but there is not much more information about why.
Let’s get the status of the machine-config pods.</p>
<pre><code class="language-bash">kubectl get pod -o wide --all-namespaces | grep machine-config
</code></pre>
<p>It is possible to see that all pods are running without issues.</p>
<pre><code>openshift-machine-config-operator etcd-quorum-guard-7bb76959df-5bj7g 1/1 Running 0 11h 10.0.0.2 okd-master-02 <none> <none>
openshift-machine-config-operator etcd-quorum-guard-7bb76959df-jdtbv 1/1 Running 0 11h 10.0.0.3 okd-master-03 <none> <none>
openshift-machine-config-operator etcd-quorum-guard-7bb76959df-sndb2 1/1 Running 0 11h 10.0.0.1 okd-master-01 <none> <none>
openshift-machine-config-operator machine-config-controller-7cbb584655-bfjmh 1/1 Running 0 11h 10.100.0.20 okd-master-01 <none> <none>
openshift-machine-config-operator machine-config-daemon-ctczg 2/2 Running 0 12h 10.0.0.3 okd-master-03 <none> <none>
openshift-machine-config-operator machine-config-daemon-m82gz 2/2 Running 0 12h 10.0.0.2 okd-master-02 <none> <none>
openshift-machine-config-operator machine-config-daemon-qfc82 2/2 Running 0 12h 10.0.0.1 okd-master-01 <none> <none>
openshift-machine-config-operator machine-config-daemon-vwh4d 2/2 Running 0 11h 10.0.0.4 okd-worker-01 <none> <none>
openshift-machine-config-operator machine-config-operator-c98bb964d-5vnww 1/1 Running 0 11h 10.100.0.21 okd-master-01 <none> <none>
openshift-machine-config-operator machine-config-server-g75x5 1/1 Running 0 12h 10.0.0.2 okd-master-02 <none> <none>
openshift-machine-config-operator machine-config-server-kpwqb 1/1 Running 0 12h 10.0.0.3 okd-master-03 <none> <none>
openshift-machine-config-operator machine-config-server-n9q2r 1/1 Running 0 12h 10.0.0.1 okd-master-01 <none> <none>
</code></pre>
<p>Let’s check the logs of the machine-config-daemon pod in the worker node.
This pod has two containers, machine-config-daemon, and oauth-proxy.</p>
<pre><code class="language-bash">kubectl logs -f machine-config-daemon-vwh4d -n openshift-machine-config-operator -c machine-config-daemon
</code></pre>
<p>Now, it is possible to see the actual error in the container execution:</p>
<pre><code>I0816 06:58:42.985762 3240 update.go:283] Checking Reconcilable for config rendered-worker-a9681850fe39078ea0f42bd017922eb7 to rendered-worker-7131e04f110c489a0ad171e719cedc24
I0816 06:58:43.849830 3240 update.go:1403] Starting update from rendered-worker-a9681850fe39078ea0f42bd017922eb7 to rendered-worker-7131e04f110c489a0ad171e719cedc24: &{osUpdate:false kargs:false fips:false passwd:false files:true units:false kernelType:false}
I0816 06:58:43.852961 3240 update.go:1403] Update prepared; beginning drain
E0816 06:58:43.911711 3240 daemon.go:336] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-48g5s, openshift-dns/dns-default-h2lt5, openshift-image-registry/node-ca-9z9zt, openshift-machine-config-operator/machine-config-daemon-vwh4d, openshift-monitoring/node-exporter-m5p2n, openshift-multus/multus-lnsng, openshift-sdn/ovs-5xzqs, openshift-sdn/sdn-vplps
.
.
.
I0816 06:58:43.918261 3240 daemon.go:336] evicting pod openshift-ingress/router-default-796df5847b-9hxzx
E0816 06:58:43.928176 3240 daemon.go:336] error when evicting pod "router-default-796df5847b-9hxzx" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I0816 07:08:44.981198 3240 update.go:172] Draining failed with: error when evicting pod "router-default-796df5847b-9hxzx": global timeout reached: 1m30s, retrying
E0816 07:08:44.981273 3240 writer.go:135] Marking Degraded due to: failed to drain node (5 tries): timed out waiting for the condition: error when evicting pod "router-default-796df5847b-9hxzx": global timeout reached: 1m30s
</code></pre>
<p>The log shows that the MachineConfig operator failed to drain the worker node before applying the
configuration and executing the restart, because the router-default pod could not be
rescheduled on another node. Evicting this pod without being able to reschedule it
<code>violates the pod's disruption budget</code>; thus, the operator is now degraded.</p>
<p>Let’s check the router-default pod status:</p>
<pre><code class="language-bash">kubectl get pod -o wide --all-namespaces | grep "router-default"
</code></pre>
<p>It is possible to see that the replacement pod is pending scheduling.</p>
<pre><code>openshift-ingress router-default-796df5847b-9hxzx 1/1 Running 0 12h 10.0.0.4 okd-worker-01 <none> <none>
openshift-ingress router-default-796df5847b-h8bm4 0/1 Pending 0 12h <none> <none> <none> <none>
</code></pre>
<p>Let’s check its status:</p>
<pre><code class="language-bash">oc describe pod router-default-796df5847b-h8bm4 -n openshift-ingress
</code></pre>
<p>Now, it is possible to confirm that the pod is <code>Pending</code> because there is no available node to schedule it on.</p>
<pre><code>Name: router-default-796df5847b-h8bm4
Namespace: openshift-ingress
.
.
.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/4 nodes are available: 1 node(s) were unschedulable, 3 node(s) didn't match node selector.
</code></pre>
<p>Let’s check the node status:</p>
<pre><code class="language-bash">oc get nodes
</code></pre>
<p>Again, it is possible to see that the MachineConfig operator tried to drain the node but failed
when rescheduling its pods.</p>
<pre><code>NAME STATUS ROLES AGE VERSION
okd-master-01 Ready master 12h v1.18.3
okd-master-02 Ready master 12h v1.18.3
okd-master-03 Ready master 12h v1.18.3
okd-worker-01 Ready,SchedulingDisabled worker 12h v1.18.3
</code></pre>
<h3 id="why-did-this-happen">Why did this happen?</h3>
<p>Master nodes were made unschedulable when the cluster was deployed,
so the operator simply did not have room to reschedule the pods on other nodes.</p>
<p>The benefits of this approach (using the MachineConfig operator) are considerable, as the operator is smart enough to
avoid breaking services when it finds that a configuration change cannot
bring the system back to a consistent state.</p>
<p><img src="/static/machineconfig/rabbithole.jpg" alt="" /></p>
<h3 id="but-not-everything-is-as-perfect-as-it-sounds">But, not everything is as perfect as it sounds…</h3>
<p>Ignition files are used to apply these configuration changes, and their JSON representation is not human-readable at all. For this we use Fedora CoreOS Configuration (FCC) files in YAML format; internally, these YAML files are converted into Ignition (JSON) files by the Fedora CoreOS Config Transpiler.</p>
<p>There is a huge limitation in the resources that can be defined: only storage, systemd units, and users are supported. So, to execute anything, the user has to render a script that must be called once by a systemd service after the node restarts. After using technologies like Ansible, Puppet, or Chef for many years, this looks like a hacky and dirty approach for users to apply their custom configurations.</p>
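<p>As an illustration only (the file name and the transpiler invocation are assumptions based on the FCC spec, not taken from this cluster), an FCC file that ships such a script looks roughly like this:</p>
<pre><code class="language-bash"># example.fcc - a minimal, hypothetical FCC sketch.
cat > /tmp/example.fcc <<'EOF'
variant: fcos
version: 1.0.0
storage:
  files:
    - path: /usr/local/bin/example-script
      mode: 0755
      contents:
        inline: |
          #!/usr/bin/bash
          echo hello
EOF
# Transpile the YAML FCC into the Ignition JSON embedded in a MachineConfig:
#   fcct --pretty --strict < /tmp/example.fcc > /tmp/example.ign
</code></pre>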
<p>Another thing is debugging: if there is a problem with your MachineConfig object you might see only this <strong>degraded</strong> state, forcing you to dig into the container logs and hopefully find the source of any issue you might have.</p>
<p>I believe there is a lot of room for improvement in the MachineConfig operator; I would love to see an Ansible interface to be able to plug my configuration changes in through the openshift-machine-config-operator pod. Also, it was shown that the operator improves the system’s resiliency by preventing a configuration change from breaking what we defined as the <strong>cluster’s consistent state</strong>.</p>
The easiest and fastest way to deploy an OKD 4.5 cluster in a Libvirt/KVM hostLong story short… We will deploy an OKD 4.5 cluster in ~30 minutes (3 controllers, 1 to 10 workers, 1 service, and 1 bootstrap node) using one single command in around 30 minutes using a tool called KubeInit. Note 2021/10/13:...2020-07-31T00:00:00+00:00https://www.pubstack.com/blog/2020/07/31/the-fastest-and-simplest-way-to-deploy-okd-openshift-4-5Carlos Camacho<p>Long story short… <strong>We will deploy an OKD 4.5 cluster (3 controllers, 1 to 10 workers, 1 service node, and 1 bootstrap node) in around 30 minutes using one single command and a tool called <a href="https://github.com/kubeinit/kubeinit">KubeInit</a>.</strong></p>
<blockquote>
<p><strong><em>Note 2021/10/13:</em></strong> DEPRECATED - This tutorial only works with
<a href="https://github.com/Kubeinit/kubeinit/releases/tag/1.0.2">kubeinit 1.0.2</a>; make
sure you use this version of the code if you are following this tutorial, or
<a href="https://docs.kubeinit.org/">refer to the documentation</a> to use the latest code.</p>
</blockquote>
<p><img src="/static/kubeinit/okd-libvirt.png" alt="" /></p>
<p>I wrote a lot of automation while I worked, learned, and practiced OpenStack/RHOSP/Kubernetes/OpenShift/OKD
over the last two years, but suddenly I “lost” the machine where I hosted all these valuable code snippets.</p>
<p>With all this… I had to quickly invest some time putting all that code back together. The first part is related to K8s/OKD,
so I created a small project called KubeInit, “The KUBErnetes INITiator”, to share it with the world.</p>
<p>The first (and, for now, only) playbook will deploy, in a single command, a fully operational OKD 4.5 cluster with 3
master nodes, 1 compute node (configurable from 1 to 10 nodes), 1 services node, and 1 dummy bootstrap node.
The services node runs HAProxy, BIND, Apache httpd, and NFS to host some of the externally required cluster services.</p>
<p><img src="/static/kubeinit/fast.jpg" alt="" /></p>
<hr />
<h2 id="introduction">Introduction</h2>
<p>What is OpenShift?</p>
<blockquote>
<p>Red Hat OpenShift is an open source container application platform based on the Kubernetes container orchestrator for enterprise application development and deployment.</p>
<p>– <cite>https://www.openshift.com/</cite></p>
</blockquote>
<p>There are multiple ways of deploying the Community Distribution of Kubernetes that powers Red Hat OpenShift (<a href="https://www.okd.io/">OKD</a>) depending on the underlying infrastructure where it will be installed. In this particular blog post, we will deploy it on top of a KVM host using Libvirt. The initial upstream support is described in the <a href="https://github.com/openshift/installer/tree/fcos/docs/dev/libvirt">official upstream OpenShift documentation</a>, but as you can see, it involves a high number of manual steps prone to manual errors, and most important, outdated references when the deployment workflow changes.</p>
<p>In this case, we will use a project based on Ansible playbooks and roles for deploying and configuring multiple Kubernetes distributions; the project is called <a href="https://github.com/kubeinit/kubeinit">KubeInit</a>.</p>
<h2 id="requirements">Requirements</h2>
<ul>
<li>A freshly deployed CentOS 8 host for hosting all the guests.</li>
<li>RAM: depending on how many compute nodes you deploy, this can go up to 384GB (the minimum required is around 64GB); configure the nodes’ resources in the <a href="https://github.com/kubeinit/kubeinit/blob/master/hosts/okd/inventory#L8">inventory file</a>.</li>
<li>Be able to log in as <code>root</code> in the hypervisor node without using passwords (using SSH certificate authentication).</li>
<li>Reach the hypervisor node using the hostname <code>nyctea</code>, <a href="https://github.com/kubeinit/kubeinit/blob/master/hosts/okd/inventory#L56">you can change this in the inventory</a> or add an entry in your <code>/etc/hosts</code> file.</li>
</ul>
<h2 id="deploy">Deploy</h2>
<p>That’s it, now, let’s execute the deployment command:</p>
<pre><code class="language-bash">git clone https://github.com/kubeinit/kubeinit.git
cd kubeinit
ansible-playbook \
--user root \
-v -i ./hosts/okd/inventory \
--become \
--become-user root \
./playbooks/okd.yml
</code></pre>
<p>You should get something like:</p>
<pre><code>[ccamacho@wakawaka kubeinit]$ time ansible-playbook \
--user root \
-i ./hosts/okd/inventory \
--become \
--become-user root \
./playbooks/okd.yml
Using /etc/ansible/ansible.cfg as config file
PLAY [Main deployment playbook for OKD] ********************************************
TASK [Gathering Facts] *************************************************************
ok: [hypervisor-01]
.
.
.
"NAME STATUS ROLES AGE VERSION",
"okd-master-01 Ready master 16m v1.18.3",
"okd-master-02 Ready master 15m v1.18.3",
"okd-master-03 Ready master 12m v1.18.3",
"okd-worker-01 Ready worker 6m12s v1.18.3"
]}]}}
PLAY RECAP *************************************************************************
hypervisor-01: ok=83 changed=39 unreachable=0 failed=0 skipped=6 rescued=0 ignored=3
real 33m49.483s
user 2m30.920s
sys 0m19.678s
</code></pre>
<p>A ready-to-use OKD 4.5 cluster in ~30 minutes!</p>
<p>What you just executed should give you an operational OKD 4.5 cluster with 3 master nodes, 1 compute node (configurable from 1 to 10 nodes), 1 services node, and 1 dummy bootstrap node. The services node runs HAProxy, BIND, Apache httpd, and NFS to host some of the externally required cluster services.</p>
<p>Now, ssh into your hypervisor node and check the cluster status from the services machine.</p>
<pre><code class="language-bash">ssh root@nyctea
ssh root@10.0.0.100
# This is now the service node (check the Ansible inventory for IPs and other details)
export KUBECONFIG=~/install_dir/auth/kubeconfig
oc get pv
oc get nodes
</code></pre>
<p>The root password of the services machine is <a href="https://github.com/kubeinit/kubeinit/blob/master/playbooks/okd.yml#L54">defined as a variable in the playbook</a>, but the public key of the hypervisor root user is deployed across all the cluster nodes, so you should be able to connect to any node from the hypervisor machine using SSH certificate authentication.
Connect as the <code>root</code> user to the services machine (because it is CentOS-based) or as the <code>core</code> user to any other node (CoreOS-based), using the IP addresses defined in the inventory file.</p>
<p>There are reasons for having this password-based access to the services node. Sometimes we need to connect to the services machine during a deployment for debugging purposes, and if we don’t set a password for the user we won’t be able to log in using the console. For the CoreOS nodes, once they are bootstrapped correctly and automatically, there is no need to log in using the console; just wait until they are deployed and connect to them using SSH.</p>
<h2 id="final-thoughts">Final thoughts</h2>
<p><a href="https://github.com/kubeinit/kubeinit">KubeInit</a> is a simple and intuitive way to show to potential users and customers how easy an OpenShift (OKD) cluster can be deployed, managed, and used for any purpose they might require (production or development environments). Once they have the environment deployed then it’s always easier to learn how it works, hack it, and even start contributing to the upstream community, if you are interested in this last part, please read the <a href="https://www.okd.io/#contribute">contribution page</a> from the official OKD website.</p>
<p>All the Ansible automation is hosted in <a href="https://github.com/kubeinit/kubeinit/">https://github.com/kubeinit/kubeinit/</a>.</p>
<p><img src="/static/kubeinit/happy.jpg" alt="" /></p>
<hr />
<p>The code is not perfect by any means, but it is a good example of how to use a libvirt host to run your OKD cluster, and it’s incredibly
easy to improve it and add other roles and scenarios.</p>
<p>Next steps: I’ll clean up all the lint nits…</p>
<p>This is the GitHub repository <a href="https://github.com/kubeinit/kubeinit/">https://github.com/kubeinit/kubeinit/</a>.</p>
<p>Please if you like it, add some comments, test it, use it, hack it, break it, or become a stargazer ;)</p>
Stay at home!For the people you love, please stay at home! COVID-19 is a respiratory illness (which affects breathing) caused by a new coronavirus. Symptoms can range from mild, such as a sore throat, to severe, such as pneumonia. Most people will...2020-03-24T00:00:00+00:00https://www.pubstack.com/blog/2020/03/24/podCarlos Camacho<p>For the people you love, please stay at home!</p>
<p><img src="/static/pod/stayathome.png" alt="" /></p>
<p>COVID-19 is a respiratory illness (which affects breathing)
caused by a new coronavirus.</p>
<p>Symptoms can range from mild, such as a sore throat, to severe,
such as pneumonia. Most people will not need medical attention for
their symptoms. Together we can slow the spread and protect those at higher
risk of severe illness and our health care workers from getting sick.</p>
<p><br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
TripleO deep dive session #14 (Containerized deployments without paunch)This is the 14th release of the TripleO “Deep Dive” sessions Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch. You can access the presentation. So please, check the full session...2020-02-18T00:00:00+00:00https://www.pubstack.com/blog/2020/02/18/tripleo-deep-dive-session-14Carlos Camacho<p>This is the 14th release of the <a href="http://www.tripleo.org/">TripleO</a>
“Deep Dive” sessions.</p>
<p>Thanks to <a href="http://my1.fr/blog">Emilien Macchi</a>
for this deep dive session about the status of the containerized deployment without Paunch.</p>
<p>You can access the <a href="https://docs.google.com/presentation/d/1dndHde25r8MPSdakLp9y5ztL3d6jXmCy-JfhIc6bJbo/edit">presentation</a>.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=D18RaSBGyQU">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/D18RaSBGyQU" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br />
<br /></p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a>
to have access to all available content.</p>
Badgeboard - GitHub actions, where is my CI dashboard!A widely used term in the agile world is the information radiator, which refers to display the project’s critical information as simple as possible. These information radiators improve the team’s communication by amplifying pieces of data to get a better...2019-12-04T00:00:00+00:00https://www.pubstack.com/blog/2019/12/04/github-actions-where-is-my-ci-dashboardCarlos Camacho<p>A widely used term in the agile world is the
information radiator, which refers to displaying
the project’s critical information as simply
as possible. These information radiators improve
the team’s communication by amplifying pieces of
data to build a better sense of self-awareness.</p>
<h2 id="tldr">TL;DR;</h2>
<p>If you just want to go straight to the solution
of how to convert SVG badges to a widget-based
CI dashboard, just go to the
<a href="https://github.com/pystol/badgeboard">Badgeboard</a>
repository or open the
<a href="https://badgeboard.pystol.org">demo</a>.</p>
<p>Otherwise, continue reading.</p>
<p><img src="/static/badgeboard/01_build_monitor.png" alt="" /></p>
<p>If you are beginning to apply agile methodologies
in your team, a good information radiator can
be for example a CI status dashboard.</p>
<p>The purpose of these information radiators, as the name
implies, is to radiate information: something that
people know about and can see easily. Keep in mind
that a good information radiator will adapt to the
needs of the project throughout its life, so try not to
invest too much time in its initial design, and make sure
that it can be easily changed, fixed, used, and improved.</p>
<p>Some features of these information radiators:</p>
<ul>
<li><strong>It reflects the now</strong>: information radiators always
show what is going on (whether things are going north or south).
They help us see what matters to the team right now and what
to focus on, for example when we hit regressions.</li>
<li><strong>Minimum information, maximum value</strong>: simple
and highly valuable. The more information shown, the less
focus on the important pieces, and the more effort to
maintain the panel.</li>
<li><strong>Must be alive</strong>: this information artifact should be
kept up to date; as soon as reality changes, the
artifact’s status should change too.</li>
</ul>
<hr />
<h1 id="ci-dashboards">CI Dashboards</h1>
<p>CI dashboards are a graphical representation of the
continuous integration test results, usually HTML
based, displaying the current test results
in colors (red, yellow, and green).</p>
<p><img src="/static/badgeboard/02_intro.png" alt="" /></p>
<hr />
<h1 id="github-badges">GitHub badges</h1>
<p>We can see status badges as a brief summary of
the CI pipeline status. Badges<a href="https://docs.gitlab.com/ee/user/project/badges.html">1</a> are a unified way
to present condensed pieces of information about your
projects.
They are also considered any visual token
of achievement, affiliation, authorization, or other trust
relationship.</p>
<p>They consist of a small image and,
optionally, a URL that the image points to. Examples
of badges are the pipeline status, test coverage,
or ways to contact the project maintainers.</p>
<p><img src="/static/badgeboard/03_badges.png" alt="" /></p>
<hr />
<h1 id="what-now">What now?</h1>
<p>We introduce a tool to convert SVG badges into
CI dashboards
(<a href="https://github.com/pystol/badgeboard">Badgeboard</a>).</p>
<h2 id="github-actions---no-ci-dashboard-by-default-">GitHub actions -> No CI dashboard by default :(</h2>
<p>I really liked the big dashboard view printed on a big
screen so everyone can see it in a quick and easy manner.
So, if we start using, for example, GitHub Actions, we lose this
graphical representation in favor of a
badge-based view.</p>
<h2 id="yehi-here-we-have-badgeboard">Yehi!!! Here we have badgeboard!!!</h2>
<p><a href="https://github.com/pystol/badgeboard">Badgeboard</a>
is an awesome information radiator
to show the status of the badges you
have in your project as a widget-based
dashboard, in particular it’s
the main CI dashboard of
<a href="https://github.com/pystol/pystol">Pystol</a>.</p>
<p>It is a very simple tool that
converts the information
inside any SVG badge you define, from any source,
into a widget-based dashboard.</p>
<p><img src="/static/badgeboard/04_badgeboard.png" alt="" /></p>
<h2 id="demo">Demo</h2>
<p>Just <a href="https://badgeboard.pystol.org/">open the index.html</a>
file and see how the dashboard is rendered.</p>
<h2 id="requirements">Requirements</h2>
<p>None! Just clone the repo and open the index.html file
in your favorite browser.</p>
<p>Once you have a copy, make the adjustments to the configuration
file located in <strong>assets/data_source/badges_list.js</strong> to use your
own badges.</p>
<p><strong>Note:</strong> Due to CORS restrictions, badgeboard uses a
<a href="https://cors-anywhere.herokuapp.com/">proxy</a>
to add cross-origin headers when building the widgets panel.
Check additional information about the CORS proxy on
<a href="https://www.npmjs.com/package/cors-anywhere">NPM</a>.</p>
<h2 id="how-it-works">How it works</h2>
<p>We capture the badge list (SVG files) and
read the color information from a single pixel;
depending on the color of that pixel, the
widget is painted with its corresponding color.</p>
<p><img src="/static/badgeboard/05_measure.png" alt="" /></p>
<p>This would be the usual view of the project badges.</p>
<p><img src="/static/badgeboard/06_badges.png" alt="" /></p>
<h2 id="adding-your-badges-and-colors">Adding your badges and colors</h2>
<p>Use the <strong>coordinates_testing.html</strong> file
to determine, based on the SVG coordinates,
the RGB color to be used in the JS configuration
file.</p>
<p>To do so, copy the link to your badge, find the
badge example in the file, replace it with yours,
open the file in a browser, check the console logs,
and move the mouse over the badge to see
the coordinates and the RGB color that matches.</p>
<h2 id="adding-custom-color-badges">Adding custom color badges</h2>
<p>To add new colors, edit the <strong>assets/css/custom.css</strong> file and
add new color definitions for the widgets.
Once you define the new color, in the configuration file
called <strong>assets/data_source/badges_list.js</strong>
use the new color like in the following example.</p>
<pre><code class="language-bash">colors:[['<new_color_definition>','<matching_rgb_from_the_badge>'],['status-good','48,196,82']],
</code></pre>
<h2 id="troubleshooting">Troubleshooting</h2>
<p>If the board does not render correctly (no widgets at all),
you most likely refreshed the page too many times.
We use a <strong>CORS</strong> proxy to add cross-origin headers
when building the widgets panel.</p>
<p>The number of requests it can handle is limited in order to avoid
crashing the container, so that we can all use it.</p>
<p><strong>Please read the requirements</strong> and use your own
<a href="https://www.npmjs.com/package/cors-anywhere">NPM proxy</a>
so these restrictions go away.</p>
<h2 id="references">References</h2>
<p>We use both <a href="https://github.com/smashing/smashing">smashing</a>
and <a href="https://github.com/ducksboard/gridster.js">gridster</a>
to create the dashboard and its widgets.</p>
<h2 id="license">License</h2>
<p>Badgeboard is part of <a href="https://github.com/pystol/pystol">Pystol</a>
and <a href="https://github.com/pystol/pystol">Pystol</a> is
open source software licensed under the
<a href="LICENSE">Apache license</a>.</p>
<h2 id="next-steps">Next steps</h2>
<p>It would be awesome to get some feedback around the tool, so,
please feel free to file issues, pull requests or comments
in this post or in the <a href="https://github.com/pystol/badgeboard">Badgeboard</a>
repository.</p>
<h2 id="list-of-to-dos">List of TO-DOs</h2>
<p>There are still some bits to fix in <a href="https://github.com/pystol/badgeboard">Badgeboard</a>
for example:</p>
<ul>
<li><del>Make the links from the widgets work.</del></li>
<li>Move common hardcoded bits into variables for an easier update.</li>
<li>Improve the documentation.</li>
</ul>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2019/12/04:</strong> Initial version.</p>
</blockquote>
</div>
Oil painting and Minikube - Installing Minikube in Centos 7Today I got some time to do some oil painting and reading about techy stuff :) This post is a brief summary of the deployment steps for installing Minikube in a Centos 7 baremetal machine, and, to show you my...2019-10-13T00:00:00+00:00https://www.pubstack.com/blog/2019/10/13/oil-painting-and-installing-minikube-in-centos-7Carlos Camacho<p>Today I got some time to do some oil painting and reading about techy stuff :)</p>
<p>This post is a brief summary of the
deployment steps for installing Minikube on a CentOS 7
bare-metal machine, and also shows you my painting (check the fedora!).</p>
<p><img src="/static/Terraza-En-Grecia-by-Carlos-Camacho.jpg" alt="" /></p>
<p>The following steps need to run on the hypervisor machine
on which you would like to have your Minikube deployment.</p>
<p>Execute them one after the other;
the idea of this recipe is to
have something ready for copying and pasting.</p>
<p>The usual steps are:</p>
<p><strong>01 - Prepare the hypervisor node.</strong></p>
<p>Now, let’s install some dependencies.
Same Hypervisor node, same <code>root</code> user.</p>
<pre><code class="language-bash"># In this dev. env. /var is only 50GB, so I will create
# a sym link to another location with more capacity.
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
sudo mkdir -p /home/docker/
sudo ln -sf /home/docker/ /var/lib/docker
# Install some packages
sudo yum install dnf -y
sudo dnf update -y
sudo dnf groupinstall "Virtualization Host" -y
sudo dnf install libvirt qemu-kvm virt-install virt-top libguestfs-tools bridge-utils -y
sudo dnf install git lvm2 lvm2-devel -y
sudo dnf install libvirt-python python-lxml libvirt curl -y
sudo dnf install binutils qt gcc make patch libgomp -y
sudo dnf install glibc-headers glibc-devel kernel-headers -y
sudo dnf install kernel-devel dkms bash-completion -y
sudo dnf install nano wget -y
sudo dnf install python3-pip -y
</code></pre>
<p><strong>02 - Check that the kernel modules are OK.</strong></p>
<pre><code class="language-bash"># Check the kernel modules are OK
sudo lsmod | grep kvm
</code></pre>
<p><strong>03 - Enable libvirtd, disable SELinux xD and firewalld.</strong></p>
<pre><code class="language-bash"># Enable libvirtd
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
# Disable selinux & stop firewall as needed.
setenforce 0
perl -pi -e 's/SELINUX\=enforcing/SELINUX\=disabled/g' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
</code></pre>
<p><strong>04 - Install Minikube.</strong></p>
<pre><code class="language-bash">#Install minikube
/usr/bin/curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
cp -p minikube /usr/local/bin && rm -f minikube
# Create the repo for kubernetes
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Install kubectl
sudo dnf install kubectl -y
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
</code></pre>
<p><strong>05 - Create the toor user (from the Hypervisor node, as root).</strong></p>
<pre><code class="language-bash">sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
| sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
</code></pre>
<p>Now, continue as the <code>toor</code> user and prepare the hypervisor node
for Minikube.</p>
<p><strong>06 - Install Docker.</strong></p>
<p>We would also like to use Docker on the hypervisor node
for building images and for debugging purposes.</p>
<pre><code class="language-bash"># Install docker
sudo dnf install docker -y
sudo usermod --append --groups dockerroot toor
sudo tee /etc/docker/daemon.json >/dev/null <<-EOF
{
"live-restore": true,
"group": "dockerroot"
}
EOF
sudo systemctl start docker
sudo systemctl enable docker
</code></pre>
<p><strong>07 - Finish the Minikube configuration.</strong></p>
<pre><code class="language-bash"># Add to bashrc in toor user
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
# We add toor to the libvirtd group
sudo usermod --append --groups libvirt toor
</code></pre>
<p><strong>08 - Start Minikube.</strong></p>
<pre><code class="language-bash">minikube start --memory=65536 --cpus=4 --vm-driver kvm2
export no_proxy=$no_proxy,$(minikube ip)
nohup kubectl proxy --address='0.0.0.0' --port=8001 --disable-filter=true &
sleep 30
minikube addons enable dashboard
nohup minikube dashboard &
minikube addons open dashboard
</code></pre>
<p>The Minikube instance should be reachable from the following URL:</p>
<p>http://machine_ip:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/</p>
<pre><code class="language-bash"># To stop/delete
kubectl delete deploy,svc --all
minikube stop
minikube delete
</code></pre>
<p><strong>09 - Minikube cheat sheet.</strong></p>
<pre><code class="language-bash"># set & get current context of cluster
kubectl config use-context minikube
kubectl config current-context
# fetch all the kubernetes objects for a namespace
kubectl get all -n kube-system
# display cluster details
kubectl cluster-info
# set custom memory and cpu
minikube config set memory 4096
minikube config set cpus 2
# fetch cluster ip
minikube ip
# ssh to the minikube vm
minikube ssh
# display addons list and status
minikube addons list
# exposes service to vm & retrieves url
minikube service elasticsearch
minikube service elasticsearch --url
</code></pre>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2019/10/13:</strong> Initial version.</p>
<p><strong>Updated 2019/10/15:</strong> Install also docker in the hypervisor.</p>
</blockquote>
</div>
Automated weekly reportsRemote work is a trend among Senior or hi-performant roles. In general when working remotely it might be hard to measure or be accountable for all the work we/you usually do. Some of these tasks can be among user escalations,...2019-07-07T00:00:00+00:00https://www.pubstack.com/blog/2019/07/07/remote-automated-reportCarlos Camacho<p>Remote work is a trend among
senior and high-performing roles.</p>
<p><img src="/static/1and1s/its-time-for-weekly-reports.jpg" alt="" /></p>
<p>In general, when working remotely it can be hard to measure or be
accountable for all the work you usually do.</p>
<p>These tasks include user escalations,
preparing development environments, reviews, writing code,
checking unit tests, verifying CI systems status,
meetings, and having 1-on-1s with your associates, among many others…</p>
<p>And it’s really easy to lose track of, or a log for, the
tasks we actually want to report.</p>
<p>In this particular case I will explain how to generate automated
reports from the tasks you have logged in a Trello
board using a Google Docs template; we will also fetch data from
Bugzilla, Launchpad, and Storyboard.</p>
<h3 id="resources">Resources</h3>
<ul>
<li>The main <a href="https://docs.google.com/document/d/1qh7vuC8vPTum_BItCm5O0c6DmyXGXK-NABBXaoNuRMM/edit">google doc</a>
we will use for this how-to.</li>
<li>A code repository on <a href="https://github.com/ccamacho/gdocsreport">GitHub</a>
with the JS/HTML code used inside the Google doc
(it’s all integrated into the Google doc, so using the repository is not mandatory; just read the <a href="https://github.com/ccamacho/gdocsreport/blob/master/README.md">README</a>).</li>
</ul>
<p><img src="/static/1and1s/00_google_docs_menu.PNG" alt="" /></p>
<h3 id="how-to">How to</h3>
<p>If you open the <a href="https://docs.google.com/document/d/1qh7vuC8vPTum_BItCm5O0c6DmyXGXK-NABBXaoNuRMM/edit">google document</a> and click
“Main menu” -> “Scrum” you will see the following options:</p>
<ul>
<li>Create today’s agenda.</li>
<li>Create tomorrow’s agenda.</li>
<li>Create today’s agenda with Trello items.</li>
<li>Generate activity report.</li>
</ul>
<h3 id="create-todays-and-tomorrows-agenda">Create today’s and tomorrow’s agenda</h3>
<p>These two options are really simple: they just
generate an enumerated text section where you can write anything you need.</p>
<p><img src="/static/1and1s/01_empty_agendas.PNG" alt="" /></p>
<p>You will need to grant permissions to the script, the scripts are
located in
<strong>“Main menu” -> “Tools” -> “Script editor”</strong>.</p>
<p>There you will be able to see the code we can run from this
google document (we will speak about the code later).</p>
<p><img src="/static/1and1s/02_tomorrow_empty_agenda.png" alt="" /></p>
<h3 id="create-todays-agenda-with-trello-items">Create today’s agenda with Trello items.</h3>
<p>In this case we will fetch the content of three lists in a Trello board.</p>
<p>In particular, I have created an example Trello board for this tutorial.</p>
<p><img src="/static/1and1s/03_trello_dashboard.PNG" alt="" /></p>
<p>The board in question is private (you can set a board up as public,
but this one is kept private on purpose to show how to configure the Trello
Key and Token; without them you will not be able to access it).</p>
<p>When you click on the option <strong>Create today’s agenda with Trello items.</strong></p>
<p><img src="/static/1and1s/04_index_create_agenda_with_trello_items.png" alt="" /></p>
<p>The agenda will be generated <strong>at the current cursor position</strong>,
pulling the content of the board lists.</p>
<p><img src="/static/1and1s/05_index_create_agenda_with_trello_items.png" alt="" /></p>
<p>As you can see, the generated agenda contains the following information
for each list.
In this case we have three lists, <strong>To Do</strong>, <strong>Doing</strong>, and <strong>Done</strong>, and for
each card we have:</p>
<ul>
<li>Assignees.</li>
<li>Description.</li>
<li>Tags.</li>
<li>Link to the Trello card.</li>
</ul>
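<p>As a rough sketch (the real script is Google Apps Script, not Python), rendering one card into agenda lines could look like this; the field names mirror Trello’s card JSON and are assumptions here:</p>

```python
# Hypothetical sketch of rendering one Trello card as agenda lines.
# The field names ("name", "members", "labels", "url") are assumptions
# mirroring Trello's card JSON, not the actual Apps Script code.
def format_card(card):
    """Render a single card as indented agenda lines."""
    lines = ["- %s" % card["name"]]
    if card.get("members"):
        lines.append("  Assignees: %s" % ", ".join(card["members"]))
    if card.get("labels"):
        lines.append("  Tags: %s" % ", ".join(card["labels"]))
    lines.append("  %s" % card["url"])
    return "\n".join(lines)

card = {"name": "Write the weekly report",
        "members": ["Carlos Camacho"],
        "labels": ["reporting"],
        "url": "https://trello.com/c/abc123"}
print(format_card(card))
```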
<p>Now, to customize this, please <strong>create your own copy of the <a href="https://docs.google.com/document/d/1qh7vuC8vPTum_BItCm5O0c6DmyXGXK-NABBXaoNuRMM/edit">google document</a>!!</strong>
and proceed to edit its content by clicking:</p>
<p><strong>“Main menu” -> “Tools” -> “Script editor”</strong></p>
<p><img src="/static/1and1s/06_google_script_editor.PNG" alt="" /></p>
<p>The parameters to customize are the following.</p>
<p><img src="/static/1and1s/07_google_script_parameters.PNG" alt="" /></p>
<p><strong>You can update and modify the script as you wish</strong></p>
<p>If you open your board with <strong>.json</strong> appended to the URL, you will
fetch the IDs of the lists we need to display.</p>
<p>In this case, the URL would be:</p>
<pre><code>https://trello.com/b/L8t9rCOz/1and1.json
</code></pre>
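<p>The list IDs can also be extracted programmatically from that JSON export. A minimal Python sketch, assuming the export contains a top-level <code>lists</code> array (which is how Trello’s board export is structured):</p>

```python
# Sketch: extract list name -> ID mappings from a Trello board JSON
# export. The sample structure below is an assumption mirroring the
# export returned by appending ".json" to a board URL.
def extract_list_ids(board_json):
    """Map list names to their IDs from a Trello board JSON export."""
    return {l["name"]: l["id"] for l in board_json.get("lists", [])}

board = {"lists": [{"name": "To Do", "id": "5d20bd50eb63e24f7a0c8744"},
                   {"name": "Doing", "id": "5d20bd50eb63e24f7a0c8745"},
                   {"name": "Done", "id": "5d20bd50eb63e24f7a0c8746"}]}
print(extract_list_ids(board))
```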
<p>In this case the variables <strong>TRELLO_KEY</strong> and <strong>TRELLO_TOKEN</strong> will be needed if the board is private;
otherwise you won’t need them at all.</p>
<pre><code>var TRELLO_KEY = '8b164347d9cf6a1026b5d20dc8556620';
var TRELLO_TOKEN = '4a64be415e2f7d128d2543fbbddd2b8ffd33d3ead6921803668ca39fe715f5cd';
</code></pre>
<p>The following parameters are:</p>
<ul>
<li>
<p><strong>TRELLO_LIST_ID</strong>: The IDs of the lists we will fetch the content.</p>
</li>
<li>
<p><strong>TRELLO_TITLES</strong>: The titles we will add to the report (the order should match, first list ID will be displayed under the first title and so on).</p>
</li>
<li>
<p><strong>TRELLO_USER_FILTER</strong>: Filter the cards based on the assignee name. If this is a report, it might be a good idea to include your
manager’s name and your own. Use the names you have in Trello (see the example).</p>
</li>
</ul>
<pre><code>var TRELLO_LIST_ID = ["5d20bd50eb63e24f7a0c8744", "5d20bd50eb63e24f7a0c8745","5d20bd50eb63e24f7a0c8746"]; //To Do, Doing, Done
var TRELLO_TITLES = ["To Do", "Doing","Done",]; //To Do, Doing, Done
var TRELLO_USER_FILTER = ["Carlos Camacho", "Pubstack"]; //Only display these people cards
</code></pre>
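<p>The filtering logic itself is simple. A hedged Python sketch (the real script is Apps Script JS, and the card field names here are assumptions):</p>

```python
# Sketch of the assignee filter: keep only cards whose member names
# intersect TRELLO_USER_FILTER. The "members" field name is an
# assumption, not the actual Apps Script implementation.
TRELLO_USER_FILTER = ["Carlos Camacho", "Pubstack"]

def filter_cards(cards, allowed_names):
    """Drop cards that have no assignee in allowed_names."""
    allowed = set(allowed_names)
    return [c for c in cards if allowed & set(c.get("members", []))]

cards = [{"name": "Card A", "members": ["Carlos Camacho"]},
         {"name": "Card B", "members": ["Someone Else"]},
         {"name": "Card C", "members": []}]
print([c["name"] for c in filter_cards(cards, TRELLO_USER_FILTER)])  # ['Card A']
```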
<p>You will need to generate a <strong>Developer key</strong> and a <strong>Token</strong>.</p>
<p><img src="/static/1and1s/08_key.PNG" alt="" /></p>
<p>Use the Key generated.</p>
<p><img src="/static/1and1s/09_key_view.PNG" alt="" /></p>
<p>Then you need to generate a token to be able to interact with Trello from the Google docs scripts.</p>
<p><img src="/static/1and1s/10_token.PNG" alt="" /></p>
<p>Copy the Token.</p>
<p><img src="/static/1and1s/11_token_view.PNG" alt="" /></p>
<p>Replace both <strong>TRELLO_KEY</strong> and <strong>TRELLO_TOKEN</strong> with the values you now have.</p>
<h3 id="generate-activity-report">Generate activity report.</h3>
<p>This section will generate an activity report based on quarters, and it will include
a little bit more information from:</p>
<ul>
<li>Stackalytics.</li>
<li>Launchpad.</li>
<li>Storyboard.</li>
<li>Bugzilla.</li>
</ul>
<p>The information retrieved here is not private, as it is based on upstream metrics, so
you can just use mine as an example.</p>
<p>The report is driven by the following configuration parameters:</p>
<pre><code>/* Stackalytics variables */
var STACKALYTICS_USER= "ccamacho";
/* Bugzilla variables */
var BZ_HOST = "https://bugzilla.redhat.com";
var BZ_STATUS = "bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&bug_status=MODIFIED&bug_status=ON_DEV&bug_status=ON_QA&bug_status=VERIFIED&bug_status=RELEASE_PENDING";
var BZ_EMAIL = "ccamacho%40redhat.com";
/* Launchpad variables */
var LAUNCHPAD_USER = "ccamacho";
/* Storyboard variables */
var STORYBOARD_USER = "3328";
</code></pre>
<p>Those parameters describe the data we will fetch, i.e. the BZ query, the BZ email, and your Launchpad and Storyboard user IDs.</p>
<p>Then, when you generate the activity report, you will filter by dates; in my particular case I based the reports on each quarter.</p>
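<p>A quarterly date filter boils down to computing the quarter boundaries. A small Python sketch of that idea (the actual filter lives in the Apps Script code stored in the Google doc):</p>

```python
import datetime

# Sketch: compute the start (inclusive) and end (exclusive) dates of a
# calendar quarter, so the report queries can be bounded by them.
def quarter_range(year, quarter):
    """Return (start, end) dates for quarter 1-4 of the given year."""
    start_month = 3 * (quarter - 1) + 1
    start = datetime.date(year, start_month, 1)
    # Q4 rolls over into January of the next year.
    if quarter == 4:
        end = datetime.date(year + 1, 1, 1)
    else:
        end = datetime.date(year, start_month + 3, 1)
    return start, end

print(quarter_range(2019, 3))  # (datetime.date(2019, 7, 1), datetime.date(2019, 10, 1))
```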
<p><img src="/static/1and1s/13_generate_quarterly_report_selection.png" alt="" /></p>
<p>The code to configure the date filter is also stored in the google doc.</p>
<p><img src="/static/1and1s/12_quarterly_report.png" alt="" /></p>
<p>And then you can see the activity report generated correctly.</p>
<p><img src="/static/1and1s/14_activity_report.png" alt="" /></p>
<p>Again, if you don’t trust the document enough to create a copy
and use it directly, because of the
Google Apps permissions it requires,
you can always get the code on <a href="https://github.com/ccamacho/gdocsreport">GitHub</a>
and follow the <a href="https://github.com/ccamacho/gdocsreport/blob/master/README.md">README</a>.</p>
<p>This is a nice way of keeping your contributions to the team reported in an easy and automated way.
Having a record of all the tasks you did is really useful, e.g., to make a
yearly retrospective and share your achievements.</p>
<p><img src="/static/1and1s/weekly_reports.jfif" alt="" /></p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2019/07/07:</strong> Initial version.</p>
<p><strong>Updated 2019/07/08:</strong> Fixed some nits.</p>
</blockquote>
</div>
Advanced Deployment with Red Hat OpenShift - RetrospectiveI had the opportunity to attend last week to a training session in the office about Advanced Deployment with Red Hat OpenShift. The course in general covers installing Red Hat OpenShift Container Platform in an HA environment or without an...2019-07-02T00:00:00+00:00https://www.pubstack.com/blog/2019/07/02/advanced-deployment-with-red-hat-openshift-retroCarlos Camacho<p>Last week I had the opportunity to attend a training session
in the office about Advanced Deployment with Red Hat OpenShift.
The course in general covers installing Red Hat OpenShift Container
Platform in an HA environment or without an internet connection.</p>
<p><img src="/static/Red-Hat-OpenShift-4.png" alt="" /></p>
<p>Other topics include networking and security configuration,
and management skills using Red Hat OpenShift Container Platform.
In theory, after completing the course, you should be able to:</p>
<ul>
<li>Describe and install OpenShift in an HA environment.</li>
<li>Describe and configure OpenShift Machines.</li>
<li>Describe and configure networking including creating network policies to secure applications.</li>
<li>Describe and configure the OpenShift Scheduler.</li>
<li>Protect the platform using quotas and limits.</li>
<li>Describe and install OpenShift without an internet connection (disconnected install).</li>
</ul>
<p>As usual, there is some previous knowledge about the technology presented
in this course you should have, among others you should have some:</p>
<ul>
<li>Understanding of networking and concepts such as routing and software-defined networking (SDN)</li>
<li>Understanding of containers and virtualization</li>
<li>Basic understanding of development life cycle and developer workflow</li>
<li>Ability to read and modify code</li>
</ul>
<p>The course is comprised of 8 modules covering:</p>
<ul>
<li>Introduction to Course and Learning Environment.</li>
<li>Learn about the Advanced Deployment with Red Hat OpenShift course.</li>
<li>Understand the prerequisites, training environment, and system designations used during the lab procedures.</li>
<li>Learn tips for successfully completing the labs.</li>
<li>Understand course resources.</li>
</ul>
<h2 id="disconnected-install">Disconnected Install</h2>
<ul>
<li>Learn what a disconnected install is and about the architectures for disconnected environments.</li>
<li>Learn about the advanced installer and the reference configuration implemented with Ansible Playbooks.</li>
<li>Review the software components required for a disconnected install, including Red Hat OpenShift
Container Platform installation software, Red Hat OpenShift Container Platform images, a source code repository, and a development artifact repository.</li>
<li>Learn how to import images from preloaded Docker storage or a local repository.</li>
<li>Perform an installation of Red Hat OpenShift Container Platform, including importing other images
such as Nexus and deploying other infrastructure like the source code repository.</li>
</ul>
<h2 id="openshift-4-installation">OpenShift 4 Installation</h2>
<ul>
<li>Review the many components of the OpenShift architecture.</li>
<li>Use OpenShift installer to deploy an HA OpenShift cluster.</li>
<li>Understand how application HA is achieved with the replication controller and the scheduler.</li>
<li>Learn about container log aggregation and metrics collection.</li>
<li>Use diagnostics tools in server and client environments.</li>
</ul>
<h2 id="machine-management">Machine Management</h2>
<ul>
<li>Review how OpenShift manages underlying infrastructure</li>
<li>Change MachineSet and Machine Configuration</li>
<li>Add nodes by scaling MachineSets</li>
<li>Understand and configure the Cluster Autoscaler</li>
</ul>
<h2 id="networking">Networking</h2>
<ul>
<li>Review networking goals and software-defined networking (SDN).</li>
<li>Review packet-flow scenarios and learn about traffic inside an OpenShift cluster.</li>
<li>Learn about local traffic in a cluster and how OpenShift controls access between different OpenShift namespaces and projects.</li>
<li>Learn how pods connect to external hosts and how IPTables controls access to networks outside the SDN cluster.</li>
<li>Study how pods communicate across a cluster and about traffic inside a cluster.</li>
<li>Learn about pod IP allocation and network isolation.</li>
<li>Configure SDN and set up external access.</li>
<li>Study project network management and setting secure network policies.</li>
<li>Learn about the seven common use cases for NetworkPolicy.</li>
<li>Learn about OpenShift internal DNS.</li>
<li>Learn about external access, including load balancing in SDN and establishing a tunnel in ramp mode.</li>
<li>Understand how OpenShift masters also serve as an internal domain name service (DNS).</li>
</ul>
<h2 id="network-policy">Network Policy</h2>
<ul>
<li>Learn about NetworkPolicy</li>
<li>Configure NetworkPolicy objects in the cluster</li>
<li>Protect a complex application using NetworkPolicy</li>
</ul>
<h2 id="managing-compute-resources">Managing Compute Resources</h2>
<ul>
<li>Learn what compute resources are and about requesting, allocating, and consuming them.</li>
<li>Learn about compute requests, CPU limits, and memory limits.</li>
<li>Learn about quality of service (QoS) tiers.</li>
<li>Create, edit, and delete project resource limits.</li>
<li>Learn how limit ranges enumerate and specify project compute resource constraints for pods, containers, images, and image streams.</li>
<li>Learn about container limits and image limits.</li>
<li>Learn how to use quotas and limit ranges to limit the number of objects or amount of compute resources that are used in a project.</li>
<li>Understand which resources can be managed with quotas.</li>
<li>Learn how BestEffort, NotBestEffort, Terminating, and NotTerminating quota scopes restricts pod resources.</li>
<li>Understand how quotas are enforced and how to set quotas across multiple projects.</li>
<li>Learn about overcommitting CPU and memory and how to configure overcommitting.</li>
</ul>
<h2 id="general-comments">General comments</h2>
<p>I really liked the course. It wasn’t very advanced in general,
but it was a very good approach to “the actual thing” people are
doing when deploying the technology in production-ready environments.</p>
<p>It’s an operators course IMHO, not a developers course;
I missed having an actual view of where the code is,
how it is organized, and how we can contribute to the Kubernetes/OpenShift community.</p>
<p>It has a very nice 8-hour exam xD xD :/ ;( at the very end
as proof that you understood what you did there.
I think I did OK, but I’m still waiting for the course test results.</p>
<p>Even so, I tried to dig into the code and better understand the
integration with the Python Kubernetes client, and gladly created an issue
report with its corresponding PR to fix it :)</p>
<p>What follows is a brief history of what I hacked in the cluster while doing the course.</p>
<h2 id="hacking-the-environment">Hacking the environment</h2>
<p>Let’s start with the research question!</p>
<ul>
<li>How does the replication controller manage a massive pod kill?</li>
<li>Does it recover fast?</li>
<li>Does it recover at all?</li>
</ul>
<p>Let’s create a simple web application using 500 pods.</p>
<p><img src="/static/openshift_1.png" alt="" />
<img src="/static/openshift_2.png" alt="" />
<img src="/static/openshift_3.png" alt="" />
<img src="/static/openshift_4.png" alt="" /></p>
<p>I have created a simple script using the Python
Kubernetes client to demonstrate the cluster behavior.</p>
<p>The Python script is a simple application using two threads in order to measure
the number of available and unavailable pods for the application while, at the same
time, we kill some pods based on a Poisson distribution.</p>
<p>Let’s see how the cluster behaves when killing pods
from the test-webapp namespace.</p>
<pre><code>[ccamacho@localhost mocoloco]$ python script.py
</code></pre>
<h2 id="buum">Buum!</h2>
<p>The Python Kubernetes client failed, allowing me to create a GitHub
<a href="https://github.com/kubernetes-client/python-base/issues/139">Issue report</a>
and a
<a href="https://github.com/kubernetes-client/python-base/pull/140">Pull request</a>
with its corresponding fix.</p>
<pre><code>First error within the Kubernetes python client:
test-webapp-1-zv6jr Running 10.129.2.168
Traceback (most recent call last):
File "watch.py", line 75, in <module>
for event in stream:
File "/usr/lib/python2.7/site-packages/kubernetes/watch/watch.py", line 134, in stream
for line in iter_resp_lines(resp):
File "/usr/lib/python2.7/site-packages/kubernetes/watch/watch.py", line 47, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
AttributeError: 'HTTPResponse' object has no attribute 'read_chunked'
</code></pre>
<h2 id="the-execution">The execution</h2>
<p>After hacking around a bit with the client’s methods this is the result</p>
<p><img src="/static/openshift_5.png" alt="" /></p>
<p>It seems that even if pod creation is fast enough, it takes some time to fully recover.</p>
<p>Here is the whole script if you are interested:</p>
<pre><code class="language-python">import matplotlib
matplotlib.use('Agg')
import random
import time
from scipy.stats import poisson
import matplotlib.pyplot as plt
import matplotlib.dates as md
from kubernetes import client, config
from kubernetes.client.rest import ApiException
import threading
import datetime

global_available = []
global_unavailable = []
global_kill = []

t1_stop = threading.Event()
t2_stop = threading.Event()


def delete_pod(name, namespace):
    core_v1 = client.CoreV1Api()
    delete_options = client.V1DeleteOptions()
    try:
        api_response = core_v1.delete_namespaced_pod(
            name=name,
            namespace=namespace,
            body=delete_options)
    except ApiException as e:
        print("Exception when calling CoreV1Api->delete_namespaced_pod: %s\n" % e)


def get_pods(namespace=''):
    api_instance = client.CoreV1Api()
    try:
        if namespace == '':
            api_response = api_instance.list_pod_for_all_namespaces()
        else:
            api_response = api_instance.list_namespaced_pod(namespace, field_selector='status.phase=Running')
        return api_response
    except ApiException as e:
        print("Exception when calling CoreV1Api->list_pod_for_all_namespaces: %s\n" % e)


def get_event(namespace, stop):
    # Poll the deployment status every 2 seconds until asked to stop.
    while not stop.is_set():
        config.load_kube_config()
        configuration = client.Configuration()
        configuration.assert_hostname = False
        api_client = client.api_client.ApiClient(configuration=configuration)
        dat = datetime.datetime.now()
        api_instance = client.AppsV1beta1Api()
        api_response = api_instance.read_namespaced_deployment_status('example', namespace)
        global_available.append((dat, api_response.status.available_replicas))
        global_unavailable.append((dat, api_response.status.unavailable_replicas))
        time.sleep(2)
    t2_stop.set()
    print("Ending live monitor")


def run_histogram(namespace, stop):
    # Random numbers from a Poisson distribution.
    n = 500
    a = 0
    data_poisson = poisson.rvs(mu=10, size=n, loc=a)
    counts, bins, bars = plt.hist(data_poisson)
    plt.close()
    config.load_kube_config()
    configuration = client.Configuration()
    configuration.assert_hostname = False
    api_client = client.api_client.ApiClient(configuration=configuration)
    for experiment in counts:
        pod_list = get_pods(namespace=namespace)
        aux_li = []
        for fil in pod_list.items:
            if fil.status.phase == "Running":
                aux_li.append(fil)
        pod_list = aux_li
        # From the Running pods I randomly choose those to die
        # based on the histogram length.
        to_be_killed = random.sample(pod_list, int(experiment))
        for pod in to_be_killed:
            delete_pod(pod.metadata.name, pod.metadata.namespace)
        print("To be killed: " + str(experiment))
        global_kill.append((datetime.datetime.now(), int(experiment)))
        print(datetime.datetime.now())
    print("Ending histogram execution")
    # Keep monitoring for 5 more minutes to capture the recovery.
    time.sleep(300)
    t1_stop.set()


def plot_graph():
    plt.style.use('classic')
    ax = plt.gca()
    ax.xaxis_date()
    xfmt = md.DateFormatter('%H:%M:%S')
    ax.xaxis.set_major_formatter(xfmt)
    x_available = [x[0] for x in global_available]
    y_available = [x[1] for x in global_available]
    plt.plot(x_available, y_available, color='blue')
    plt.plot(x_available, y_available, color='blue', marker='o', label='Available pods')
    x_unavailable = [x[0] for x in global_unavailable]
    y_unavailable = [x[1] for x in global_unavailable]
    plt.plot(x_unavailable, y_unavailable, color='magenta')
    plt.plot(x_unavailable, y_unavailable, color='magenta', marker='o', label='Unavailable pods')
    x_kill = [x[0] for x in global_kill]
    y_kill = [x[1] for x in global_kill]
    plt.plot(x_kill, y_kill, color='red', marker='o', label='Killed pods')
    plt.legend(loc='upper left')
    plt.savefig('foo.png', bbox_inches='tight')
    plt.close()


if __name__ == "__main__":
    namespace = "test-webapp"
    try:
        t1 = threading.Thread(target=get_event, args=(namespace, t1_stop))
        t1.start()
        t2 = threading.Thread(target=run_histogram, args=(namespace, t2_stop))
        t2.start()
    except Exception:
        print("Error: unable to start thread")
    while not t2_stop.is_set():
        pass
    print("Ended all threads")
    plot_graph()
</code></pre>
Porsche 993The Porsche 993 is the internal designation for the Porsche 911 model manufactured and sold between January 1994 and early 1998 (model years 1995–1998 in the United States), replacing the 964. Its discontinuation marked the end of air-cooled Porsches. The...2019-06-13T00:00:00+00:00https://www.pubstack.com/blog/2019/06/13/porsche-993Carlos Camacho<p>The Porsche 993 is the internal designation for the
Porsche 911 model manufactured and sold between January
1994 and early 1998 (model years 1995–1998 in the United States),
replacing the 964. Its discontinuation marked the end of air-cooled Porsches.</p>
<p><img src="/static/993_side.jpg" alt="" /></p>
<p>The 993 was much improved over, and quite different from, its predecessor.
According to Porsche, every part of the car was designed from the ground up,
including the engine, and only 20% of its parts were carried over from the
previous generation. Porsche refers to the 993 as “a significant advance,
not just from a technical, but also a visual perspective.” Porsche’s engineers
devised a new light-alloy subframe with coil and wishbone suspension
(an all-new multi-link system), putting the previous lift-off oversteer behind
and making significant progress with the engine and handling, creating a more
civilized car overall that provided an improved driving experience. The 993 was
also the first 911 to receive a six-speed transmission.</p>
<p><img src="/static/964_side.jpg" alt="" /></p>
<p>The external design of the Porsche 993, penned by English designer Tony Hatter,
retained the basic body shell architecture of the 964 and other earlier 911 models,
but with revised exterior panels, with much more flared wheel arches, a smoother
front and rear bumper design, an enlarged retractable rear wing and teardrop mirrors.
A 993 was promoted globally via its role of the safety car during the 1994 Formula One season.</p>
<p>Next, you can find several resources related to the 993 internals.</p>
<p>Please feel free to submit any pull request to add more resources here if you find them useful.
New files added to the folder <a href="https://github.com/pubstack/pubstack.github.io/tree/master/static/993">/static/993/</a>
will be added automatically to this post after
the commits are merged.</p>
<ul id="files_list"></ul>
<script>
// Fetch the folder listing from the GitHub API and append one link per file.
$.getJSON('https://api.github.com/repos/pubstack/pubstack.github.io/contents/static/993/', function(data) {
  var ul = document.getElementById("files_list");
  for (var i = 0; i < data.length; i++) {
    var li = document.createElement("li");
    var a = document.createElement('a');
    a.setAttribute('href', data[i].download_url);
    a.innerHTML = data[i].name;
    li.appendChild(a);
    ul.appendChild(li);
  }
});
</script>
<p>Now some more pictures, and of course, if you find this article useful, feel free to share it!</p>
<div class="col">
<h4 class="block-title">Gallery</h4>
<div class="block-body">
<ul class="item-list-round" data-magnific="gallery">
<li><a href="/static/964_front.jpg" style="background-image: url('/static/964_front.jpg');"></a></li>
<li><a href="/static/964_rear.jpg" style="background-image: url('/static/964_rear.jpg');"></a></li>
<li><a href="/static/964_rear2.jpg" style="background-image: url('/static/964_rear2.jpg');"></a></li>
<li><a href="/static/993_inside.jpg" style="background-image: url('/static/993_inside.jpg');"></a></li>
<li><a href="/static/993_inside2.jpg" style="background-image: url('/static/993_inside2.jpg');"></a></li>
<li><a href="/static/964_inside2.jpg" style="background-image: url('/static/964_inside2.jpg');"></a></li>
<li><a href="/static/964_inside.jpg" style="background-image: url('/static/964_inside.jpg');"></a></li>
</ul>
</div>
</div>
<hr style="clear:both; border:0px solid #fff;" />
Kubebox - TrayThe components tray is the enclosure part allowing the user to allocate mother boards, GPUS, FPGAs, Disks arrays. Up to 8 trays in the same enclosure. Frontal view of the generic tray. Lorem ipsum dolor sit amet, consectetur adipiscing elit,...2019-05-22T00:00:00+00:00https://www.pubstack.com/blog/2019/05/22/kubebox-01-trayCarlos Camacho<p>The components tray is the enclosure part that allows the user
to install motherboards, GPUs, FPGAs, and disk arrays.</p>
<p>Up to 8 trays fit in the same enclosure.</p>
<div id="kubebox_tray.glb" style="height:400px; display: flex; align-items: center; justify-content: center;"></div>
<script>
init("static/kubebox/", "kubebox_tray.glb");
</script>
<p>Frontal view of the generic tray.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_00.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: left; width: 100px; background: white;"><img width="100px" src="/static/kubebox/kubebox_tray_01.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_02.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: left; width: 100px; background: white;"><img width="100px" src="/static/kubebox/kubebox_tray_03.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_04.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: left; width: 100px; background: white;"><img width="100px" src="/static/kubebox/kubebox_tray_05.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_06.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p><strong><em>Comments are welcome as usual, thank you!</em></strong></p>
<h2 id="go-to-the-project-index">Go to the <a href="https://www.pubstack.com/blog/2019/05/21/kubebox.html">project index</a></h2>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2019/05/22:</strong> Initial version.</p>
</blockquote>
</div>
The Kubernetes in a box projectImplementing cloud computing solutions that runs in hybrid environments might be the final solution when comes to finding the best benefits/cost ratio. This post will be the main thread to build and describe the KIAB/Kubebox project (www.kubebox.org and/or www.kiab.org). Spoiler...2019-05-21T00:00:00+00:00https://www.pubstack.com/blog/2019/05/21/kubeboxCarlos Camacho<p>Implementing cloud computing solutions that run in hybrid environments might
be the best option when it comes to finding the best benefits/cost
ratio.</p>
<p>This post will be the main thread to build and describe the KIAB/Kubebox
project (<a href="http://www.kubebox.org">www.kubebox.org</a> and/or <a href="http://www.kiab.org">www.kiab.org</a>).</p>
<p>Spoiler alert!</p>
<p><img src="/static/kubebox/kubebox.jpg" alt="" /></p>
<h1 id="the-name">The name</h1>
<p>First things first, the name. I have two names in mind with the same meaning.
The first one is KIAB (Kubernetes In A Box); this name came to my mind from
the <a href="https://es.wikipedia.org/wiki/Kiai">Kiai</a>
sound made by karatekas (practitioners of karate).
The second one is more
traditional, “Kubebox”. I have no preference, but
it would be awesome if you helped me
decide the official name for this project.</p>
<p><strong><em>Add a comment and contribute to select the project name!</em></strong></p>
<h1 id="introduction">Introduction</h1>
<p>This project is about integrating already market-available
devices to run cloud software as an appliance.</p>
<p>The proof-of-concept delivered in this series of posts will allow people
to put a well-known set of hardware devices into a single chassis, whether to
create their own cloud appliances or for research and development,
continuous integration, testing, home labs, staging or production-ready
environments, or simply just for fun.</p>
<p>Hereby I humbly present to you the design of KubeBox/KIAB,
an open chassis specification for building cloud appliances.</p>
<p>The case enclosure is fully designed and hopefully in the last phases
of building the first set of enclosures. The posts will appear
as I find free cycles for writing the overall description.</p>
<h1 id="use-cases">Use cases</h1>
<p>Several use cases can be defined to run on a KubeBox chassis.</p>
<ul>
<li>AWS Outposts.</li>
<li>Development environments.</li>
<li>Edge computing.</li>
<li>Production environments for small sites.</li>
<li>GitLab CI integration.</li>
<li>Demos for summits and conferences.</li>
<li>R&amp;D: FPGA usage, deep learning, AI, TensorFlow, among many others.</li>
<li>Marketing WOW effect.</li>
<li>Training.</li>
</ul>
<h1 id="enclosure-design">Enclosure design</h1>
<p>The enclosure is designed as a rackable unit occupying 7U.
It tries to minimize the space needed to deploy
an up to 8-node cluster with redundancy for both power and networking.</p>
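<p>As a quick sanity check on the 7U figure: a standard rack unit is 1.75 inches, so the enclosure height works out as follows (a back-of-the-envelope sketch, not a measured spec).</p>
<pre><code class="language-bash"># One rack unit (U) is 1.75 inches; convert the 7U enclosure height to mm.
awk -v u=7 'BEGIN {
  inches = u * 1.75          # 12.25 in
  printf "%dU = %.2f in = %.1f mm\n", u, inches, inches * 25.4
}'
</code></pre>
<p>Roughly 31 cm of rack height for an up to 8-node cluster.</p>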
<h1 id="cloud-appliance-description">Cloud appliance description</h1>
<p>This build will be described across several sub-posts
linked from this main thread.
The posts will be created in no specific order,
depending on my availability.</p>
<ul>
<li>Backstory and initial parts selection.</li>
<li>Designing the case part 1: Design software.</li>
<li>A brief introduction to CAD software.</li>
<li>Designing the case part 2: U’s, brakes, and ghosts.</li>
<li>Designing the case part 3: Sheet thickness and bend radius.</li>
<li>Designing the case part 4: Parts Allowance (finish, tolerance, and fit).</li>
<li>Designing the case part 5: Vent cutouts and frickin’ laser beams!</li>
<li>Designing the case part 6: Self-clinching nuts and standoffs.</li>
<li>Designing the case part 7: The standoffs strike back.</li>
<li>A brief primer on screws and PEMSERTs.</li>
<li>Designing the case part 8: Implementing PEMSERTs and screws.</li>
<li>Designing the case part 9: Bend reliefs and flat patterns.</li>
<li><a href="https://www.pubstack.com/blog/2019/05/22/kubebox-01-tray.html">Designing the case part 10: Tray caddy, to be used with GPU, Mother boards, disks, any other peripherals you want to add to the enclosure.</a></li>
<li>Designing the case part 11: Components rig.</li>
<li>Designing the case part 12: Power supply.</li>
<li>Designing the case part 13: Networking.</li>
<li>Designing the case part 14: 3D printed supports.</li>
<li>Designing the case part 15: Adding computing power.</li>
<li>Designing the case part 16: Adding Storage.</li>
<li>Designing the case part 17: Front display and bastion for automation.</li>
<li>Manufacturing the case part 1: PEMSERT installation.</li>
<li>Manufacturing the case part 2: Bending metal.</li>
<li>Manufacturing the case part 3: Bending metal.</li>
<li>KubeBox cloud appliance in detail!</li>
<li>Manufacturing the case part 0: Getting quotes.</li>
<li>Manufacturing the case part 1: Getting the cases.</li>
<li>Software deployments: Reference architecture.</li>
<li>Design final source files for the enclosure design.</li>
<li>KubeBox is fully functional.</li>
</ul>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2019/05/21:</strong> Initial version.</p>
</blockquote>
</div>
Running Relax-and-Recover to save your OpenStack deploymentReaR is a pretty impressive disaster recovery solution for Linux. Relax-and-Recover, creates both a bootable rescue image and a backup of the associated files you choose. When doing disaster recovery of a system, this Rescue Image plays the files back...2019-05-20T00:00:00+00:00https://www.pubstack.com/blog/2019/05/20/relax-and-recover-backupsCarlos Camacho<p>ReaR (Relax-and-Recover) is a pretty impressive disaster recovery
solution for Linux. It creates both a
bootable rescue image and a backup of the associated files you choose.</p>
<p><img src="/static/ReAR_and_OpenStack.png" alt="" /></p>
<p>When doing disaster recovery of a system, this rescue image
plays the files back from the backup, restoring the latest
state in the twinkling of an eye.</p>
<p>Various configuration options are available for the rescue image.
For example, slim ISO files, USB sticks, or even images for PXE
servers can be generated. Just as many backup options are possible:
starting with a simple archive file (e.g. *.tar.gz),
various backup technologies such as IBM Tivoli Storage Manager (TSM),
EMC NetWorker (Legato), Bacula, or even Bareos can be addressed.</p>
<p>ReaR, written in Bash, enables the skilful
distribution of the rescue image and, if necessary, the archive file via
NFS, CIFS (SMB), or another transport method over the network.
The actual recovery process then takes place via this transport route.</p>
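<p>As an illustration of those options, a minimal <code>/etc/rear/local.conf</code> targeting an NFS export instead would look something like the following sketch (the server name and export path are placeholders, not from this deployment):</p>
<pre><code>OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backup-server/export/rear
</code></pre>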
<p>In this specific case, due to the nature of the OpenStack deployment, we will
choose the protocols that are allowed by default in the iptables rules (SSH and SFTP in particular).</p>
<p>But enough with the theory, here’s a practical example of one of many possible configurations.
We will apply this specific use of ReaR to recover
a failed control plane after a critical maintenance task (like an upgrade).</p>
<p><strong>01 - Prepare the Undercloud backup bucket.</strong></p>
<p>We need to prepare the place to store the backups from the Overcloud.
From the Undercloud, check that you have enough space for the backups
and prepare the environment. We will also create a dedicated user in the
Undercloud to be able to push the backups from the
controllers or the compute nodes.</p>
<pre><code class="language-bash">groupadd backup
mkdir /data
useradd -m -g backup -d /data/backup backup
echo "backup:backup" | chpasswd
chown -R backup:backup /data
chmod -R 755 /data
</code></pre>
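<p>The space check mentioned above can be scripted too; here is a hedged helper (the function name and the 50GB threshold are illustrative, not part of the original procedure):</p>
<pre><code class="language-bash"># Hypothetical helper: verify the filesystem behind a directory has at
# least the given number of gigabytes free before storing backups there.
check_backup_space() {
  local dir="$1" need_gb="$2"
  local avail_kb
  avail_kb=$(df --output=avail "$dir" | tail -n 1 | tr -d ' ')
  if [ "$avail_kb" -lt $((need_gb * 1024 * 1024)) ]; then
    echo "Not enough space in $dir (need ${need_gb}G)" >&2
    return 1
  fi
  echo "OK: $dir has room for the backups"
}

# For example: check_backup_space /data 50
</code></pre>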
<p><strong>02 - Run the backup from the Overcloud nodes.</strong></p>
<p>Let’s install some required packages and run some previous
configuration steps.</p>
<pre><code class="language-bash"># Install the required packages
sudo yum install rear genisoimage syslinux lftp wget -y
# Make sure you are able to use sshfs to store the ReaR backup
sudo yum install fuse -y
sudo yum groupinstall "Development tools" -y
wget http://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/f/fuse-sshfs-2.10-1.el7.x86_64.rpm
sudo rpm -i fuse-sshfs-2.10-1.el7.x86_64.rpm
sudo mkdir -p /data/backup
sudo sshfs -o allow_other backup@undercloud-0:/data/backup /data/backup
# Use the backup user's password, which is... backup
</code></pre>
<p>Now, let’s configure the ReaR config file.</p>
<pre><code class="language-bash">#Configure ReaR
sudo tee -a "/etc/rear/local.conf" > /dev/null <<'EOF'
OUTPUT=ISO
OUTPUT_URL=sftp://backup:backup@undercloud-0/data/backup/
BACKUP=NETFS
BACKUP_URL=sshfs://backup@undercloud-0/data/backup/
BACKUP_PROG_COMPRESS_OPTIONS=( --gzip )
BACKUP_PROG_COMPRESS_SUFFIX=".gz"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/data/*' )
EOF
</code></pre>
<p>Now run the backup; this should create an ISO image on
the Undercloud node (/data/backup/).</p>
<p><strong>You will be asked for the backup user password</strong></p>
<pre><code class="language-bash">sudo rear -d -v mkbackup
</code></pre>
<p>Now, simulate a failure xD</p>
<pre><code># sudo rm -rf /lib
</code></pre>
<p>After the ISO image is created, we can proceed to
verify we can restore it from the Hypervisor.</p>
<p><strong>03 - Prepare the hypervisor.</strong></p>
<pre><code class="language-bash"># Enable the use of fusefs for the VMs on the hypervisor
setsebool -P virt_use_fusefs 1
# Install some required packages
sudo yum install -y fuse-sshfs
# Mount the Undercloud backup folder to access the images
mkdir -p /data/backup
sudo sshfs -o allow_other root@undercloud-0:/data/backup /data/backup
ls /data/backup/*
</code></pre>
<p><strong>04 - Stop the damaged controller node.</strong></p>
<pre><code class="language-bash">virsh shutdown controller-0
# virsh destroy controller-0
# Wait until it is down
watch virsh list --all
# Backup the guest definition
virsh dumpxml controller-0 > controller-0.xml
cp controller-0.xml controller-0.xml.bak
</code></pre>
<p>Now, we need to change the guest definition to boot from the ISO file.</p>
<p>Edit controller-0.xml and update it to boot from the ISO file.</p>
<p>Find the OS section, add the cdrom device, and enable the boot menu.</p>
<pre><code><os>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='yes'/>
</os>
</code></pre>
<p>Edit the devices section and add the CDROM.</p>
<pre><code><disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/data/backup/rear-controller-0.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
</code></pre>
<p>Update the guest definition.</p>
<pre><code class="language-bash">virsh define controller-0.xml
</code></pre>
<p>Restart and connect to the guest</p>
<pre><code class="language-bash">virsh start controller-0
virsh console controller-0
</code></pre>
<p>You should be able to see the boot menu to start the recover process, select Recover controller-0 and follow the instructions.</p>
<p><img src="/static/ReAR1.PNG" alt="" /></p>
<p>Now, before proceeding to run the controller restore, it’s possible that
the host undercloud-0 can’t be resolved; if so, just:</p>
<pre><code class="language-bash">echo "192.168.24.1 undercloud-0" >> /etc/hosts
</code></pre>
<p>Having resolved the Undercloud host, just follow the wizard, Relax And Recover :)</p>
<p>You should see a message like:</p>
<pre><code>Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
RESCUE controller-0:~ # rear recover
</code></pre>
<p><img src="/static/ReAR2.PNG" alt="" /></p>
<p>The image restore should progress quickly.</p>
<p><img src="/static/ReAR3.PNG" alt="" /></p>
<p>Continue to see the restore evolution.</p>
<p><img src="/static/ReAR4.PNG" alt="" /></p>
<p>Now, each time it reboots, the node will have the ISO file
as the first boot option, so that is something we need to fix.
In the meantime, let’s check if the restore went fine.</p>
<p>Reboot the guest booting from the hard disk.
<img src="/static/ReAR5.PNG" alt="" /></p>
<p>Now we can see that the guest VM started successfully.
<img src="/static/ReAR6.PNG" alt="" /></p>
<p>Now we need to restore the guest to its original definition,
so from the Hypervisor we need to restore the <code>controller-0.xml.bak</code>
file we created.</p>
<pre><code class="language-bash">#From the Hypervisor
virsh shutdown controller-0
watch virsh list --all
virsh define controller-0.xml.bak
virsh start controller-0
</code></pre>
<p>Enjoy.</p>
<h2 id="considerations">Considerations:</h2>
<ul>
<li>Space.</li>
<li>Multiple protocols are supported, but we might then need to update firewall rules; that’s why I preferred SFTP.</li>
<li>Network load when moving data.</li>
<li>Shutdown/Starting sequence for HA control plane.</li>
<li>Do we need to backup the data plane?</li>
<li>User workloads should be handled by a third party backup software.</li>
</ul>
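<p>On the shutdown/starting sequence point: with an HA control plane it is safer to stop controllers one at a time, waiting for each to be fully down before touching the next. A hedged sketch from the hypervisor (the helper name is made up; adapt the guest names to your environment):</p>
<pre><code class="language-bash"># Hypothetical helper: stop HA controller guests one at a time,
# waiting for each to report "shut off" before moving to the next.
stop_controllers_in_order() {
  for node in "$@"; do
    virsh shutdown "$node"
    until virsh domstate "$node" | grep -q "shut off"; do
      sleep 5
    done
  done
}

# For example: stop_controllers_in_order controller-0 controller-1 controller-2
</code></pre>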
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2019/05/20:</strong> Initial version.</p>
<p><strong>2019/06/18:</strong> Appeared in <a href="https://superuser.openstack.org/articles/tutorial-rear-openstack-deployment/">OpenStack Superuser blog.</a></p>
</blockquote>
</div>
A linux or unix sysadmin in his natural habitatA linux sysadmin in his natural habitat.
Explanation: No comments.
Disclaimer.
2019-02-17T00:00:00+00:00https://www.pubstack.com/blog/2019/02/17/podCarlos Camacho<p>A Linux sysadmin in his natural habitat.</p>
<p><img src="/static/pod/2019-02-17-sysadmin.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Bye bye old themeLet’s say bye bye and thanks to the old theme I used for the last 3 years. I created this blog, Pubstack, just after joining Red Hat. This with the purpose of logging part of my upstream work within the...2019-02-16T00:00:00+00:00https://www.pubstack.com/blog/2019/02/16/bye-bye-old-themeCarlos Camacho<p>Let’s say bye bye and thanks to the old theme I used for the last 3 years.</p>
<p>I created this blog, <a href="https://www.pubstack.com">Pubstack</a>,
just after joining Red Hat, with the purpose of
logging part of my upstream work within the <a href="https://www.tripleo.org">TripleO</a>
community.</p>
<p>It evolved in such a way that I’m currently adding other types of posts:
management, hobbies, software development,
other cloud computing technologies, and talks, among many other things.</p>
<p>I reached a milestone in which, after 70 posts, the old theme
did not scale correctly anymore.</p>
<p>That’s why I welcome the new theme and thank the
old one for holding up over the last few years.</p>
<p><img src="/static/pubstack-v1.png" alt="" /></p>
<p>There are a few things to polish in the new site, like adding my
CV and about page. Otherwise, shiiiiip it!!!</p>
<p>Thank you!!!!</p>
TripleO - Deployment configurationsThis post is a summary of the deployments I usually test for deploying TripleO using quickstart. The following steps need to run in the Hypervisor node in order to deploy both the Undercloud and the Overcloud. You need to execute...2019-02-05T00:00:00+00:00https://www.pubstack.com/blog/2019/02/05/tripleo-quickstart-deploymentsCarlos Camacho<p>This post is a summary of the deployments I usually test for deploying TripleO
using quickstart.</p>
<p><img src="/static/dude-just-deploy-it-already.jpg" alt="" /></p>
<p>The following steps need to run in the Hypervisor node
in order to deploy both the Undercloud and the Overcloud.</p>
<p>You need to execute them one after the other; the idea of this recipe is to
have something you can just copy and paste.</p>
<p>Once the last step ends, you should be able to connect to the
Undercloud VM to start operating your Overcloud deployment.</p>
<p>The usual steps are:</p>
<p><strong>01 - Prepare the hypervisor node.</strong></p>
<p>Now, let’s install some dependencies.
Same Hypervisor node, same <code>root</code> user.</p>
<pre><code class="language-bash"># In this dev. env. /var is only 50GB, so I will create
# a sym link to another location with more capacity.
# It will easily take more than 50GB to deploy a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
# Disable IPv6 lookups
# sudo bash -c "cat >> /etc/sysctl.conf" << EOL
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
# EOL
# sudo sysctl -p
# Enable IPv6 in kernel cmdline
# sed -i s/ipv6.disable=1/ipv6.disable=0/ /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
sudo yum install libvirt-python python-lxml libvirt -y
</code></pre>
<p><strong>02 - Create the toor user (from the Hypervisor node, as root).</strong></p>
<pre><code class="language-bash">sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
| sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
cat .ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
echo '127.0.0.1 127.0.0.2' | sudo tee -a /etc/hosts
export VIRTHOST=127.0.0.2
ssh root@$VIRTHOST uname -a
</code></pre>
<p>Now, continue as the <code>toor</code> user and prepare the Hypervisor node
for the deployment.</p>
<p><strong>03 - Clone repos and install deps.</strong></p>
<pre><code class="language-bash">git clone \
https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
bash ./tripleo-quickstart/quickstart.sh \
--install-deps
sudo setenforce 0
</code></pre>
<p>Export some variables used in the deployment command.</p>
<p><strong>04 - Export common variables.</strong></p>
<pre><code class="language-bash">export CONFIG=~/deploy-config.yaml
export VIRTHOST=127.0.0.2
</code></pre>
<p>Now we will create the configuration file used for the deployment;
depending on the file you choose, you will deploy different environments.</p>
<p><strong>05 - Click on the environment description to expand the recipe.</strong></p>
<details>
<summary><strong>OpenStack [Containerized & HA] - 1 Controller, 1 Compute</strong></summary>
<pre><code class="language-bash">
cat > $CONFIG << EOF
overcloud_nodes:
- name: control_0
flavor: control
virtualbmc_port: 6230
- name: compute_0
flavor: compute
virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
</code></pre>
</details>
<details>
<summary><strong>OpenStack [Containerized & HA] - 3 Controllers, 1 Compute</strong></summary>
<pre><code class="language-bash">
cat > $CONFIG << EOF
overcloud_nodes:
- name: control_0
flavor: control
virtualbmc_port: 6230
- name: control_1
flavor: control
virtualbmc_port: 6231
- name: control_2
flavor: control
virtualbmc_port: 6232
- name: compute_1
flavor: compute
virtualbmc_port: 6233
node_count: 4
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
--control-scale 3
--compute-scale 1
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
</code></pre>
</details>
<details>
<summary><strong>OpenShift [Containerized] - 1 Controller, 1 Compute</strong></summary>
<pre><code class="language-bash">
cat > $CONFIG << EOF
# Original from https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/featureset033.yml
composable_scenario: scenario009-multinode.yaml
deployed_server: true
network_isolation: false
enable_pacemaker: false
overcloud_ipv6: false
containerized_undercloud: true
containerized_overcloud: true
# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: false
undercloud_enable_validations: false
# This enables the deployment of the overcloud with SSL.
ssl_overcloud: false
# Centos Virt-SIG repo for atomic package
add_repos:
  # NOTE(trown) The atomic package from centos-extras does not work for
  # us but its version is higher than the one from the virt-sig. Hence,
  # using priorities to ensure we get the virt-sig package.
  - type: package
    pkg_name: yum-plugin-priorities
  - type: generic
    reponame: quickstart-centos-paas
    filename: quickstart-centos-paas.repo
    baseurl: https://cbs.centos.org/repos/paas7-openshift-origin311-candidate/x86_64/os/
  - type: generic
    reponame: quickstart-centos-virt-container
    filename: quickstart-centos-virt-container.repo
    baseurl: https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/
    includepkgs:
      - atomic
    priority: 1
extra_args: ''
container_args: >-
  # If Pike or Queens
  #-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
  # If Ocata, Pike, Queens or Rocky
  #-e /home/stack/containers-default-parameters.yaml
  # If >= Stein
  -e /home/stack/containers-prepare-parameter.yaml
  -e /usr/share/openstack-tripleo-heat-templates/openshift.yaml
# NOTE(mandre) use container images mirrored on the dockerhub to take advantage
# of the proxy setup by openstack infra
docker_openshift_etcd_namespace: docker.io/
docker_openshift_cluster_monitoring_namespace: docker.io/tripleomaster
docker_openshift_cluster_monitoring_image: coreos-cluster-monitoring-operator
docker_openshift_configmap_reload_namespace: docker.io/tripleomaster
docker_openshift_configmap_reload_image: coreos-configmap-reload
docker_openshift_prometheus_operator_namespace: docker.io/tripleomaster
docker_openshift_prometheus_operator_image: coreos-prometheus-operator
docker_openshift_prometheus_config_reload_namespace: docker.io/tripleomaster
docker_openshift_prometheus_config_reload_image: coreos-prometheus-config-reloader
docker_openshift_kube_rbac_proxy_namespace: docker.io/tripleomaster
docker_openshift_kube_rbac_proxy_image: coreos-kube-rbac-proxy
docker_openshift_kube_state_metrics_namespace: docker.io/tripleomaster
docker_openshift_kube_state_metrics_image: coreos-kube-state-metrics
deploy_steps_ansible_workflow: true
config_download_args: >-
  -e /home/stack/config-download.yaml
  --disable-validations
  --verbose
composable_roles: true
overcloud_roles:
  - name: Controller
    CountDefault: 1
    tags:
      - primary
      - controller
    networks:
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant
  - name: Compute
    CountDefault: 0
    tags:
      - compute
    networks:
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant
tempest_config: false
test_ping: false
run_tempest: false
EOF
</code></pre>
</details>
<p><br /></p>
<p>From the Hypervisor, as the <code>toor</code> user
run the deployment command to deploy
both your Undercloud and Overcloud.</p>
<p><strong>06 - Deploy TripleO.</strong></p>
<pre><code class="language-bash">bash ./tripleo-quickstart/quickstart.sh \
--clean \
--release master \
--teardown all \
--tags all \
-e @$CONFIG \
$VIRTHOST
</code></pre>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2019/02/05:</strong> Initial version.</p>
<p><strong>Updated 2019/02/05:</strong> TODO: Test the OpenShift deployment.</p>
<p><strong>Updated 2019/02/06:</strong> Added some clarifications about where the commands should run.</p>
</blockquote>
</div>
Remote work management, the never ending story...This will be a quick summary of my last 3 years experience working remotely for one of the best and biggest companies investing in and developing Open Source software. Also, my idea is to keep this post updated with my latest...2019-01-23T00:00:00+00:00https://www.pubstack.com/blog/2019/01/23/remote-work-managementCarlos Camacho<p>This will be a quick summary of my last 3 years of experience
working remotely for one of the best and biggest companies
investing in and developing Open Source software.
My idea is also to keep this post
updated with my latest experiences and tips for making
remote work the best possible experience.</p>
<p><img src="/static/remote_01_naked.jpg" alt="" /></p>
<p>The first thing that comes to my mind is a scene
from the well-known TV series “Naked and Afraid”.
Yes, you will end up naked in the jungle
waiting for guidance, and it’s not your fault
if that help never comes.</p>
<p>Working remotely brings several nice and not-so-nice
experiences as you carry out your duties in your
awesome new remote job.</p>
<h1 id="good">Good:</h1>
<ul>
<li>Working hours flexibility.</li>
<li>Work life balance.</li>
<li>Saving commuting time.</li>
<li>Cooking your own healthy food.</li>
<li>Geographically distributed teams allow companies to hire the best talent from across the globe.</li>
<li>And many more …</li>
</ul>
<p>There are also several benefits for the company you work for,
so it’s not a benefit only for you: in most cases the company
will optimize how many resources it needs to scale its workforce,
saving on offices, electricity, heating, Internet access,
food, furniture, and many other things…</p>
<p>But this post is not about the good things; it’s about identifying
those corner cases in which working remotely might become a daunting task.
After identifying those cases, we should be able to provide some
countermeasures to avoid sad, frustrated and burnt-out employees
(at least for me they worked quite well).</p>
<h1 id="not-so-good">Not so good:</h1>
<h2 id="operation-costs-are-translated-to-the-employee">Operation costs are translated to the employee</h2>
<p><img src="/static/remote_03_time.png" alt="" /></p>
<p>Yes, in some cases you will have to pay for all the expenses generated
by working from home, like a good chair, Internet access, electricity
and gas, among others. But this is not necessarily a win-lose deal,
because you will also save on fuel, food and, most importantly, <strong>TIME</strong>.</p>
<h2 id="the-slow-start">The slow start</h2>
<p><img src="/static/remote_03_onboard.jpg" alt="" /></p>
<p>Starting to work remotely is hard, so, IMHO,
you need to do a few things first.</p>
<ul>
<li>Get the context of your role in the team.</li>
<li>Understand your team goals, functions and responsibilities.</li>
<li>Get yourself a development environment as soon as possible.</li>
</ul>
<h2 id="hard-to-effectively-communicate-with-all-your-colleagues">Hard to effectively communicate with ALL your colleagues</h2>
<p><img src="/static/remote_04_com.png" alt="" /></p>
<p>This might create a sort of clique feeling inside the team,
where some people communicate with each other and others are left out,
which is bad because those left out might feel they are not part
of the team’s mission.</p>
<p>There are several solutions to this:</p>
<ul>
<li>Have a common and real-time communication channel, like IRC, Hangouts, Slack or any other technology you want to use.</li>
<li>Have everyone in the team speak daily about “What I’m doing and what is blocking me from achieving my tasks”. No more than 2 minutes per person in a daily stand-up call.</li>
<li>Avoid solo tasks; try to get at least 2 people per task. Even if it can be decomposed into different sub-tasks, make your team work together; using tools like tmate for pair programming sessions is also a good idea.</li>
</ul>
<h2 id="senior-roles-usually-feel-they-need-to-pay-back-the-freedom-with-more-hours">Senior roles usually feel they need to pay back the freedom with more hours</h2>
<p><img src="/static/remote_04_overwork.jpg" alt="" /></p>
<p>The freedom of remote work might be wrongly interpreted by some people.
Yes, you have the freedom to do laundry, take your kids to school,
or write this post. But there is no need to do excessive extra hours:
if you are able to measure the value you are generating each day, you won’t
feel the need to burn yourself out with extra hours.</p>
<p>If you don’t have clear tasks or goals you might end up with the
question, <strong>Am I doing too much? Am I working less than I should?</strong>
and there you have the error.
If you measure the value you are generating
you won’t have this question, period.</p>
<p>Measure everything! Measuring as much as you can gives you a better overview
of your current and day-to-day performance; e.g. something really simple might be a
Google Docs script connecting to the apps you usually
use to generate a report, giving you a better view of your own personal performance.</p>
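<p>To make that idea concrete, here is a toy sketch (not the author’s actual script; the log file name and its tab-separated format are assumptions) showing how even a few lines of shell can turn a simple task log into a per-day report:</p>

```shell
# Hypothetical example: tally "done" tasks per day from a tab-separated log.
# The tasks.log format (date<TAB>status<TAB>description) is an assumption.
printf '2019-01-21\tdone\treview PRs\n2019-01-21\tdone\tfix CI job\n2019-01-22\tdone\twrite docs\n' > tasks.log
awk -F'\t' '$2 == "done" { count[$1]++ } END { for (d in count) print d, count[d] }' tasks.log | sort
```

<p>On a real setup you would point this at whatever export your task tracker produces instead of a hand-written log.</p>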
<hr />
<p>PRODUCTIVITY IS NOT ABOUT TIME, PRODUCTIVITY IS ABOUT GENERATING ADDED VALUE (work smart!).</p>
<hr />
<blockquote>
<p>In the meantime, if you have covered all the tasks committed for the sprint
you should not feel bad about your performance.
But what if you haven’t defined those sprint tasks?</p>
</blockquote>
<p>So, the next item speaks about productivity,
innovating at your workplace and doing some planning kung-fu.</p>
<h2 id="productivity-and-innovation-vs-planned-work">Productivity and innovation vs Planned work</h2>
<p>There are people who perform incredibly well at their
work and just don’t like to plan the day-to-day tasks they need
to achieve, mostly because they have all the bits
in their minds. But this is a team, and we are only as good as the
lowest-performing component in the chain.</p>
<p><img src="/static/remote_02_improve.png" alt="" /></p>
<p>There are several solutions to this:</p>
<ul>
<li>Agree on a way of defining the tasks that need to be achieved in the sprint;
try not to use time as a measure for finishing tasks, instead try something
more subjective like value or difficulty.</li>
<li>Know what to do at any moment; sometimes, not knowing
will force you to invest much more time than you need to.</li>
<li>Keep all the knowledge in a single source of truth; yes, this is painful
when you don’t have it. Trello, Taiga, GitHub issues, Google Docs, Google Spreadsheets,
Launchpad, Bugzilla, Storyboard, and so many others… this is the toolset I usually
use in my day-to-day work. It has since evolved to Jira with some plugins to keep and maintain a single source of truth.</li>
</ul>
<p>The catch: sometimes overcommitment leaves you unable to innovate in your role,
so keep a little time for improving your product and yourself.</p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2019/01/23:</strong> First version</p>
</blockquote>
</div>
Vote for the OpenStack Berlin Summit presentations!I pushed some presentations for this year OpenStack summit in Berlin, the
presentations are related to updates, upgrades, backups, failures and restores.
¡¡¡Please vote!!!
TripleO presentation for Updates and Upgrades
TripleO presentation for Backups and Restores
Happy TripleOing!
2018-07-24T00:00:00+00:00https://www.pubstack.com/blog/2018/07/24/openstack-berlin-summit-vote-for-presentationsCarlos Camacho<p>I pushed some presentations for this year’s OpenStack Summit in Berlin; the
presentations are related to updates, upgrades, backups, failures and restores.</p>
<p><img src="/static/OpenStack-Summit-2018.png" alt="" /></p>
<h2 id="please-vote">¡¡¡Please vote!!!</h2>
<ul>
<li><a href="https://www.openstack.org/summit/berlin-2018/vote-for-speakers/#/21961">TripleO presentation for Updates and Upgrades</a></li>
<li><a href="https://www.openstack.org/summit/berlin-2018/vote-for-speakers/#/22101">TripleO presentation for Backups and Restores</a></li>
</ul>
<p>Happy TripleOing!</p>
TripleO deep dive session #13 (Containerized Undercloud)This is the 13th release of the TripleO “Deep Dive” sessions Thanks to Dan Prince & Emilien Macchi for this deep dive session about the next step of the TripleO’s Undercloud evolution. In this session, they will explain in detail...2018-05-31T00:00:00+00:00https://www.pubstack.com/blog/2018/05/31/tripleo-deep-dive-session-13Carlos Camacho<p>This is the 13th release of the <a href="http://www.tripleo.org/">TripleO</a>
“Deep Dive” sessions.</p>
<p>Thanks to <a href="https://dprince.github.io/">Dan Prince</a> & <a href="http://my1.fr/blog">Emilien Macchi</a>
for this deep dive session about the next step of the TripleO’s Undercloud evolution.</p>
<p>In this session, they explain in detail the effort to re-architect the Undercloud
to move towards containers in order to reuse the containerized Overcloud ecosystem.</p>
<p>You can access the <a href="https://docs.google.com/presentation/d/17Sbo0i0o2AhQBSjYH7eUrXKVHJcItklGZUUleFiC0ZU/">presentation</a>
or the
<a href="https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud">Etherpad</a> notes.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=lv233gPynwk">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/lv233gPynwk" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br />
<br /></p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a>
to have access to all available content.</p>
Testing Undercloud backup and restore using AnsibleThis post introduces how to run backups and restores using Ansible in TripleO. Testing the Undercloud backup and restore It is possible to test how the Undercloud backup and restore should be performed using Ansible. The following...2018-05-18T00:00:00+00:00https://www.pubstack.com/blog/2018/05/18/testing-undercloud-backup-and-restore-using-ansibleCarlos Camacho<p>This post introduces
how to run backups and restores using Ansible
in TripleO.</p>
<h1 id="testing-the-undercloud-backup-and-restore">Testing the Undercloud backup and restore</h1>
<p>It is possible to test how the Undercloud
backup and restore should be performed using
Ansible.</p>
<p>The following Ansible playbooks show
how Ansible can be used to test the
backup execution in a test environment.</p>
<h2 id="creating-the-ansible-playbooks-to-run-the-tasks">Creating the Ansible playbooks to run the tasks</h2>
<p>Create a yaml file called uc-backup.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
    - name: Remove any previously created UC backups
      shell: |
        source ~/stackrc
        openstack container delete undercloud-backups --recursive
      ignore_errors: True
    - name: Create UC backup
      shell: |
        source ~/stackrc
        openstack undercloud backup --add-path /etc/ --add-path /root/
</code></pre>
<p>Create a yaml file called uc-backup-download.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
    - name: Make sure the temp folder used for the restore does not exist
      become: true
      file:
        path: "/var/tmp/test_bk_down"
        state: absent
    - name: Create temp folder to unzip the backup
      become: true
      file:
        path: "/var/tmp/test_bk_down"
        state: directory
        owner: "stack"
        group: "stack"
        mode: "0775"
        recurse: "yes"
    - name: Download the UC backup to a temporary folder (After breaking the UC we won't be able to get it back)
      shell: |
        source ~/stackrc
        cd /var/tmp/test_bk_down
        openstack container save undercloud-backups
    - name: Unzip the backup
      become: true
      shell: |
        cd /var/tmp/test_bk_down
        tar -xvf UC-backup-*.tar
        gunzip *.gz
        tar -xvf filesystem-*.tar
    - name: Make sure stack user can get the backup files
      become: true
      file:
        path: "/var/tmp/test_bk_down"
        state: directory
        owner: "stack"
        group: "stack"
        mode: "0775"
        recurse: "yes"
</code></pre>
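<p>Before destroying anything, it is worth sanity-checking that the download step produced what the restore will need. The sketch below mocks the directory layout (on a real Undercloud it is created by the playbook above) so the check itself can run anywhere:</p>

```shell
# Sketch: verify the unpacked backup contains the /etc and /root trees
# and a database dump. The layout is mocked here for demonstration.
BK_DIR=/var/tmp/test_bk_down_demo
mkdir -p "$BK_DIR/etc" "$BK_DIR/root"
touch "$BK_DIR/all-databases-demo.sql"

for p in etc root; do
    [ -d "$BK_DIR/$p" ] && echo "OK: $p present"
done
ls "$BK_DIR"/all-databases-*.sql > /dev/null && echo "OK: DB dump present"
```

<p>On a real run you would point <code>BK_DIR</code> at <code>/var/tmp/test_bk_down</code> and skip the mocking lines.</p>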
<p>Create a yaml file called uc-destroy.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
    - name: Remove mariadb
      become: true
      yum: pkg={{ item }} state=absent
      with_items:
        - mariadb
        - mariadb-server
    - name: Remove files
      become: true
      file:
        path: "{{ item }}"
        state: absent
      with_items:
        - /root/.my.cnf
        - /var/lib/mysql
</code></pre>
<p>Create a yaml file called uc-restore.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
    - name: Install mariadb
      become: true
      yum: pkg={{ item }} state=present
      with_items:
        - mariadb
        - mariadb-server
    - name: Restart MariaDB
      become: true
      service: name=mariadb state=restarted
    - name: Restore the backup DB
      shell: cat /var/tmp/test_bk_down/all-databases-*.sql | sudo mysql
    - name: Restart MariaDB to refresh the permissions
      become: true
      service: name=mariadb state=restarted
    - name: Register root password
      become: true
      shell: cat /var/tmp/test_bk_down/root/.my.cnf | grep -m1 password | cut -d'=' -f2 | tr -d "'"
      register: oldpass
    - name: Clean root password from MariaDB to reinstall the UC
      shell: |
        mysqladmin -u root -p{{ oldpass.stdout }} password ''
    - name: Clean users
      become: true
      mysql_user: name="{{ item }}" host_all="yes" state="absent"
      with_items:
        - ceilometer
        - glance
        - heat
        - ironic
        - keystone
        - neutron
        - nova
        - mistral
        - zaqar
    - name: Reinstall the undercloud
      shell: |
        openstack undercloud install
</code></pre>
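<p>The “Register root password” task relies on a small shell pipeline to extract the password from <code>.my.cnf</code>. You can try that same pipeline locally against a mocked file (the file path and contents below are made up for the demo):</p>

```shell
# Demo of the password-extraction pipeline from the playbook above,
# run against a mocked /root/.my.cnf-style file.
printf "[client]\nuser=root\npassword='s3cret'\n" > /tmp/my.cnf.demo
grep -m1 password /tmp/my.cnf.demo | cut -d'=' -f2 | tr -d "'"
# prints: s3cret
```

<p><code>grep -m1</code> keeps only the first match, <code>cut</code> takes everything after the <code>=</code>, and <code>tr</code> strips the surrounding single quotes.</p>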
<h2 id="running-the-undercloud-backup-and-restore-tasks">Running the Undercloud backup and restore tasks</h2>
<p>To test the UC backup and restore procedure, run from the UC
after creating the Ansible playbooks:</p>
<pre><code> # This playbook will create the UC backup
ansible-playbook uc-backup.yaml
# This playbook will download the UC backup to be used in the restore
ansible-playbook uc-backup-download.yaml
# This playbook will destroy the UC (remove DB server, remove DB files, remove config files)
ansible-playbook uc-destroy.yaml
# This playbook will reinstall the DB server, restore the DB backup, fix permissions and reinstall the UC
ansible-playbook uc-restore.yaml
</code></pre>
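<p>Since the four playbooks are strictly order-dependent, a tiny wrapper that aborts on the first failure can prevent, for example, destroying the Undercloud after a failed download. A sketch (the actual <code>ansible-playbook</code> call is commented out so the loop can be tried anywhere):</p>

```shell
# Run the backup/restore playbooks in order, aborting on the first failure.
set -e
for playbook in uc-backup.yaml uc-backup-download.yaml uc-destroy.yaml uc-restore.yaml; do
    echo "Running ${playbook}..."
    # ansible-playbook "${playbook}"   # uncomment on a real Undercloud
done
```

<p>With <code>set -e</code>, the script exits as soon as any playbook returns a non-zero status.</p>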
<h2 id="checking-the-undercloud-state">Checking the Undercloud state</h2>
<p>After the Undercloud restore playbook finishes, the user should again be able to execute
any CLI command, like:</p>
<pre><code> source ~/stackrc
openstack stack list
</code></pre>
<p>Source code available in <a href="https://github.com/ccamacho/tripleo-ansible/tree/master/undercloud-backup-restore-check">GitHub</a></p>
Install tmate.io and share your terminal sessionThis is an easy solution for sharing terminal sessions over ssh. Tmate.io is great terminal sharing app, you can think of it as Teamviewer for ssh. To avoid compiling issues and dependencies, we will get the static build directly from...2018-03-13T00:00:00+00:00https://www.pubstack.com/blog/2018/03/13/installing-and-using-tmateCarlos Camacho<p>This is an easy solution for sharing terminal sessions over ssh.
<a href="https://tmate.io">Tmate.io</a> is a great terminal sharing app;
you can think of it as TeamViewer for SSH.</p>
<p><img src="/static/tmate.jpg" alt="" /></p>
<p>To avoid compiling issues and dependencies, we will get the
static build directly from GitHub to <code>automagically</code> use it.</p>
<pre><code class="language-bash"># Get files and install
wget https://github.com/tmate-io/tmate/releases/download/2.2.1/tmate-2.2.1-static-linux-amd64.tar.gz
tar -xvzf tmate-2.2.1-static-linux-amd64.tar.gz
sudo mv ./tmate-2.2.1-static-linux-amd64/tmate /usr/bin/
sudo chmod +x /usr/bin/tmate
rm -rf tmate-2.2.1-static-linux-amd64*
# echo "export TERM=xterm" >> .bashrc
#Configure Tmate using ln2 as the default server
sudo tee -a ~/.tmate.conf > /dev/null <<'EOF'
set -g tmate-server-host "ln2.tmate.io"
EOF
</code></pre>
<p>And that is it, enjoy.</p>
<p>Use tmate and share the link…</p>
<h3 id="running-tmate-as-a-daemon">Running tmate as a daemon</h3>
<p>You can run tmate detached and retrieve
the SSH connection strings as follows:</p>
<pre><code class="language-bash">tmate -S /tmp/tmate.sock new-session -d # Launch tmate in a detached state
tmate -S /tmp/tmate.sock wait tmate-ready # Blocks until the SSH connection is established
tmate -S /tmp/tmate.sock display -p '#{tmate_ssh}' # Prints the SSH connection string
tmate -S /tmp/tmate.sock display -p '#{tmate_ssh_ro}' # Prints the read-only SSH connection string
tmate -S /tmp/tmate.sock display -p '#{tmate_web}' # Prints the web connection string
tmate -S /tmp/tmate.sock display -p '#{tmate_web_ro}' # Prints the read-only web connection string
</code></pre>
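<p>If you use this often, the daemon commands can be wrapped in a small helper function. The function below is a hypothetical convenience wrapper (not part of tmate itself) and assumes <code>tmate</code> is on the PATH:</p>

```shell
# Hypothetical helper: start a detached tmate session on a given socket
# and print the shareable SSH connection string.
start_shared_session() {
    sock="${1:-/tmp/tmate.sock}"
    tmate -S "$sock" new-session -d
    tmate -S "$sock" wait tmate-ready
    tmate -S "$sock" display -p '#{tmate_ssh}'
}
# Usage (on a machine with tmate installed):
#   start_shared_session /tmp/dev.sock
```

<p>Defining the function costs nothing; tmate is only invoked when you call it.</p>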
<p>Note that it is important to specify a socket
(e.g. /tmp/dev.sock) as tmate uses a random
socket name by default.</p>
<p>You can think of tmate as a reverse ssh tunnel
accessible from anywhere.</p>
<p>Read more directly from <a href="https://tmate.io/">Tmate.io</a>.</p>
My 2nd birthday as a Red HatterThis post is about my experience working on TripleO as a Red Hatter for the last 2 years. In my 2nd birthday as a Red Hatter, I have learned about many technologies, really a lot… But...2018-03-01T00:00:00+00:00https://www.pubstack.com/blog/2018/03/01/2nd-birthday-as-a-red-hatterCarlos Camacho<p>This post is about my experience working on TripleO as
a Red Hatter for the last 2 years.</p>
<div style="float: left; width: 230px; background: white;"><img width="230px" src="/static/bday.gif" alt="" style="border:15px solid #FFF" /></div>
<p>On my 2nd birthday as a Red Hatter, I have learned about many technologies,
really a lot… But the most intriguing thing is that here you never stop
learning. Not just because you want to learn new things, but
because of the project’s nature, this project… TripleO…</p>
<div style="float: right; width: 230px; background: white;"><img width="230px" src="/static/tripleo_logo.png" alt="" style="border:15px solid #FFF" /></div>
<p>TripleO (OpenStack On OpenStack) is software aimed at deploying OpenStack
services using the same OpenStack ecosystem; this means that we deploy
a minimal OpenStack instance (the Undercloud) and from there deploy our production
environment (the Overcloud)… Yikes! What a mouthful, huh? Put simply, TripleO
is an installer which should make integrators’/operators’/developers’ lives
easier, but the reality is sometimes far from the expectation.</p>
<p>TripleO is capable of doing wonderful things; with a little patience,
love, and dedication, your hands can be the right hands to deploy complex environments with ease.</p>
<p>One of the cool things about being one of the programmers who write TripleO (from now
on, TripleOers) is that many of us also use the software regularly. We write
code not just because we are told to, but because we want to improve it for our own purposes.</p>
<p>Part of the programmers’ motivation momentum has to do with TripleO’s open‐source
nature: if you code in TripleO you are part of a community.</p>
<div style="float: left; width: 230px; background: white;"><img width="230px" src="/static/community.gif" alt="" style="border:15px solid #FFF" /></div>
<p>Congratulations! As a TripleO user or a TripleOer, you are part of our community,
which means you’re joining a diverse group that spans all age ranges, ethnicities,
professional backgrounds, and parts of the globe. We are a passionate bunch of crazy
people, proud of this “little” monster and more than willing to help
others enjoy using it as much as we do.</p>
<p>Getting to know the interface (the templates, Mistral, Heat, Ansible, Docker,
Puppet, Jinja, …) and how all the components are tied together is probably one of
the most daunting aspects of TripleO for newcomers (and not only newcomers).
This will surely raise the blood pressure of some of you who tried TripleO
in the past but failed miserably and gave up in frustration when it did not behave
as expected. Yeah… sometimes that “$h1t” happens…</p>
<p>Although learning TripleO isn’t that easy, the architecture updates,
the decoupling of role services (“composable roles”), the backup and restore
strategies, and the integration of Ansible, among many others, have made great strides
toward alleviating that frustration, and the improvements continue today.</p>
<p>So this is the question…</p>
<p><img src="/static/fast_to.png" alt="" /></p>
<p>Is TripleO meant to be “fast to use” or “fast to learn”?</p>
<p>There are many ways of describing software products, but we need to know what
our software will be used for… TripleO is designed to work at scale. It might be
easier to deploy a few controllers and computes manually, but what about deploying
100 computes, 3 controllers and 50 Cinder nodes, all of them configured to be integrated
and work as one single “cloud”? Boom!
There we find the TripleO benefits: if we want to make it scale, we need to make it fast to use…</p>
<p>This means that you will find several customizations,
hacks and workarounds to make it work as you need it.</p>
<p>The upside to this approach is that TripleO evolved to be super-ultra-giga
customizable, so operators are able to produce great environments blazingly fast.</p>
<p>The downside? Haha, yes… there is a downside (or several). As with most things that
are customized, TripleO became somewhat difficult for new people to understand.
Also, it’s incredibly hard to test all the possible deployments, and when a user makes
non-standard or unsupported customizations, the upgrades are not as intuitive as they need to be…</p>
<p>This trade‐off is what I mean when I say “fast to use versus fast to learn.”
You can be extremely productive with TripleO after you understand how it thinks (yes, it thinks).</p>
<p>However, your first few deployments and patches might be arduous. Of course,
alleviating that potential pain is what our work is about. IMHO the pros are more than the
cons and once you find a niche to improve it will be a really nice experience.</p>
<p>Also, we have the TripleO YouTube channel, a place for video tutorials and deep dive sessions
driven by the community, for the community.</p>
<p>For the Spanish community we have a 100% translated TripleO UI; go to https://translate.openstack.org
and help us reach as many languages as possible!!!</p>
<div style="float: left; width: 230px; background: white;"><img width="230px" src="/static/logo.png" alt="" style="border:15px solid #FFF" /></div>
<p>www.pubstack.com was born on July 5th, 2016 (first GitHub commit); yeah, it is my way of expressing
my gratitude to the community, with some Ctrl+C / Ctrl+V recipes to avoid the frustration of working
with TripleO and not having something deployed and easy to use ASAP.</p>
<p>Pubstack does not have much traffic, but it reached superuser.openstack.org, and the TripleO cheatsheets
were at devconf.cz and FOSDEM, so in general it is really nice when people reference your writings
anywhere. Maybe in the future it can evolve to be more related to ANsible and openSTACK ;) as TripleO
is adding more and more support for Ansible.</p>
<div style="float: right; width: 230px; background: white;"><img width="230px" src="/static/red_hat.png" alt="" style="border:15px solid #FFF" /></div>
<p>What about Red Hat? Yep, I have spent a long time speaking about the project but haven’t
spoken about the company making it all real.
Red Hat is the world’s leading provider of open source solutions,
using a community-powered approach to provide reliable and high-performing
cloud, virtualization, storage, Linux, and middleware technologies.</p>
<p>There is a strong feeling of belonging at Red Hat; you are part of a team and a culture, and you are able to
find a perfect balance between your work and life. Also, having people from all over the globe makes
it a perfect place for sharing ideas and collaborating. Not all of it is good, though; e.g. working mostly remotely
in upstream communities can be really hard to manage if you are not 100%
sure about the tasks that need to be done.</p>
<p>Keep rocking and become part of the TripleO community!</p>
TripleO deep dive session #12 (config-download)This is the 12th release of the TripleO “Deep Dive” sessions Thanks to James Slagle for this new session, in which he will describe and speak about a feature called config-download. In this session we will have an update for...2018-02-23T00:00:00+00:00https://www.pubstack.com/blog/2018/02/23/tripleo-deep-dive-session-12Carlos Camacho<p>This is the 12th release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>Thanks to <a href="http://blog-slagle.rhcloud.com/">James Slagle</a> for this new session, in which he
describes a feature called <code>config-download</code>.</p>
<p>In this session we have
an update on the TripleO
Ansible integration called
<code>config-download</code>.
It’s about applying all the software
configuration with Ansible instead
of doing it with the Heat agents.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=-6ojHT8P4RE">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/-6ojHT8P4RE" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br />
<br /></p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a>
to have access to all available content.</p>
The Fender Stratocaster coffee tableThis post will show briefly a project I did a long time ago to build a Stratocaster coffee table. Here you have the pictures: IKEA countertop to make the table Source code for the table design This link have the...2018-01-08T00:00:00+00:00https://www.pubstack.com/blog/2018/01/08/stratocaster-coffee-tableCarlos Camacho<p>This post will briefly show a project I did a
long time ago to build a Stratocaster coffee table.</p>
<p>Here you have the pictures:</p>
<h1 id="ikea-countertop-to-make-the-table">IKEA countertop to make the table</h1>
<p><img src="/static/strato-coffee-table/01-countertop-raw.jpg" alt="" /></p>
<h1 id="source-code-for-the-table-design">Source code for the table design</h1>
<p><a href="/static/strato-coffee-table/02-table-design.svg">This link</a>
has the design I used to CNC the countertop. It’s a simple
SVG file that can be translated to .gcode without issues.</p>
<h1 id="cnc-machine-view-1">CNC machine view 1</h1>
<p><img src="/static/strato-coffee-table/03-cncing.jpeg" alt="" /></p>
<h1 id="cnc-machine-view-2">CNC machine view 2</h1>
<p><img src="/static/strato-coffee-table/04-cncing.jpeg" alt="" /></p>
<h1 id="cnc-video-building-the-table">CNC video building the table</h1>
<div class="center">
<video width="480" height="320" controls="controls">
<source src="/static/strato-coffee-table/05-cncing.mp4" type="video/mp4" />
</video>
</div>
<h1 id="machined-countertop">Machined countertop</h1>
<p><img src="/static/strato-coffee-table/06-cnced.jpg" alt="" /></p>
<h1 id="applying-some-epoxy-resin-to-fill-the-holes">Applying some epoxy resin to fill the holes</h1>
<p><img src="/static/strato-coffee-table/07-epoxyed.jpg" alt="" /></p>
<h1 id="table-after-first-round-of-sanding">Table after first round of sanding</h1>
<p><img src="/static/strato-coffee-table/08-partially-sanded.jpeg" alt="" /></p>
<h1 id="table-sanded">Table sanded</h1>
<p><img src="/static/strato-coffee-table/09-partially-sanded.jpeg" alt="" /></p>
<p>Hope you enjoyed reading this.</p>
New TripleO quickstart cheatsheetI have created some cheatsheets for people starting to work on TripleO, mostly to help them to bootstrap a development environment as quickly as possible. The previous version of this cheatsheet series was used in several community conferences (FOSDEM, DevConf.cz),...2018-01-05T00:00:00+00:00https://www.pubstack.com/blog/2018/01/05/tripleo-quickstart-cheatsheetCarlos Camacho<p>I have created some cheatsheets for people starting to work on TripleO,
mostly to help them bootstrap a development environment as quickly as possible.</p>
<p><a href="https://github.com/ccamacho/tripleo-graphics/tree/master/cheatsheets/old_style">The previous version</a>
of this cheatsheet series was used in
several community conferences (FOSDEM, DevConf.cz),
now they are deprecated, as
the way TripleO is deployed has changed considerably in recent months.</p>
<p>Here you have the latest version:</p>
<p><img src="/static/01-tripleo-cheatsheet-deploying-tripleo_p1.jpg" alt="" /></p>
<p><img src="/static/01-tripleo-cheatsheet-deploying-tripleo_p2.jpg" alt="" /></p>
<p>The source code of these bookmarks is available as usual on
<a href="https://github.com/ccamacho/tripleo-graphics/tree/master/cheatsheets/latest_style">GitHub</a></p>
<p>And this is the code if you want to execute it directly:</p>
<pre><code># 01 - Create the toor user.
sudo useradd toor
echo "toor:toor" | chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
| sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
su - toor
# 02 - Prepare the hypervisor node.
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
cat .ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
echo '127.0.0.1 127.0.0.2' | sudo tee -a /etc/hosts
export VIRTHOST=127.0.0.2
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
ssh root@$VIRTHOST uname -a
# 03 - Clone repos and install deps.
git clone \
https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
bash ./tripleo-quickstart/quickstart.sh \
--install-deps
sudo setenforce 0
# 04 - Configure the TripleO deployment with Docker and HA.
export CONFIG=~/deploy-config.yaml
cat > $CONFIG << EOF
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: compute_0
    flavor: compute
    virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
# 05 - Deploy TripleO.
export VIRTHOST=127.0.0.2
bash ./tripleo-quickstart/quickstart.sh \
--clean \
--release master \
--teardown all \
--tags all \
-e @$CONFIG \
$VIRTHOST
</code></pre>
<p>Happy TripleOing!!!</p>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2018/01/05:</strong> Initial version.</p>
<p><strong>2019/01/16:</strong> Appeared in <a href="https://superuser.openstack.org/articles/new-tripleo-quick-start-cheatsheet/">OpenStack Superuser blog.</a></p>
</blockquote>
</div>
Automating Undercloud backups and a Mistral introduction for creating workbooks, workflows and actionsThe goal of this developer documentation is to address the automated process of backing up a TripleO Undercloud and to give developers a complete description of how to integrate Mistral workbooks, workflows and actions into the Python TripleO client. This...2017-12-18T00:00:00+00:00https://www.pubstack.com/blog/2017/12/18/automating-the-undercloud-backup-and-mistral-workflows-introCarlos Camacho<p>The goal of this developer documentation is to address the automated process
of backing up a TripleO Undercloud and to give developers a complete description
of how to integrate Mistral workbooks, workflows and actions into the Python
TripleO client.</p>
<p>This tutorial will be divided into several sections:</p>
<ol>
<li>Introduction and prerequisites</li>
<li>Undercloud backups</li>
<li>Creating a new OpenStack CLI command in python-tripleoclient (openstack
undercloud backup).</li>
<li>Creating Mistral workflows for the new python-tripleoclient CLI command.</li>
<li>Give support for new Mistral environment variables when installing the
undercloud.</li>
<li>Show how to test locally the changes in python-tripleoclient and
tripleo-common.</li>
<li>Give elevated privileges to specific Mistral actions that need to run with
elevated privileges.</li>
<li>Debugging actions</li>
<li>Unit tests</li>
<li>Why are all the previous sections related to upgrades?</li>
</ol>
<h2 id="1-introduction-and-prerequisites">1. Introduction and prerequisites</h2>
<p>Let’s assume you have a healthy, properly working TripleO development
environment. All the commands and customizations we are going to run will be
executed on the Undercloud, logged in as usual as the stack user with the
stackrc file sourced.</p>
<p>Then let’s proceed by cloning the repositories we are going to work with in a
temporary folder:</p>
<pre><code>mkdir dev-docs
cd dev-docs
git clone https://github.com/openstack/python-tripleoclient
git clone https://github.com/openstack/tripleo-common
git clone https://github.com/openstack/instack-undercloud
</code></pre>
<ul>
<li><strong>python-tripleoclient:</strong> Will define the OpenStack CLI commands.</li>
<li><strong>tripleo-common:</strong> Will have the Mistral logic.</li>
<li><strong>instack-undercloud:</strong> Allows updating and creating Mistral
environments to store configuration details needed when executing Mistral workflows.</li>
</ul>
<h2 id="2-undercloud-backups">2. Undercloud backups</h2>
<p>Most of the Undercloud backup procedure is available on the
<a href="https://docs.openstack.org/tripleo-docs/latest/install/post_deployment/backup_restore_undercloud.html">TripleO official documentation site</a>.</p>
<p>We will focus on the automation of backing up the resources required to restore
the Undercloud in case of a failed upgrade.</p>
<ul>
<li>All MariaDB databases on the undercloud node</li>
<li>MariaDB configuration file on undercloud (so we can restore databases
accurately)</li>
<li>All glance image data in /var/lib/glance/images</li>
<li>All swift data in /srv/node</li>
<li>All data in stack users home directory</li>
</ul>
<p>To do this we need to be able to:</p>
<ul>
<li>Connect to the database server as root.</li>
<li>Dump all databases to file.</li>
<li>Create a filesystem backup of several folders (and be able to access folders
with restricted access).</li>
<li>Upload this backup to a swift container to be able to get it from the TripleO
web UI.</li>
</ul>
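<p>To make the filesystem part of that list concrete, here is a minimal standalone Python sketch of tarring up a set of folders (purely illustrative, not the actual Mistral action; the <strong>backup_paths</strong> helper name is ours):</p>

```python
import os
import tarfile


def backup_paths(sources, destination_dir, name="undercloud-backup.tar.gz"):
    """Create a gzipped tarball of the given paths inside destination_dir."""
    backup_file = os.path.join(destination_dir, name)
    with tarfile.open(backup_file, "w:gz") as tar:
        for path in sources:
            # Skip missing paths instead of failing the whole backup.
            if os.path.exists(path):
                tar.add(path, arcname=os.path.basename(path))
    return backup_file
```

<p>The real workflow additionally needs elevated privileges for restricted folders and a Swift upload step, which are covered in the sections below.</p>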
<h2 id="3-creating-a-new-openstack-cli-command-in-python-tripleoclient-openstack-undercloud-backup">3. Creating a new OpenStack CLI command in python-tripleoclient (openstack undercloud backup).</h2>
<p>The first action needed is to be able to create a new CLI command for the
OpenStack client. In this case, we are going to implement the <strong>openstack
undercloud backup</strong> command.</p>
<pre><code>cd dev-docs
cd python-tripleoclient
</code></pre>
<p>Let’s list the files inside this folder:</p>
<pre><code>[stack@undercloud python-tripleoclient]$ ls
AUTHORS doc setup.py
babel.cfg LICENSE test-requirements.txt
bindep.txt zuul.d tools
build README.rst tox.ini
ChangeLog releasenotes tripleoclient
config-generator requirements.txt
CONTRIBUTING.rst setup.cfg
</code></pre>
<p>Once inside the <strong>python-tripleoclient</strong> folder we need to check the following
file:</p>
<p><strong>setup.cfg:</strong> This file defines all the CLI commands for the Python TripleO
client. Specifically, we will need at the end of this file our new command
definition:</p>
<pre><code>undercloud_backup = tripleoclient.v1.undercloud_backup:BackupUndercloud
</code></pre>
<p>This means that we have a new command defined as <strong>undercloud backup</strong> that
will instantiate the <strong>BackupUndercloud</strong> class defined in the file
<strong>tripleoclient/v1/undercloud_backup.py</strong></p>
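<p>As a side note, the entry-point line follows the standard <strong>name = module:Class</strong> format. A toy Python sketch of how such a line decomposes (purely illustrative; this is not how setuptools itself parses it):</p>

```python
# Toy illustration of the entry-point format used in setup.cfg:
# "<command> = <module.path>:<ClassName>".
line = "undercloud_backup = tripleoclient.v1.undercloud_backup:BackupUndercloud"

command, target = (part.strip() for part in line.split("="))
module_path, class_name = target.split(":")

print(command)      # undercloud_backup
print(module_path)  # tripleoclient.v1.undercloud_backup
print(class_name)   # BackupUndercloud
```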
<p>For further details related to this class definition please go to the
<a href="https://review.openstack.org/#/c/466213">gerrit review</a>.</p>
<p>Now, having our class defined we can call other methods to invoke Mistral in
this way:</p>
<pre><code>clients = self.app.client_manager
files_to_backup = ','.join(list(set(parsed_args.add_files_to_backup)))
workflow_input = {
"sources_path": files_to_backup
}
output = undercloud_backup.prepare(clients, workflow_input)
</code></pre>
<p>Thus, we will call the <strong>undercloud_backup.prepare</strong> method defined
in the file <strong>tripleoclient/workflows/undercloud_backup.py</strong>, which will
call the Mistral workflow:</p>
<pre><code>def prepare(clients, workflow_input):
workflow_client = clients.workflow_engine
tripleoclients = clients.tripleoclient
with tripleoclients.messaging_websocket() as ws:
execution = base.start_workflow(
workflow_client,
'tripleo.undercloud_backup.v1.prepare_environment',
workflow_input=workflow_input
)
for payload in base.wait_for_messages(workflow_client, ws, execution):
if 'message' in payload:
return payload['message']
</code></pre>
<p>In this case, we create a loop within the tripleoclient and wait until we receive
a message from the Mistral workflow <strong>tripleo.undercloud_backup.v1.prepare_environment</strong>
indicating whether the invoked workflow ended correctly.</p>
<h2 id="4-creating-mistral-workflows-for-the-new-python-tripleoclient-cli-command">4. Creating Mistral workflows for the new python-tripleoclient CLI command.</h2>
<p>The next step is to define the
<strong>tripleo.undercloud_backup.v1.prepare_environment</strong> Mistral workflow. All the
Mistral workbooks, workflows and actions will be defined in the
<strong>tripleo-common</strong> repository.</p>
<p>Let’s go inside <strong>tripleo-common</strong></p>
<pre><code>cd dev-docs
cd tripleo-common
</code></pre>
<p>And see its content:</p>
<pre><code>[stack@undercloud tripleo-common]$ ls
AUTHORS doc README.rst test-requirements.txt
babel.cfg HACKING.rst releasenotes tools
build healthcheck requirements.txt tox.ini
ChangeLog heat_docker_agent scripts tripleo_common
container-images image-yaml setup.cfg undercloud_heat_plugins
contrib LICENSE setup.py workbooks
CONTRIBUTING.rst playbooks sudoers zuul.d
</code></pre>
<p>Again we need to check the following file:</p>
<p><strong>setup.cfg:</strong> This file defines all the Mistral actions we can call.
Specifically, we will need our new actions at the end of this file:</p>
<pre><code>tripleo.undercloud.get_free_space = tripleo_common.actions.undercloud:GetFreeSpace
tripleo.undercloud.create_backup_dir = tripleo_common.actions.undercloud:CreateBackupDir
tripleo.undercloud.create_database_backup = tripleo_common.actions.undercloud:CreateDatabaseBackup
tripleo.undercloud.create_file_system_backup = tripleo_common.actions.undercloud:CreateFileSystemBackup
tripleo.undercloud.upload_backup_to_swift = tripleo_common.actions.undercloud:UploadUndercloudBackupToSwift
</code></pre>
<h3 id="41-action-definition">4.1. Action definition</h3>
<p>Let’s take the first action to describe its definition,
<strong>tripleo.undercloud.get_free_space = tripleo_common.actions.undercloud:GetFreeSpace</strong></p>
<p>We have defined the action named <strong>tripleo.undercloud.get_free_space</strong>, which
will instantiate the class <strong>GetFreeSpace</strong> defined in the file
<strong>tripleo_common/actions/undercloud.py</strong>.</p>
<p>If we open <strong>tripleo_common/actions/undercloud.py</strong> we can see the class definition as:</p>
<pre><code>class GetFreeSpace(base.Action):
"""Get the Undercloud free space for the backup.
The default path to check will be /tmp and the default minimum size will
be 10240 MB (10GB).
"""
def __init__(self, min_space=10240):
self.min_space = min_space
def run(self, context):
temp_path = tempfile.gettempdir()
min_space = self.min_space
while not os.path.isdir(temp_path):
head, tail = os.path.split(temp_path)
temp_path = head
available_space = (
(os.statvfs(temp_path).f_frsize * os.statvfs(temp_path).f_bavail) /
(1024 * 1024))
if (available_space < min_space):
msg = "There is no enough space, avail. - %s MB" \
% str(available_space)
return actions.Result(error={'msg': msg})
else:
msg = "There is enough space, avail. - %s MB" \
% str(available_space)
return actions.Result(data={'msg': msg})
</code></pre>
<p>In this specific case the class checks whether there is enough space to perform
the backup. Later we will be able to invoke the action as</p>
<pre><code>mistral run-action tripleo.undercloud.get_free_space
</code></pre>
<p>or use it in workbooks.</p>
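<p>Stripped of the Mistral plumbing, the same free-space check can be reproduced in a few lines of plain Python (a sketch under the same defaults as the action above, not the shipped code):</p>

```python
import os
import tempfile


def free_space_mb(path=None):
    """Return the available space, in MB, on the filesystem holding path."""
    path = path or tempfile.gettempdir()
    # Walk up until we find a directory that actually exists,
    # mirroring what the action does for missing paths.
    while not os.path.isdir(path):
        path = os.path.split(path)[0]
    stats = os.statvfs(path)
    return (stats.f_frsize * stats.f_bavail) / (1024 * 1024)
```

<p>The action simply compares this number against its <strong>min_space</strong> threshold (10240 MB by default) and returns a Mistral <strong>Result</strong> with either <strong>data</strong> or <strong>error</strong> set.</p>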
<h3 id="42-workflow-definition">4.2. Workflow definition.</h3>
<p>Once we have defined all our new actions, we need to orchestrate them in order
to have a fully working Mistral workflow.</p>
<p>All <strong>tripleo-common</strong> workbooks are defined in the workbooks folder.</p>
<p>The next example shows a workbook definition
with all the actions inside it; in this case the example includes
the first workflow with all the tasks involved.</p>
<pre><code>---
version: '2.0'
name: tripleo.undercloud_backup.v1
description: TripleO Undercloud backup workflows
workflows:
prepare_environment:
description: >
This workflow will prepare the Undercloud to run the database backup
tags:
- tripleo-common-managed
input:
- queue_name: tripleo
tasks:
# Action to know if there is enough available space
# to run the Undercloud backup
get_free_space:
action: tripleo.undercloud.get_free_space
publish:
status: <% task().result %>
free_space: <% task().result %>
on-success: send_message
on-error: send_message
publish-on-error:
status: FAILED
message: <% task().result %>
# Send a message reporting whether the free-space
# check ended successfully
send_message:
action: zaqar.queue_post
retry: count=5 delay=1
input:
queue_name: <% $.queue_name %>
messages:
body:
type: tripleo.undercloud_backup.v1.launch
payload:
status: <% $.status %>
execution: <% execution() %>
message: <% $.get('message', '') %>
on-success:
- fail: <% $.get('status') = "FAILED" %>
</code></pre>
<p>The workflow is self-explanatory; the only part that might not be so clear is the last
one, where the workflow uses an action to send a message stating whether the workflow
ended correctly, passing as the message the output of the previous task, in
this case the result of <strong>get_free_space</strong>.</p>
<h2 id="5-give-support-for-new-mistral-environment-variables-when-installing-the-undercloud">5. Give support for new Mistral environment variables when installing the undercloud.</h2>
<p>Sometimes it is necessary to use additional values inside a Mistral task. For example,
if we need to create a dump of a database we might need credentials other than the
Mistral user’s for authentication purposes.</p>
<p>When the Undercloud is installed, a Mistral environment called
<strong>tripleo.undercloud-config</strong> is created.
This environment holds all the required configuration details that we
can get from Mistral. This is defined in the <strong>instack-undercloud</strong> repository.</p>
<p>Let’s get into the repository and check the content of the file
<strong>instack_undercloud/undercloud.py</strong>.</p>
<p>This file defines a set of methods to interact with the Undercloud;
specifically, the method called <strong>_create_mistral_config_environment</strong> allows
configuring additional environment variables when installing the Undercloud.</p>
<p>For additional testing, you can use the
<a href="https://gist.github.com/ccamacho/354f798102710d165c1f6167eb533caf#file-mistral_client_snippet-py">Python snippet</a>
to call Mistral client from the Undercloud node
available in gist.github.com.</p>
<h2 id="6-show-how-to-test-locally-the-changes-in-python-tripleoclient-and-tripleo-common">6. Show how to test locally the changes in python-tripleoclient and tripleo-common.</h2>
<p>If you need to test a change in python-tripleoclient or
tripleo-common locally, the following procedures allow you to do so.</p>
<p>For a change in <strong>python-tripleoclient</strong>, assuming you already have downloaded
the change you want to test, execute:</p>
<pre><code>cd python-tripleoclient
sudo rm -Rf /usr/lib/python2.7/site-packages/tripleoclient*
sudo rm -Rf /usr/lib/python2.7/site-packages/python_tripleoclient*
sudo python setup.py clean --all install
</code></pre>
<p>For a change in <strong>tripleo-common</strong>, assuming you already have downloaded the
change you want to test, execute:</p>
<pre><code>cd tripleo-common
sudo rm -Rf /usr/lib/python2.7/site-packages/tripleo_common*
sudo python setup.py clean --all install
sudo cp /usr/share/tripleo-common/sudoers /etc/sudoers.d/tripleo-common
# this loads the actions via entrypoints
sudo mistral-db-manage --config-file /etc/mistral/mistral.conf populate
# make sure the new actions got loaded
mistral action-list | grep tripleo
for workbook in workbooks/*.yaml; do
mistral workbook-create $workbook
done
for workbook in workbooks/*.yaml; do
mistral workbook-update $workbook
done
sudo systemctl restart openstack-mistral-executor
sudo systemctl restart openstack-mistral-engine
</code></pre>
<p>If you want to execute a Mistral action or a Mistral workflow on its own, you can do so as follows.</p>
<p>Examples of how to test Mistral actions independently:</p>
<pre><code>mistral run-action tripleo.undercloud.get_free_space #Without parameters
mistral run-action tripleo.undercloud.get_free_space '{"path": "/etc/"}' # With parameters
mistral run-action tripleo.undercloud.create_file_system_backup '{"sources_path": "/tmp/asdf.txt,/tmp/asdf", "destination_path": "/tmp/"}'
</code></pre>
<p>Examples of how to test a Mistral workflow independently:</p>
<pre><code>mistral execution-create tripleo.undercloud_backup.v1.prepare_environment # No parameters
mistral execution-create tripleo.undercloud_backup.v1.filesystem_backup '{"sources_path": "/tmp/asdf.txt,/tmp/asdf", "destination_path": "/tmp/"}' # With parameters
</code></pre>
<h2 id="7-give-elevated-privileges-to-specific-mistral-actions-that-need-to-run-with-elevated-privileges">7. Give elevated privileges to specific Mistral actions that need to run with elevated privileges.</h2>
<p>Sometimes it is not possible to execute certain restricted actions as the
Mistral user; for example, when creating the Undercloud backup we won’t be able
to access the <strong>/home/stack/</strong> folder to create a tarball of it. In these
cases it is possible to run specific actions with elevated privileges:</p>
<p>This is the content of the <strong>sudoers</strong> file in the root of the <strong>tripleo-common</strong>
repository at the time of the creation of this guide.</p>
<pre><code>Defaults!/usr/bin/run-validation !requiretty
Defaults:validations !requiretty
Defaults:mistral !requiretty
mistral ALL = (validations) NOPASSWD:SETENV: /usr/bin/run-validation
mistral ALL = NOPASSWD: /usr/bin/chown -h validations\: /tmp/validations_identity_[A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_], \
/usr/bin/chown validations\: /tmp/validations_identity_[A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_], \
!/usr/bin/chown /tmp/validations_identity_* *, !/usr/bin/chown /tmp/validations_identity_*..*
mistral ALL = NOPASSWD: /usr/bin/rm -f /tmp/validations_identity_[A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_], \
!/usr/bin/rm /tmp/validations_identity_* *, !/usr/bin/rm /tmp/validations_identity_*..*
mistral ALL = NOPASSWD: /bin/nova-manage cell_v2 discover_hosts *
mistral ALL = NOPASSWD: /usr/bin/tar --ignore-failed-read -C / -cf /tmp/undercloud-backup-*.tar *
mistral ALL = NOPASSWD: /usr/bin/chown mistral. /tmp/undercloud-backup-*/filesystem-*.tar
validations ALL = NOPASSWD: ALL
</code></pre>
<p>Here you can grant permissions for specific tasks when executing Mistral
workflows from <strong>tripleo-common</strong>.</p>
<h2 id="7-debugging-actions">8. Debugging actions.</h2>
<p>Let’s assume the action is written and added to setup.cfg, but it does not appear.
First, check whether the action was loaded by <code>sudo mistral-db-manage populate</code>. Run</p>
<pre><code>mistral action-list -f value -c Name | grep -e '^tripleo.undercloud'
</code></pre>
<p>If you don’t see your actions, check the output of <code>sudo mistral-db-manage populate</code>:</p>
<pre><code>sudo mistral-db-manage populate 2>&1| grep ERROR | less
</code></pre>
<p>Output like the following indicates an issue in the code; simply fix the code.</p>
<pre><code>2018-01-01:00:59.730 7218 ERROR stevedore.extension [-] Could not load 'tripleo.undercloud.get_free_space': unexpected indent (undercloud.py, line 40): File "/usr/lib/python2.7/site-packages/tripleo_common/actions/undercloud.py", line 40
</code></pre>
<p>Execute the single action, then execute the workflow from the workbook, to make sure it works as
designed.</p>
<h2 id="8-unit-tests">9. Unit tests</h2>
<p>Writing unit tests is an essential skill for a software developer, and unit tests are
much faster than running the workflow itself. So, let’s write unit tests for the
action we wrote. Let’s add a <strong>tripleo_common/tests/actions/test_undercloud.py</strong> file
with the following content to the <strong>tripleo-common</strong> repository.</p>
<pre><code>import mock
from tripleo_common.actions import undercloud
from tripleo_common.tests import base
class GetFreeSpaceTest(base.TestCase):
def setUp(self):
super(GetFreeSpaceTest, self).setUp()
self.temp_dir = "/tmp"
@mock.patch('tempfile.gettempdir')
@mock.patch("os.path.isdir")
@mock.patch("os.statvfs")
def test_run_false(self, mock_statvfs, mock_isdir, mock_gettempdir):
mock_gettempdir.return_value = self.temp_dir
mock_isdir.return_value = True
mock_statvfs.return_value = mock.MagicMock(
spec_set=['f_frsize', 'f_bavail'],
f_frsize=4096, f_bavail=1024)
action = undercloud.GetFreeSpace()
action_result = action.run(context={})
mock_gettempdir.assert_called()
mock_isdir.assert_called()
mock_statvfs.assert_called()
self.assertEqual("There is no enough space, avail. - 4 MB",
action_result.error['msg'])
@mock.patch('tempfile.gettempdir')
@mock.patch("os.path.isdir")
@mock.patch("os.statvfs")
def test_run_true(self, mock_statvfs, mock_isdir, mock_gettempdir):
mock_gettempdir.return_value = self.temp_dir
mock_isdir.return_value = True
mock_statvfs.return_value = mock.MagicMock(
spec_set=['f_frsize', 'f_bavail'],
f_frsize=4096, f_bavail=10240000)
action = undercloud.GetFreeSpace()
action_result = action.run(context={})
mock_gettempdir.assert_called()
mock_isdir.assert_called()
mock_statvfs.assert_called()
self.assertEqual("There is enough space, avail. - 40000 MB",
action_result.data['msg'])
</code></pre>
<p>Run</p>
<pre><code>tox -epy27
</code></pre>
<p>to see any unit test errors.</p>
<h2 id="8-why-all-previous-sections-are-related-to-upgrades">10. Why are all the previous sections related to upgrades?</h2>
<ul>
<li>Undercloud backups are an important step before running an upgrade.</li>
<li>Writing developer docs helps people create and develop new features.</li>
</ul>
<h2 id="9-references">11. References</h2>
<ul>
<li>http://www.dougalmatthews.com/2016/Sep/21/debugging-mistral-in-tripleo/</li>
<li>http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html</li>
<li>http://hardysteven.blogspot.com.es/2017/03/developing-mistral-workflows-for-tripleo.html</li>
</ul>
Restarting your TripleO hypervisor will break cinder volume service thus the overcloud pingtestI don’t usually restart my hypervisor; today I had to install LVM2 and virsh stopped working, so a restart was required. Once the VMs were up and running, the overcloud pingtest failed as cinder was not able to start....2017-10-30T00:00:00+00:00https://www.pubstack.com/blog/2017/10/30/restarting-your-tripleo-hypervisor-will-break-cinderCarlos Camacho<p>I don’t usually restart my hypervisor; today I had to install LVM2 and
virsh stopped working, so a restart was required. Once the VMs were
up and running, the overcloud pingtest failed as cinder was not able to start.</p>
<p>From your Overcloud controller run:</p>
<pre><code>sudo losetup -f /var/lib/cinder/cinder-volumes
sudo vgdisplay
sudo service openstack-cinder-volume restart
</code></pre>
<p>This will make your Overcloud pingtest work again.</p>
Create a TripleO snapshot before breaking it...The idea of this post is to show how developers can save some time by creating snapshots of their development environments so they don’t have to redeploy each time something breaks. So, don’t waste time re-deploying your environment when testing submissions. I’ll show...2017-07-14T00:00:00+00:00https://www.pubstack.com/blog/2017/07/14/snapshots-for-your-tripleo-vmsCarlos Camacho<p>The idea of this post is to show how developers can save some time
by creating snapshots of their development environments so they don’t
have to redeploy each time something breaks.</p>
<p>So, don’t waste time re-deploying your environment when testing submissions.</p>
<p>I’ll show here how to be a little more agile when
deploying your Undercloud/Overcloud for testing purposes.</p>
<p>Deploying a fully working development environment takes
around 3 hours with human supervision…
And breaking it right after it’s deployed is not cool at all…</p>
<h1 id="step-1">Step 1</h1>
<p>Deploy your environment as usual.</p>
<h1 id="step-2">Step 2</h1>
<p>Create your Undercloud/Overcloud snapshots.
<strong>Do this as the stack user, otherwise
virsh won’t see the VMs</strong></p>
<pre><code># The VMs deployed are:
# $vms will have something like the next line...
# vms=( "undercloud" "control_0" "compute_0" )
vms=( $(virsh list --all | grep running | awk '{print $2}') )
# List all VMs
virsh list --all
# List current snapshots
for i in "${vms[@]}"; \
do \
virsh snapshot-list --domain "$i"; \
done
# Dump VMs XML and check for qemu
for i in "${vms[@]}"; \
do \
virsh dumpxml "$i" | grep -i qemu; \
done
# Create an initial snapshot for each VM
for i in "${vms[@]}"; \
do \
echo "virsh snapshot-create-as --domain $i --name $i-fresh-install --description $i-fresh-install --atomic"; \
virsh snapshot-create-as --domain "$i" --name "$i"-fresh-install --description "$i"-fresh-install --atomic; \
done
# List current snapshots (After they should be already created)
for i in "${vms[@]}"; \
do \
virsh snapshot-list --domain "$i"; \
done
#########################################################################################################
# Current libvirt version does not support live snapshots.
# error: Operation not supported: live disk snapshot not supported with this QEMU binary
# --disk-only and --live not yet available.
# Create the folder for the images
# cd
# mkdir ~/backup_images
# for i in "${vms[@]}"; \
# do \
# echo "<domainsnapshot>" > $i.xml; \
# echo " <memory snapshot='external' file='/home/stack/backup_images/$i.mem.snap2'/>" >> $i.xml; \
# echo " <disks>" >> $i.xml; \
# echo " <disk name='vda'>" >> $i.xml; \
# echo " <source file='/home/stack/backup_images/$i.disk.snap2'/>" >> $i.xml; \
# echo " </disk>" >> $i.xml; \
# echo " </disks>" >> $i.xml; \
# echo "</domainsnapshot>" >> $i.xml; \
# done
# for i in "${vms[@]}"; \
# do \
# echo "virsh snapshot-create $i --xmlfile ~/$i.xml --atomic"; \
# virsh snapshot-create $i --xmlfile ~/$i.xml --atomic; \
# done
</code></pre>
<h1 id="step-3">Step 3</h1>
<p>Break your environment xD</p>
<h1 id="step-4">Step 4</h1>
<p>Restore your snapshots</p>
<pre><code># Commented for safety reasons...
# i=compute_0
i=blehblehbleh
virsh list --all
virsh shutdown $i
sleep 120
virsh list --all
virsh snapshot-revert --domain $i --snapshotname $i-fresh-install --running
virsh list --all
</code></pre>
<h1 id="or-restore-them-all-at-once">Or restore them all at once</h1>
<pre><code>vms=( $(virsh list --all | grep running | awk '{print $2}') )
for i in "${vms[@]}"; \
do \
virsh shutdown $i; \
virsh snapshot-revert --domain $i --snapshotname $i-fresh-install --running; \
virsh list --all; \
done
</code></pre>
TripleO deep dive session #11 (i18n)This is the 11th release of the TripleO “Deep Dive” sessions. In this session we will have an update for the TripleO internationalization project for the TripleO UI, gladly presented by Julie Pichon. So please, check the full session content...2017-07-07T00:00:00+00:00https://www.pubstack.com/blog/2017/07/07/tripleo-deep-dive-session-11Carlos Camacho<p>This is the 11th release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions.</p>
<p>In this session we will have
an update for the TripleO
internationalization project
for the TripleO UI,
gladly presented by Julie Pichon.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=dmAw7b2yUEo">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/dmAw7b2yUEo" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
OpenStack versions - Upstream/DownstreamI’m adding this note as I’m prone to forget how upstream and downstream versions are matching. RHOS Version 0 = Diablo RHOS Version 1 = Essex RHOS Version 2 = Folsom RHOS Version 3 = Grizzly RHOS Version 4 =...2017-06-27T00:00:00+00:00https://www.pubstack.com/blog/2017/06/27/openstack-versions-upstream-downstreamCarlos Camacho<p>I’m adding this note as I’m prone
to forget how the upstream and downstream
versions match up.</p>
<ul>
<li>RHOS Version 0 = Diablo</li>
<li>RHOS Version 1 = Essex</li>
<li>RHOS Version 2 = Folsom</li>
<li>RHOS Version 3 = Grizzly</li>
<li>RHOS Version 4 = Havana</li>
<li>RHOS Version 5 = Icehouse</li>
<li>RHOS Version 6 = Juno</li>
<li>RHOS Version 7 = Kilo</li>
<li>RHOS Version 8 = Liberty</li>
<li>RHOS Version 9 = Mitaka</li>
<li>RHOS Version 10 = Newton</li>
<li>RHOS Version 11 = Ocata</li>
<li>RHOS Version 12 = Pike</li>
<li>RHOS Version 13 = Queens</li>
<li>RHOS Version 14 = R</li>
<li>RHOS Version 15 = S</li>
</ul>
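<p>If, like me, you keep forgetting the mapping, it also fits in a tiny lookup table (a convenience sketch, transcribed from the list above):</p>

```python
# RHOS (downstream) release number -> upstream OpenStack codename,
# transcribed from the list above.
RHOS_TO_UPSTREAM = {
    0: "Diablo", 1: "Essex", 2: "Folsom", 3: "Grizzly", 4: "Havana",
    5: "Icehouse", 6: "Juno", 7: "Kilo", 8: "Liberty", 9: "Mitaka",
    10: "Newton", 11: "Ocata", 12: "Pike", 13: "Queens",
    14: "R", 15: "S",
}


def upstream_name(rhos_version):
    """Return the upstream codename for a RHOS version, or None if unknown."""
    return RHOS_TO_UPSTREAM.get(rhos_version)
```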
Ph.inally D.one! - EspañolThis article summarizes my experience over the last 5 years while working on my doctoral thesis. Mainly, it is some advice and tips about what I enjoyed and what I suffered. I agree that my conditions are not...2017-06-20T00:00:00+00:00https://www.pubstack.com/blog/2017/06/20/Ph-inally-D-one-espCarlos Camacho<p>This article summarizes my experience over the last 5 years
while working on my doctoral thesis.
Mainly, it is some advice and tips about what I enjoyed and what I suffered.
I agree that my conditions are not the same as those of every Ph.D.
student, so this article is based exclusively on
my personal opinion and experience.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/blocks.jpg" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Divide and conquer.</strong>
Split your research work into blocks
you can tackle one at a time; it is important to define
these blocks at the start of the research project, even if only
tentatively. This will let you decompose your work so
that you can write, for example, one paper for each
of these blocks.</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/papers-to-me.png" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Write papers as if there were no tomorrow.</strong>
Truly,
to finish your doctoral thesis
you must have a minimum number of research papers for the
committee to consider your work ‘apt’.
Usually, you need 1 in the first quartile as a requirement to
present it as a monograph and 3 to present it as a compilation of papers.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/allow.gif" alt="" style="border:15px solid #FFF" /></div>
<p>So, why not base all your work on writing these papers?
From the very first minute start with your LaTeX template and
use it to write everything related to that section of your
research work.
It is extremely important to consider
the journals you submit
your papers to; only submit to journals
in quartiles 1 and 2.</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/free-time.jpg" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Don’t forget your free time.</strong>
This is one of the things you must keep in mind
to make the course of your Ph.D.
as pleasant as possible.
Having no free time frustrates you and takes away the
will to keep moving forward. Try to keep
your social life no matter what. In my particular case, I always tried to devote 2 hours a day
after work to research activities, and one out of every 2 weekends to
hard activities like coding, writing, etc…</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/fuck-this-shit.jpg" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Frustration-free.</strong>
Surely the thought of ‘quitting’ will cross your mind…
Never! Do you think it is worth
abandoning the Ph.D. after having invested
your life (nobody will give you the time back),
sweat and tears?
So cheer up!
The worst that could happen is that it takes you
more time than you had planned.
In my particular case, the
Computer Science Doctoral Program
regulated by RD 1393/2007, to which I belong,
will be discontinued at the end of the current academic year (2017).
So, we neither quit nor fall
behind.
Time to get to work, there is little left!</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/persistence.png" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Persist.</strong>
Finishing your Ph.D. is a job of
endurance; eventually, if you keep researching, you will get to
defend your thesis.
Don’t give up and you will surely reach
your goal.
Keep in mind that
finishing the doctorate is about accumulating
a considerable amount
of research work as time
goes by; nobody can refute
the papers you have already published, so you just have to ‘hold on’
and, of course, enjoy what you do.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/im-fine.gif" alt="" style="border:15px solid #FFF" /></div>
<p><strong>This is a job.</strong>
People tend to confuse
the work you do in your Ph.D. with ‘studying’.
They ask how it’s going as if you were studying
for your driver’s license. This is serious!
It is work that takes effort and dedication like
any other, and in many cases much more,
since it includes
evenings, nights, and weekends.</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/great.gif" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Don't let them see you cry.</strong>
Cry, cry all you want, but
don't let anyone see you.
In my case, when my friends and family asked me
about ‘uni’
I would remember how little I had done
over the previous days, and all I could do was cry in
silence. Haha, I remember that question,
“How is the PhD going?”,
as if cold water had been thrown over me…</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/show-me-the-money.gif" alt="" style="border:15px solid #FFF" /></div>
<p><strong>A PhD student needs to eat.</strong>
Spain values its doctoral candidates immensely: it
pays them a fair salary for their ‘contribution’ to the
progress of humanity and, even better,
motivates them to stay on the path they are on. Perhaps
I have exaggerated a little; the real situation
is closer to them being…
poorly paid, fighting for grants,
suffering budget cuts,
enjoying the
all-too-common lack of funding,
unemployment, rising tuition fees…
We all need to eat, and that is why
I decided to take a job where my effort
is valued fairly,
apart from my ‘job’ as a researcher,
a job nobody pays me for and
which, on the contrary, I have to pay for myself.
Unfortunately, the job market for ‘Doctors’
is quite limited in Spain, but the
effort will always be rewarded; at the very least
you will be able to tick ‘Dr.’ when buying
Ryanair tickets…</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/done-dissertation.jpg" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Be realistic and objective.</strong>
Reflect on the effort your research work
has taken and don't let anyone
belittle it; at the end of the day, you will be a doctor.
You could always keep fixing typos, adding more use cases,
or writing esoteric implementations. Bah… You could spend your whole
life generating knowledge, but once you have delivered
those ‘building blocks’ we
talked about earlier… go defend your thesis!</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/plan.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Keep in mind that no matter how good your planning is,
you will run into endless surprises, so a plan ‘B’ always
helps to get past obstacles. Keep the deadlines of your doctoral
program in mind so you don't get any unpleasant ‘surprises’.
It is better to know ahead of time which problems you will face
than to discover you cannot meet the deadlines you had planned.
My advice: every time you get an article
published, work on integrating it into your thesis
manuscript; it is quite a lot of work and it is better
to get it done little by little.</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/please.gif" alt="" style="border:15px solid #FFF" /></div>
<p><strong>In Spanish or English, by articles or as a monograph?</strong>
It is a difficult question, and it is important to have a plan
‘B’ in case you have a deadline to submit your
manuscript. The articles you have as WIP (work in progress)
can always be finished during your post-doc or in your free
time. In my particular case, with Spanish as my
native language, I preferred to write the entire manuscript
in Spanish for two reasons. The first reason
is that I can write
much faster, and the second is that
publishing an article usually takes about 6 months on average;
in my case the third article is still under
review by the journal.
This plan ‘B’ is what allowed me to
defend the thesis within the corresponding deadlines.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/notes.gif" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Take notes for the defense.</strong>
It sounds simple, but something that will probably happen is that,
for a moment, you won't remember something you have been asked.
That is why it is important to prepare material for the question
round, for when you are asked to go to a specific page
and clarify something that wasn't well understood.</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/notas.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>In my case I printed the manuscript one page per sheet and wrote
down on the blank side of each sheet the notes I considered relevant.
A simple and reliable trick
to avoid freezing at those moments when an
‘I don't remember’ is not allowed.</p>
<div style="float: left; width: 230px; background: white;"><img src="/static/phd/phinally_done.jpg" alt="" style="border:15px solid #FFF" /></div>
<p><strong>Finally.</strong> On this very day, June 20th, 2017, if the
force is with me, I will become a doctor… Yeah motherfuckers!
It seems everything has come to a good end, and today at 11:00 CET
I will defend my doctoral thesis at the Facultad de Informática of the UCM.
I can only conclude that I would do it again
without any doubt
if I were in the position I was in 5 years ago.
But honestly, I wouldn't repeat it (do another Ph.D.)…</p>
<div style="float: right; width: 230px; background: white;"><img src="/static/phd/quack-motherfucker.jpg" alt="" style="border:15px solid #FFF" /></div>
<p>Doing the doctorate not only lets you grow
professionally; it also lets you meet
fantastic people who will most likely accompany you
for the rest of your life and become good friends,
partners, etc.…</p>
<p>In case anyone is interested in taking a look at my research work,
it is publicly available at
<a href="http://ccamacho.github.io/phd">ccamacho.github.io/phd</a>.
And my professional profile, as usual, on <a href="https://www.linkedin.com/in/ccamacho-/?locale=en_US">LinkedIn</a>.</p>
<p>Dr. Carlos Camacho ☚ (<‿<)☚</p>
<p><img src="/static/phd/cover.jpg" alt="" /></p>
<p>P.S. Just in case you find it curious, let me explain the
origin of the <a href="https://es.wikipedia.org/wiki/Matrioshka">Matryoshkas</a> on the cover.
They represent one of the main
achievements of the research work
and, on the cover, a metaphor…
My thesis proposes a set of
<em>equivalence</em> relations to simplify the terms of the algebra and reduce them to their
<em>normal forms</em>. Among other things, these normal forms make it possible to
remove the combinatorial operators that make the implementation of
the operational and denotational semantics ‘intractable’.</p>
TripleO deep dive session indexThis is a brief index of all TripleO deep dive sessions; you can watch all the videos on the TripleO YouTube channel. Sessions index: * TripleO deep dive #1 (Quickstart deployment) * TripleO deep dive #2 (TripleO Heat Templates)...2017-06-15T00:00:00+00:00https://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-indexCarlos Camacho<p>This is a brief index of all TripleO deep dive sessions;
you can watch all the videos on the
<a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<blockquote>
<p>Sessions index:</p>
<p> * <a href="http://www.pubstack.com/blog/2016/07/11/tripleo-deep-dive-session-1.html">TripleO deep dive #1 (Quickstart deployment)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/07/18/tripleo-deep-dive-session-2.html">TripleO deep dive #2 (TripleO Heat Templates)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/07/22/tripleo-deep-dive-session-3.html">TripleO deep dive #3 (Overcloud deployment debugging)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/08/01/tripleo-deep-dive-session-4.html">TripleO deep dive #4 (Puppet modules)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/08/05/tripleo-deep-dive-session-5.html">TripleO deep dive #5 (Undercloud - Under the hood)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/08/15/tripleo-deep-dive-session-6.html">TripleO deep dive #6 (Overcloud - Physical network)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/01/16/tripleo-deep-dive-session-7.html">TripleO deep dive #7 (Undercloud - TripleO UI)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/05/04/tripleo-deep-dive-session-8.html">TripleO deep dive #8 (TripleO - Deployed server)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/05/05/tripleo-deep-dive-session-9.html">TripleO deep dive #9 (TripleO - Quickstart)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-10.html">TripleO deep dive #10 (TripleO - Containers)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/07/07/tripleo-deep-dive-session-11.html">TripleO deep dive #11 (TripleO - i18n)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2018/02/23/tripleo-deep-dive-session-12.html">TripleO deep dive #12 (TripleO - config-download)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2018/05/31/tripleo-deep-dive-session-13.html">TripleO deep dive #13 (TripleO - Containerized Undercloud)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html">TripleO deep dive #14 (TripleO - Containerized deployments without Paunch)</a></p>
</blockquote>
TripleO deep dive session #10 (Containers)This is the 10th release of the TripleO “Deep Dive” sessions In this session we will have an update for the TripleO containers effort, thanks to Jiri Stransky. So please, check the full session content on the TripleO YouTube channel....2017-06-15T00:00:00+00:00https://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-10Carlos Camacho<p>This is the 10th release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session we will get
an update on the TripleO
containers effort, thanks
to Jiri Stransky.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=xhTwHfi65p8">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/xhTwHfi65p8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Git merge!!!Git merge!!!
Explanation: No comments.
Disclaimer.
2017-06-13T00:00:00+00:00https://www.pubstack.com/blog/2017/06/13/podCarlos Camacho<p>Git merge!!!</p>
<p><img src="/static/pod/2017-06-13-git_merge.gif" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Map, filter and reduce explainedMap, filter and reduce explained
Explanation: No comments.
Disclaimer.
2017-06-12T00:00:00+00:00https://www.pubstack.com/blog/2017/06/12/podCarlos Camacho<p>Map, filter and reduce explained</p>
<p><img src="/static/pod/2017-06-12-map_filter_reduce.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Software Engineer taxonomySoftware Engineer taxonomy
Explanation: No comments.
Disclaimer.
2017-06-01T00:00:00+00:00https://www.pubstack.com/blog/2017/06/01/podCarlos Camacho<p>Software Engineer taxonomy</p>
<p><img src="/static/pod/2017-06-01-software-engineer.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Converting AAX audiobooks to MP3This is a quick guide to convert AAX files (DRMed audiobooks) to its MP3 equivalent. I just got ‘The Phoenix Project’ book from amazon.es, it was on sale together with its audible audiobook version.. The thing is that I don’t...2017-05-31T00:00:00+00:00https://www.pubstack.com/blog/2017/05/31/convert-aax-to-mp3Carlos Camacho<p>This is a quick guide to convert AAX files (DRMed audiobooks) to its MP3 equivalent.</p>
<p>I just got ‘The Phoenix Project’ book from <a href="https://www.amazon.es/Phoenix-Project-DevOps-Helping-Business/dp/0988262509">amazon.es</a>;
it was on sale together with its Audible audiobook version.</p>
<p>The thing is, I don't want to install any additional
software; I just want to listen to the MP3 version on my phone
(and no, it isn't a smartphone), and to keep a sort of backup
in something other than this mumbo-jumbo AAX stuff.</p>
<p>So this is a small copy/edit/paste recipe to convert the AAX files
to MP3… It works fine and is really easy to follow.</p>
<pre><code> git clone https://github.com/inAudible-NG/tables.git
git clone https://github.com/KrumpetPirate/AAXtoMP3.git
wget http://project-rainbowcrack.com/rainbowcrack-1.7-linux64.zip
unzip rainbowcrack-1.7-linux64.zip
mv AAXtoMP3/* tables/
mv rainbowcrack-1.7-linux64/* tables/
mv <your_aax_file_name>.aax tables/
cd tables
make
ffprobe <your_aax_file_name>.aax
# Get the "[aax] file checksum"
./rcrack . -h <your_checksum>
# Get the activation bytes hex
bash AAXtoMP3 <your_activation_bytes> <your_aax_file_name>.aax
# An audiobook directory with the MP3 files will be generated.
# *-*-Enjoy-*-*
</code></pre>
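<p>The checksum that ffprobe prints can also be captured in a shell variable instead of copying it by hand. This is a minimal sketch, assuming the hypothetical sample below matches the usual <code>[aax] file checksum == &lt;hex&gt;</code> line format:</p>
<pre><code class="language-bash"># Hypothetical sample of the checksum line printed by ffprobe;
# the sed strips everything up to and including "checksum == ".
sample='[aax] file checksum == 0a1b2c3d4e5f60718293a4b5c6d7e8f901234567'
checksum=$(echo "$sample" | sed 's/.*checksum == //')
echo "$checksum"
</code></pre>
<p>You could then pass <code>$checksum</code> straight to rcrack instead of retyping it.</p>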
<p>This is basically all you need to convert your AAX file to MP3.</p>
<p>And no, I won't share my DeDRMed copy of the
‘The Phoenix Project’ audiobook with you…</p>
<blockquote>
<p>Disclaimer:</p>
<p>The purpose of this recipe is to let you
create backups of your audiobooks and
play them on players without DRM support. Please do
not share your DeDRMed files with anyone else or upload them anywhere.</p>
</blockquote>
TripleO deep dive session #9 (TripleO - Quickstart)This is the ninth release of the TripleO “Deep Dive” sessions In this session we will have an overall description for TripleO Quickstart, thanks to Gabriele Cerami. So please, check the full session content on the TripleO YouTube channel. Please...2017-05-05T00:00:00+00:00https://www.pubstack.com/blog/2017/05/05/tripleo-deep-dive-session-9Carlos Camacho<p>This is the ninth release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session we will have an overall
description of TripleO Quickstart, thanks
to Gabriele Cerami.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=PwHEgHJ9ePU">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/PwHEgHJ9ePU" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #8 (TripleO - Deployed server)This is the eighth release of the TripleO “Deep Dive” sessions. In this session we will have a full description about the deployed server feature in TripleO thanks to James Slagle. So please, check the full session content on the...2017-05-04T21:00:00+00:00https://www.pubstack.com/blog/2017/05/04/tripleo-deep-dive-session-8Carlos Camacho<p>This is the eighth release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions.</p>
<p>In this session we will have a full description
of the deployed server feature in TripleO, thanks
to James Slagle.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=s8Hm4n9IjYg">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/s8Hm4n9IjYg" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
The Seven Circles of Developer HellThe Seven Circles of Developer Hell
Explanation: From Toggl.
Disclaimer.
2017-03-07T00:00:00+00:00https://www.pubstack.com/blog/2017/03/07/podCarlos Camacho<p>The Seven Circles of Developer Hell</p>
<p><img src="/static/pod/2017-03-07-7-circles-of-developer-hell.jpg" alt="" /></p>
<p>Explanation: From <a href="https://blog.toggl.com/2017/02/seven-circles-of-developer-hell/">Toggl</a>.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Installing TripleO QuickstartThis is a brief recipe about how to manually install TripleO Quickstart in a remote 32GB RAM box and not dying trying it. Now instack-virt-setup is deprecated :( :( :( so the manual process needs to evolve and use OOOQ...2017-02-24T00:00:00+00:00https://www.pubstack.com/blog/2017/02/24/install-tripleo-quickstartCarlos Camacho<p>This is a brief recipe about how to
manually install TripleO Quickstart on a remote
32GB RAM box and not die trying.</p>
<p>Now <code>instack-virt-setup</code> is deprecated :( :( :(
so the manual process needs to evolve and use OOOQ (TripleO Quickstart).</p>
<p>This post is a brief recipe about how to provision the Hypervisor node
and deploy an end-to-end development environment
based on TripleO-Quickstart.</p>
<p>From the hypervisor run:</p>
<pre><code class="language-bash"># In this dev. env. /var is only 50GB, so I will create
# a symlink to another location with more capacity.
# It will easily take more than 50GB to deploy a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
# Add default toor user
sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo yum install -y lvm2 lvm2-devel
su - toor
whoami
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git -y
# Disable requiretty otherwise the deployment will fail...
sudo sed -i -e 's/Defaults[ \t]*requiretty/#Defaults requiretty/g' /etc/sudoers
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
sudo bash -c "cat .ssh/id_rsa.pub >> /root/.ssh/authorized_keys"
sudo bash -c "echo '127.0.0.1 127.0.0.2' >> /etc/hosts"
export VIRTHOST=127.0.0.2
ssh root@$VIRTHOST uname -a
</code></pre>
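<p>If you want to sanity-check the requiretty sed before touching your real /etc/sudoers, here is a quick, harmless illustration of the substitution on a sample line:</p>
<pre><code class="language-bash"># Demonstration of the requiretty-disabling substitution on a sample sudoers line
line='Defaults    requiretty'
result=$(echo "$line" | sed -e 's/Defaults[ \t]*requiretty/#Defaults requiretty/g')
echo "$result"
</code></pre>
<p>The rewritten line comes back commented out, which is exactly what the deployment needs.</p>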
<p>Now, we can start deploying TripleO Quickstart by following:</p>
<pre><code class="language-bash"># Source: http://rdo-ci-doc.usersys.redhat.com/docs/tripleo-environments/en/latest/oooq-downstream.html
# Downstream bits for OSP8 ...
# cd
# sudo yum -y install /usr/bin/c_rehash ca-certificates
# sudo update-ca-trust check
# sudo update-ca-trust force-enable
# sudo update-ca-trust enable
# wget cert.pem
# sudo cp cert.pem /etc/pki/tls/certs/
# sudo cp cert.pem /etc/pki/ca-trust/source/anchors/
# sudo c_rehash
# sudo update-ca-trust extract
# git clone https://github.com/openstack/tripleo-quickstart
# cd tripleo-quickstart
# wget http://rhos-release.virt.bos.redhat.com/ci-images/internal-requirements-new.txt
# cd
# chmod u+x ./tripleo-quickstart/quickstart.sh
# bash ./tripleo-quickstart/quickstart.sh --install-deps
# bash ./tripleo-quickstart/quickstart.sh -v --release rhos-8-baseos-undercloud --clean --teardown all --requirements "/home/toor/tripleo-quickstart/internal-requirements-new.txt" $VIRTHOST
</code></pre>
<pre><code class="language-bash">git clone https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
printf "\n\nSee:\n./tripleo-quickstart/quickstart.sh --help for a full list of options\n\n"
bash ./tripleo-quickstart/quickstart.sh --install-deps
export VIRTHOST=127.0.0.2
export CONFIG=~/deploy-config.yaml
cat > $CONFIG << EOF
# undercloud_undercloud_hostname: undercloud.ratata-domain
# control_memory: 8192
# compute_memory: 6120
# undercloud_memory: 10240
# undercloud_vcpu: 4
# undercloud_workers: 3
# default_vcpu: 1
custom_nameserver: '10.16.36.29'
undercloud_undercloud_nameservers: '10.16.36.29'
overcloud_dns_servers: '10.16.36.29'
# node_count: 4
# overcloud_nodes:
# - name: control_0
# flavor: control
# virtualbmc_port: 6230
# - name: control_1
# flavor: control
# virtualbmc_port: 6231
# - name: control_2
# flavor: control
# virtualbmc_port: 6232
# - name: compute_0
# flavor: compute
# virtualbmc_port: 6233
# topology: >-
# --control-scale 3
# --compute-scale 1
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
# -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
run_tempest: false
EOF
# We disable SELinux as it breaks the deployment;
# otherwise you will get permission-denied errors
# when running the Ansible playbooks
sudo setenforce 0
bash ./tripleo-quickstart/quickstart.sh \
--clean \
--release master \
--teardown all \
--tags all \
-e @$CONFIG \
$VIRTHOST
</code></pre>
<p>On the hypervisor, run the following command to log in to
the Undercloud:</p>
<pre><code class="language-bash">ssh -F /home/toor/.quickstart/ssh.config.ansible undercloud
# Add the TRIPLEO_ROOT var to stackrc
# to use with tripleo-ci
echo "export TRIPLEO_ROOT=~" >> stackrc
source stackrc
</code></pre>
<p>At this point you should have your development environment deployed correctly.</p>
<p>Clone the tripleo-ci repo:</p>
<pre><code class="language-bash">git clone https://github.com/openstack-infra/tripleo-ci
</code></pre>
<p>And, run the Overcloud pingtest:</p>
<pre><code class="language-bash">~/tripleo-ci/scripts/tripleo.sh --overcloud-pingtest
</code></pre>
<p>Enjoy TripleOing (~˘▾˘)~</p>
<p>Note: I had to execute the deployment command 3 or 4 times to get
a successful deployment; sometimes it just fails (e.g. a timeout while fetching the images).</p>
<p>Note: From the host, <code>virsh list --all</code> will work only as the stack user.</p>
<p>Note: Each time you run the quickstart.sh command from the hypervisor
the UC and OC will be nuked (<code>--teardown all</code>), you will see tasks like ‘PLAY [Tear down undercloud and overcloud vms] **’.</p>
<p>Note: If you delete the Overcloud, e.g. using <code>heat stack-delete overcloud</code>, you can re-deploy what you
had by running the dynamically generated overcloud-deploy.sh script from the stack user's home folder on the UC.</p>
<p>Note: There are several options for TripleO Quickstart besides the basic
virthost deployment, check them here: <code>https://docs.openstack.org/developer/tripleo-quickstart/working-with-extras.html</code></p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/03/17:</strong> Bleh, had to execute several times the deployment command to have it working.. :/ I miss you instack-virt-setup</p>
<p><strong>Updated 2017/03/16:</strong> The --config option seems to be broken, using instead -e @~/deploy-config.yaml.</p>
<p><strong>Updated 2017/03/14:</strong> New workflow added.</p>
<p><strong>Updated 2017/02/27:</strong> Working fine.</p>
<p><strong>Updated 2017/02/23:</strong> Seems to work.</p>
<p><strong>Updated 2017/02/23:</strong> instack-virt-setup is deprecatred :( moving to tripleo-quickstart.</p>
</blockquote>
</div>
I am not a robotI am not a robot
Explanation: No comments.
Disclaimer.
2017-01-27T00:00:00+00:00https://www.pubstack.com/blog/2017/01/27/podCarlos Camacho<p>I am not a robot</p>
<p><img src="/static/pod/2017-01-27-not-a-robot.gif" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
OpenStack and services for BigData applicationsYesterday I had the opportunity of presenting together with Daniel Mellado a brief talk about OpenStack and it’s integration with services to support Big Data tools (OpenStack Sahara). It was a combined talk for two Meetups MAD-for-OpenStack and Data-Science-Madrid. The...2017-01-26T00:00:00+00:00https://www.pubstack.com/blog/2017/01/26/mad-for-openstack-meetupCarlos Camacho<p>Yesterday I had the opportunity of presenting together with Daniel Mellado
a brief talk about OpenStack and its integration with services that support
Big Data tools (OpenStack Sahara).</p>
<p><img src="/static/meetup_openstack.png" alt="" /></p>
<p>It was a combined talk for two Meetups
<a href="https://www.meetup.com/es-ES/MAD-for-OpenStack/events/237131857/">MAD-for-OpenStack</a>
and
<a href="https://www.meetup.com/es-ES/Data-Science-Madrid/events/236991190/">Data-Science-Madrid</a>.</p>
<p>The presentation is stored in
<a href="https://github.com/ccamacho/openstack-presentations/tree/master/2017-01-25-meetup-openstack101-bigdata">GitHub</a>.</p>
<p>Regrets:</p>
<ul>
<li>We prepared a 1-hour presentation that had to be delivered in 20 minutes.</li>
<li>We weren't able to access our demo server.</li>
</ul>
TripleO deep dive session #7 (Undercloud - TripleO UI)This is the seventh release of the TripleO “Deep Dive” sessions. In this session Liz Blanchard and Ana Krivokapic will give us some bits about how to contribute to the TripleO UI project. After watching this session we will have...2017-01-16T16:00:00+00:00https://www.pubstack.com/blog/2017/01/16/tripleo-deep-dive-session-7Carlos Camacho<p>This is the seventh release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions.</p>
<p>In this session Liz Blanchard and Ana Krivokapic will give us some
bits about how to contribute to the <a href="https://github.com/openstack/tripleo-ui">TripleO UI</a> project.
After watching this session you will have a general overview of the project's
history, properties, architecture, and the steps to contribute.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=9TseONVfLR8">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/9TseONVfLR8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Here you will be able to see a quick overview of how to install the UI as a development environment.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/1puSvUqTKzw" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>The summarized steps are also available in <a href="http://www.pubstack.com/blog/2017/01/13/installing-tripleo-ui.html">this</a> blog post.</p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Programmer realityProgrammer reality
Explanation: No comments.
Disclaimer.
2017-01-13T00:00:00+00:00https://www.pubstack.com/blog/2017/01/13/podCarlos Camacho<p>Programmer reality</p>
<p><img src="/static/pod/2017-01-13-programmer-reality.gif" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Installing the TripleO UIThis is a brief recipe to use or install TripleO UI in the Undercloud. First, once installed the Undercloud, the TripleO UI is already available in the 3000 port. Let’s assume you have both root password for your development environment...2017-01-13T00:00:00+00:00https://www.pubstack.com/blog/2017/01/13/installing-tripleo-uiCarlos Camacho<p>This is a brief recipe to use or install TripleO UI
in the Undercloud.</p>
<p>First, once the Undercloud is installed, the TripleO UI
is already available on port 3000.</p>
<p>Let's assume you have the root passwords for both your
development environment and the Undercloud node.</p>
<p>The TripleO UI queries the endpoints (e.g. Keystone)
directly from your browser, so we need the traffic for the
192.168.24.0/24 network forwarded from your workstation to the
Undercloud node in order to reach all required ports
(6385, 5000, 8004, 8080, 9000, 8989, 3000, 13385, 13000, 13004, 13808, 9000, 13989 and 443).</p>
<p>Let's install sshuttle on your workstation.</p>
<pre><code>sudo yum install -y sshuttle
</code></pre>
<p>Now, let’s get the Undercloud IP and configure SSH with a ProxyCommand.</p>
<pre><code>undercloudIp=`ssh root@labserver "arp -e" | grep brext | grep -v incomplete | awk '{print $1}' | sed 's/\/.*$//'`
cat << EOF >> ~/.ssh/config
Host lab
Hostname labserver
User root
Host uc
Hostname $undercloudIp
User root
ProxyCommand ssh -vvvv -W %h:%p root@lab
EOF
</code></pre>
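<p>The <code>undercloudIp</code> pipeline above can be tried first on a canned line, so you can see what each stage keeps. A small sketch with a made-up <code>arp -e</code> entry (the address and MAC below are placeholders):</p>
<pre><code class="language-bash"># Hypothetical line as `arp -e` would print it on the hypervisor
arp_line='192.168.24.2   ether   52:54:00:aa:bb:cc   C   brext'
undercloudIp=$(echo "$arp_line" | grep brext | grep -v incomplete | awk '{print $1}' | sed 's/\/.*$//')
echo "$undercloudIp"
</code></pre>
<p>The greps keep only complete entries on the brext bridge, awk takes the first column, and the sed drops any trailing /mask suffix.</p>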
<p>sshuttle will ask you for your hypervisor and Undercloud root
passwords.</p>
<p>To start forwarding the traffic execute:</p>
<pre><code>sshuttle -e "ssh -vvv" -r root@uc -vvvv 192.168.24.0/24
</code></pre>
<p>Once you have done this, open http://192.168.24.1:3000/ in your browser
and the TripleO UI should be displayed correctly.</p>
<hr />
<p>You will probably receive an error like: <strong>Connection to Keystone is not available</strong>.</p>
<p>This happens because you are trying to access the Keystone endpoint from your
workstation and it fails since the certificate is self-signed.
To fix this, open the developer view in your browser
and check the endpoint you are using to access Keystone,
for example, https://192.168.24.2/keystone/v2.0/tokens;
now open this URL in your browser and accept the certificate. If you
do this, the Keystone error should go away.</p>
<hr />
<p>If you need a TripleO UI development environment, follow these steps:</p>
<p>The first step will be to install the TripleO UI and
all the dependencies.</p>
<pre><code class="language-bash"> cd
sudo yum install -y nodejs npm tmux
git clone https://github.com/openstack/tripleo-ui.git
cd tripleo-ui
npm install
</code></pre>
<p>Now, we need to update all the TripleO UI config files</p>
<pre><code class="language-bash"> cd
cp ~/tripleo-ui/config/tripleo_ui_config.js.sample ~/tripleo-ui/config/tripleo_ui_config.js
echo "Changing the default IP"
export ENDPOINT_ADDR=$(cat stackrc | grep OS_AUTH_URL= | awk -F':' '{print $2}'| tr -d /)
sed -i "s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/$ENDPOINT_ADDR/g" ~/tripleo-ui/config/tripleo_ui_config.js
echo "Removing comments for the keystone URI"
sed -i '/^ \/\/ '\''keystone'\''\:/s/^ \/\///' ~/tripleo-ui/config/tripleo_ui_config.js
echo "Removing comments for the zaqar_default_queue"
sed -i '/^ \/\/ '\''zaqar_default_queue'\''\:/s/^ \/\///' ~/tripleo-ui/config/tripleo_ui_config.js
# Uncomment all the parameters
# sed -i '/^ \/\/ '\''.*'\''\:/s/^ \/\///' ~/tripleo-ui/config/tripleo_ui_config.js
echo "Changing listening port for the dev server, 3000 already used"
sed -i '/port: 3000/s/3000/33000/' ~/tripleo-ui/webpack.dev.js
</code></pre>
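<p>The IP-rewriting sed above replaces every dotted-quad address in the config file with the endpoint address taken from stackrc. Its effect on a single, hypothetical config line (the addresses here are just examples):</p>
<pre><code class="language-bash"># Hypothetical config line; ENDPOINT_ADDR stands in for the value read from stackrc
ENDPOINT_ADDR='10.0.0.5'
line="  'keystone': 'https://192.168.24.2:5000/v2.0',"
rewritten=$(echo "$line" | sed "s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/$ENDPOINT_ADDR/g")
echo "$rewritten"
</code></pre>
<p>Only full four-octet addresses match the pattern, so port numbers and version strings like v2.0 are left untouched.</p>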
<p>In the following step we will use tmux to keep the service running,
for debugging purposes.</p>
<pre><code class="language-bash"> cd
tmux new -s tripleo-ui
cd ~/tripleo-ui/
npm start
</code></pre>
<p>At this stage your node server should be up and running (port 33000).</p>
<p>If you followed the first step to see the default TripleO UI installation,
log in to the TripleO UI at http://192.168.24.1:33000/</p>
<p>Happy TripleOing!</p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/01/13:</strong> First version.</p>
<p><strong>Updated 2017/01/14:</strong> Add default TripleO UI info. Still getting 'Connection to Keystone is not available'
the config params are correct, checking it...</p>
<p><strong>Updated 2017/01/17:</strong> Forwarded all the required ports using sshuttle.</p>
</blockquote>
</div>
Printed TripleO cheatsheets for FOSDEM/DevConf (feedback needed)We are working on preparing some cheatsheets for people jumping into TripleO. So there is an early WIP version for a few cheatsheets that we want to share: TripleO manual installation (Just a copy/paste step-wise process to install TripleO). Deployments -...2016-12-16T00:00:00+00:00https://www.pubstack.com/blog/2016/12/16/printing-tripleo-cheat-sheetCarlos Camacho<p>We are working on preparing some cheatsheets for people jumping into TripleO.</p>
<p>So there is an early WIP version for a few cheatsheets that we want to share:</p>
<blockquote>
<p>TripleO manual installation (Just a copy/paste step-wise process to install TripleO).</p>
<p>Deployments - Debugging tips (Relevant commands to know what’s happening with the deployment).</p>
<p>Deployments - CI (URL’s and resources to check our CI status).</p>
<p>OOOQ installation (Also a step-wise recipe to install OOOQ, does not exist yet).</p>
</blockquote>
<p>We already have some drafts available in <a href="https://github.com/ccamacho/tripleo-cheatsheet">GitHub</a>.</p>
<p>So, we would like to get some feedback from the community and make a
stable version of the cheatsheets in the next week.</p>
<p>Feedback on adding/removing content and general reviews of all of
them are welcome.</p>
<p>Thanks!!!!</p>
Testing composable upgradesThis is a brief recipe about how I’m testing composable upgrades O->P. Based on the original shardy’s notes from this link. The following steps are followed to upgrade your Overcloud from Ocata to latest master (Pike). Deploy latest master TripleO...2016-11-28T00:00:00+00:00https://www.pubstack.com/blog/2016/11/28/testing-composable-upgradesCarlos Camacho<p>This is a brief recipe about how I’m testing
composable upgrades O->P.</p>
<p>Based on the original shardy’s notes
from <a href="http://paste.openstack.org/show/590436/">this</a> link.</p>
<p>The following steps are followed to upgrade your Overcloud from Ocata to latest master (Pike).</p>
<ul>
<li>
<p>Deploy latest master TripleO following <a href="http://www.pubstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html">this</a> post.</p>
</li>
<li>
<p>Remove the current Overcloud deployment.</p>
</li>
</ul>
<pre><code> source stackrc
heat stack-delete overcloud
</code></pre>
<ul>
<li>Remove the Overcloud images and create new ones (for the Overcloud).</li>
</ul>
<pre><code> cd
openstack image list
openstack image delete <image_ID> #Delete all the Overcloud images overcloud-full*
rm -rf /home/stack/overcloud-full.*
export STABLE_RELEASE=ocata
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="https://trunk.rdoproject.org/centos7-ocata/current/"
export DELOREAN_REPO_FILE="delorean.repo"
/home/stack/tripleo-ci/scripts/tripleo.sh --overcloud-images
# Or reuse images
# wget https://images.rdoproject.org/ocata/delorean/current-tripleo/stable/overcloud-full.tar
# tar -xvf overcloud-full.tar
# openstack overcloud image upload --update-existing
</code></pre>
<ul>
<li>Download Ocata tripleo-heat-templates.</li>
</ul>
<pre><code> cd
git clone -b stable/ocata https://github.com/openstack/tripleo-heat-templates tht-ocata
</code></pre>
<ul>
<li>Configure the DNS (needed when upgrading the Overcloud).</li>
</ul>
<pre><code> neutron subnet-update `neutron subnet-list | grep ctlplane-subnet | awk '{print $2}'` --dns-nameserver 192.168.122.1
</code></pre>
<ul>
<li>Deploy an Ocata Overcloud.</li>
</ul>
<pre><code> openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tht-ocata/ \
-e /home/stack/tht-ocata/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tht-ocata/environments/puppet-pacemaker.yaml
</code></pre>
<ul>
<li>Install prerequisites on the nodes (if no DNS is configured this will fail, so make sure your nodes can connect to the Internet).</li>
</ul>
<pre><code>cat > upgrade_repos.yaml << EOF
parameter_defaults:
UpgradeInitCommand: |
set -e
#Master repositories
sudo curl -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7-master/current-passed-ci/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7/delorean-deps.repo
export HOME=/root
cd /root/
if [ ! -d tripleo-ci ]; then
git clone https://github.com/openstack-infra/tripleo-ci.git
else
pushd tripleo-ci
git checkout master
git pull
popd
fi
if [ ! -d tripleo-heat-templates ]; then
git clone https://github.com/openstack/tripleo-heat-templates.git
else
pushd tripleo-heat-templates
git checkout master
git pull
popd
fi
./tripleo-ci/scripts/tripleo.sh --repo-setup
sed -i "s/includepkgs=/includepkgs=python-heat-agent*,/" /etc/yum.repos.d/delorean-current.repo
#yum -y install python-heat-agent-ansible
yum install -y python-heat-agent-*
rm -f /usr/libexec/os-apply-config/templates/etc/puppet/hiera.yaml
rm -f /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles
rm -f /etc/puppet/hieradata/*.yaml
yum remove -y python-UcsSdk openstack-neutron-bigswitch-agent python-networking-bigswitch openstack-neutron-bigswitch-lldp python-networking-odl
crudini --set /etc/ansible/ansible.cfg DEFAULT library /usr/share/ansible-modules/
EOF
</code></pre>
<ul>
<li>Download master tripleo-heat-templates.</li>
</ul>
<pre><code> cd
git clone https://github.com/openstack/tripleo-heat-templates tht-master
</code></pre>
<ul>
<li>Upgrade Overcloud to master</li>
</ul>
<pre><code> cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tht-master/ \
-e /home/stack/tht-master/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tht-master/environments/puppet-pacemaker.yaml \
-e /home/stack/tht-master/environments/major-upgrade-composable-steps.yaml \
-e upgrade_repos.yaml
</code></pre>
<p>Note: if upgrading to a containerized Overcloud (Pike and beyond) do:</p>
<pre><code>cat > docker_registry.yaml << EOF
parameter_defaults:
DockerNamespace: 192.168.24.1:8787/tripleoupstream
DockerNamespaceIsRegistry: true
EOF
# This will take some time...
openstack overcloud container image upload --config-file /usr/share/openstack-tripleo-common/container-images/overcloud_containers.yaml
openstack overcloud container image prepare \
--namespace tripleoupstream \
--tag latest \
--env-file docker-centos-tripleoupstream.yaml
cd
source ~/stackrc
export THT=/home/stack/tht-master
openstack overcloud deploy --templates $THT \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
-e $THT/overcloud-resource-registry-puppet.yaml \
-e $THT/environments/puppet-pacemaker.yaml \
-e $THT/environments/major-upgrade-composable-steps.yaml \
-e upgrade_repos.yaml \
-e $THT/environments/docker.yaml \
-e $THT/environments/docker-ha.yaml \
-e $THT/environments/major-upgrade-composable-steps-docker.yaml \
-e docker-centos-tripleoupstream.yaml \
-e docker_registry.yaml
</code></pre>
<ul>
<li>Run the converge step (not tested on the containerized upgrade).</li>
</ul>
<pre><code> cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tht-master/ \
-e /home/stack/tht-master/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tht-master/environments/puppet-pacemaker.yaml \
-e /home/stack/tht-master/environments/major-upgrade-converge.yaml
</code></pre>
<p>If the last steps finish successfully, you have just upgraded your Overcloud from Ocata to Pike (latest master).</p>
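<p>As a quick sanity check (a sketch; the <code>stack_is_complete</code> helper name is mine), you can confirm the overcloud stack reached <code>UPDATE_COMPLETE</code> after the upgrade:</p>

```shell
# stack_is_complete is a hypothetical helper: succeeds when the given
# stack listing reports UPDATE_COMPLETE for the overcloud stack.
stack_is_complete() {
  echo "$1" | grep -q 'overcloud.*UPDATE_COMPLETE'
}

# On the undercloud (guarded so the snippet is a no-op elsewhere):
if command -v heat >/dev/null 2>&1; then
  source ~/stackrc
  if stack_is_complete "$(heat stack-list 2>/dev/null)"; then
    echo "Overcloud upgrade completed."
  else
    echo "Overcloud stack is not (yet) UPDATE_COMPLETE."
  fi
fi
```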
<p>For more resources related to TripleO deployments, check out the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA">TripleO YouTube channel</a>.</p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/01/28:</strong> Working fine.</p>
</blockquote>
</div>
Oh yeah, NES classic mini!I had the chance to find a treasure last week, the NES classic mini in game.es for its official price (Shame on you speculators!!!!). So.. I just loved this tiny console since I just opened the package. Lots of people...2016-11-27T00:00:00+00:00https://www.pubstack.com/blog/2016/11/27/nes-classic-miniCarlos Camacho<p>I had the chance to find a treasure last week, the NES classic mini in
<a href="https://www.game.es/nintendo-classic-mini-nes-electronica-129248">game.es</a>
for its official price (Shame on you, speculators!!!!).</p>
<p><img src="/static/nes-mini-classic-2016-nintendo.jpg" alt="" /></p>
<p>So… I have loved this tiny console since the moment I opened the package.
Lots of people complain about the controller cord being too short
(there are workarounds for this), so for me it’s not a problem.</p>
<p>It boots up in less than 5 seconds, shows a nice interface with
all 30 pre-loaded games ready to play in an alphabetically sorted
list, and you get 4 memory slots for each game.</p>
<p>Playing with this console is like going back 20 years,
but so much better: perfect pixels and really good sound emulation.</p>
<p>Last statement from Nintendo:</p>
<blockquote>
<p>The Nintendo Entertainment System:
NES Classic Edition system is a hot item,
and we are working hard to keep up with consumer demand.
There will be a steady flow of additional systems through the holiday
shopping season and into the new year. Please contact your local
retailers to check product availability. A selection of participating
retailers can be found at www.Nintendo.com/nes-classic.</p>
</blockquote>
<p>You just need to keep checking retailers’ pages until you find stock again.</p>
<p>If you can get one, go for it (But don’t buy it from speculators)!</p>
TripleO cheatsheetThis is a cheatsheet of some of my regularly used commands to test, develop or debug TripleO deployments. Deployments swift download overcloud Download the overcloud swift container files in the current folder (With the rendered j2 templates). heat resource-list --nested-depth 5...2016-11-26T00:00:00+00:00https://www.pubstack.com/blog/2016/11/26/tripleo-cheatsheetCarlos Camacho<p>This is a cheatsheet of some of my regularly used
commands to test, develop or debug
TripleO deployments.</p>
<p>Deployments</p>
<ul>
<li>
<code class="highlighter-rouge">
swift download overcloud
</code><br />
<p class="tdesc">
Download the <code class="highlighter-rouge">overcloud</code> swift container files in the current folder (With the rendered j2 templates).
</p>
</li>
<li>
<code class="highlighter-rouge">
heat resource-list --nested-depth 5 overcloud | grep FAILED
</code><br />
<p class="tdesc">
Show resources, filtering to get those who have failed.
</p>
</li>
<li>
<code class="highlighter-rouge">
heat deployment-show <deployment_ID>
</code><br />
<p class="tdesc">
Get the deployment details for <deployment_ID>.
</p>
</li>
<li>
<code class="highlighter-rouge">
openstack image list
</code><br />
<p class="tdesc">
List images.
</p>
</li>
<li>
<code class="highlighter-rouge">
openstack image delete <image_ID>
</code><br />
<p class="tdesc">
Delete <image_ID>.
</p>
</li>
<li>
<code class="highlighter-rouge">
wget http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/<release>/delorean/overcloud-full.tar
</code><br />
<p class="tdesc">
Download <release> overcloud images tar file [liberty|mitaka|newton|...]
</p>
</li>
<li>
<code class="highlighter-rouge">
openstack overcloud image upload --update-existing
</code><br />
<p class="tdesc">
Once downloaded the images, this command will upload them to glance.
</p>
</li>
</ul>
<p>Debugging CI</p>
<ul>
<li>
<code class="highlighter-rouge">
http://status.openstack.org/zuul/
</code><br />
<p class="tdesc">
Check submissions CI status.
</p>
</li>
<li>
<code class="highlighter-rouge">
wget -e robots=off -r --no-parent <patch_ID>
</code><br />
<p class="tdesc">
Download all logs from <patch_ID>.
</p>
</li>
<li>
<code class="highlighter-rouge">
console.html
</code>
&
<code class="highlighter-rouge">
logs/postci.txt.gz
</code><br />
<p class="tdesc">
Relevant log files when debugging a TripleO Gerrit job.
</p>
</li>
</ul>
<blockquote>
<p>If you think there are more
useful commands to add to the list
just add a <a href="https://github.com/pubstack/pubstack.github.io/issues/22">comment</a>!</p>
</blockquote>
<p><strong>Happy TripleOing!</strong></p>
IT detoxIT detox
Explanation: No comments.
Disclaimer.
2016-11-25T00:00:00+00:00https://www.pubstack.com/blog/2016/11/25/podCarlos Camacho<p>IT detox</p>
<p><img src="/static/pod/2016-11-25-it-sicherheit-comic.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
The Venezuelan mugThe Venezuelan mug
Explanation: No comments.
Disclaimer.
2016-11-24T00:00:00+00:00https://www.pubstack.com/blog/2016/11/24/podCarlos Camacho<p>The Venezuelan mug</p>
<p><img src="/static/pod/2016-11-24-taza.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
I just don't like the new drafted TripleO logoI just don’t like the new drafted TripleO logo
Explanation: No comments.
Disclaimer.
2016-11-23T00:00:00+00:00https://www.pubstack.com/blog/2016/11/23/podCarlos Camacho<p>I just don’t like the new drafted TripleO logo</p>
<p><img src="/static/pod/2016-11-23-owl-angry.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
How to recover a commit from GitHub's ReflogWriting this blog post, I suddenly and without realizing it ended up squashing/removing the commit holding those changes. The first thing that came to my mind was to locally check the Reflog to restore the commit, but sadly I had...2016-11-23T00:00:00+00:00https://www.pubstack.com/blog/2016/11/23/how-to-recover-a-commit-from-github-reflogCarlos Camacho<p>Writing
<a href="http://www.pubstack.com/blog/2016/11/21/openstack-summit-2016-bcn.html">this</a> blog post,
I suddenly, without realizing it, ended up squashing/removing the commit
holding those changes.</p>
<p><img src="/static/crashed-those-commits.png" alt="" /></p>
<p>The first thing that came to my mind was to locally check the Reflog to
restore the commit, but sadly I had removed the repo from my laptop and the
Reflog didn’t exist anymore. Then I went into panic, as I didn’t want to
lose the hours it took me to write the post.
Then I said… Does GitHub have a Reflog? The sweet answer: yes.</p>
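<p>For reference, this is what the local recovery would have looked like if the repo had still existed. The snippet below is a self-contained sketch that simulates losing a commit in a throwaway repo and restoring it from the local reflog (all paths and messages are made up):</p>

```shell
# Simulate the accident in a throwaway repo, then recover via the reflog.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "first draft"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "the lost commit"

git reset -q --hard HEAD~1   # "oops": the commit is gone from the branch...
git reflog | head -n 3       # ...but it is still listed in the reflog

# Grab the sha the reflog points at and restore it on a new branch.
lost=$(git rev-parse 'HEAD@{1}')
git branch recovered "$lost"
git log -1 --format=%s recovered   # prints: the lost commit
```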
<p>So let’s learn how to recover a commit from the GitHub Reflog.</p>
<p>Relevant strings to fill:</p>
<ul>
<li><user>: The user holding the git repo.</li>
<li><repo>: The repository name.</li>
<li><recover-branch-name>: The remote branch that you will create.</li>
<li><sha-goes-here>: The commit sha to be restored.</li>
</ul>
<p>Let’s curl GitHub to get all commits in the Reflog:</p>
<pre><code>$ curl https://api.github.com/repos/<user>/<repo>/events
</code></pre>
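<p>The events payload is verbose; a small helper (hypothetical, requires <code>jq</code>) can pull out just the pushed commit SHAs so you don’t have to read the raw JSON by eye:</p>

```shell
# extract_push_shas: filter the events JSON down to the commit SHAs
# carried by PushEvent entries. Pipe the curl output from above into it.
extract_push_shas() {
  jq -r '.[] | select(.type? == "PushEvent") | .payload.commits[]?.sha'
}

# usage (fill in <user>/<repo> as before):
# curl -s https://api.github.com/repos/<user>/<repo>/events | extract_push_shas
```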
<p>Now let’s create a remote branch with your commit.</p>
<p>Choose your commit sha and run:</p>
<pre><code>$ curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d '{"ref":"refs/heads/<recover-branch-name>", "sha":"<sha-goes-here>"}' https://api.github.com/repos/<user>/<repo>/git/refs
</code></pre>
<p>You should now have a branch called <recover-branch-name>
in your GitHub repo. Now you can safely cherry-pick it to your
master branch, or fetch those changes locally and do
whatever you want with them.</p>
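<p>To bring the commit back locally, clone the repo, fetch the recovery branch, and cherry-pick it. The snippet below is a self-contained sketch that uses a local repository as a stand-in for GitHub (all names are made up):</p>

```shell
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the GitHub repo, with the branch created via the API above.
git init -q origin-repo && cd origin-repo
echo base > file.txt
git add file.txt
git -c user.email=a@b -c user.name=demo commit -qm "base"
git checkout -qb recover-branch-name
echo recovered > post.md
git add post.md
git -c user.email=a@b -c user.name=demo commit -qm "lost blog post"
git checkout -q -        # back to the default branch
cd ..

# Locally: clone, fetch the recovery branch, cherry-pick the lost commit.
git clone -q origin-repo work && cd work
git fetch -q origin recover-branch-name
git -c user.email=a@b -c user.name=demo cherry-pick FETCH_HEAD
cat post.md   # prints: recovered
```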
<p>So yeah, I have to admit that I somehow <s>squashed</s>crashed those commits…</p>
<p>I hope these tips are useful if you ever find yourself in trouble like me,
trying to recover some lost commits from GitHub.</p>
Humanity...Humanity…
Explanation: No comments.
Disclaimer.
2016-11-22T00:00:00+00:00https://www.pubstack.com/blog/2016/11/22/podCarlos Camacho<p>Humanity…</p>
<p><img src="/static/pod/2016-11-22-humanity-patch.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Enabling nested KVM support for a instack-virt-setup deployment.The following bash snippet will enable nested KVM support in the host when deploying TripleO using instack-virt-setup. This will work in AMD or Intel architectures. #!/bin/bash echo "Checking if nested KVM is enabled in the host." ARCH=$(lscpu | grep Architecture...2016-11-21T12:30:00+00:00https://www.pubstack.com/blog/2016/11/21/enabling-nested-kvm-on-tripleo-hostCarlos Camacho<p>The following bash snippet will enable
nested KVM support in the host when deploying TripleO
using instack-virt-setup.</p>
<p>This will work in AMD or Intel architectures.</p>
<pre><code class="language-bash">#!/bin/bash
echo "Checking if nested KVM is enabled in the host."
ARCH=$(lscpu | grep Architecture | head -1 | awk '{print $2}')
if [[ $ARCH == 'x86_64' ]]; then
ARCH_BRAND=intel
KVM_STATUS_FILE=/sys/module/kvm_intel/parameters/nested
ENABLE_NESTED_KVM=Y
else
ARCH_BRAND=amd
KVM_STATUS_FILE=/sys/module/kvm_amd/parameters/nested
ENABLE_NESTED_KVM=1
fi
if [[ -f $KVM_STATUS_FILE ]]; then
KVM_CURRENT_STATUS=$(head -n 1 $KVM_STATUS_FILE)
if [[ "${KVM_CURRENT_STATUS^^}" != "${ENABLE_NESTED_KVM^^}" ]]; then
echo "This host does not have nested KVM enabled, enabling."
sudo rmmod kvm-$ARCH_BRAND
sudo sh -c "echo 'options kvm-$ARCH_BRAND nested=$ENABLE_NESTED_KVM' >> /etc/modprobe.d/dist.conf"
sudo modprobe kvm-$ARCH_BRAND
else
echo "Nested KVM support is already enabled."
fi
else
echo "$KVM_STATUS_FILE does not exist."
fi
</code></pre>
<p>By default nested virtualization with KVM is disabled in the
host, so in order to run the overcloud-pingtest correctly we have two
options. Either run the previous snippet on the host,
or, when deploying the Compute node in a virtual machine
add <code>--libvirt-type qemu</code> to the deployment command.
Otherwise launching instances on the deployed overcloud will fail.</p>
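<p>To quickly see which state the host is in before choosing an option, the following read-only check (a sketch; the <code>nested_kvm_status</code> helper name is mine) prints the current nested flag for either vendor:</p>

```shell
# nested_kvm_status: print the nested virtualization flag without
# modifying anything on the host. Typically Y/1 (enabled) or N/0 (disabled).
nested_kvm_status() {
  local f
  for f in /sys/module/kvm_intel/parameters/nested \
           /sys/module/kvm_amd/parameters/nested; do
    if [ -f "$f" ]; then
      cat "$f"
      return 0
    fi
  done
  echo "kvm module not loaded"
}

nested_kvm_status
```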
<p>Here you have an example of the deployment command, fixing
libvirt to qemu.</p>
<pre><code class="language-bash">cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
</code></pre>
<p>Have a happy TripleO deployment!</p>
Ocata OpenStack summit 2016 - BarcelonaA few weeks ago I had the opportunity to attend the Barcelona OpenStack summit ‘Ocata design session’, and this post collects some overall information about it. To do that, I’m crawling through my paper...2016-11-21T00:00:00+00:00https://www.pubstack.com/blog/2016/11/21/openstack-summit-2016-bcnCarlos Camacho<p>A few weeks ago I had the opportunity to attend the Barcelona
OpenStack summit ‘Ocata design session’, and this post collects some overall
information about it. To do that, I’m crawling through my paper notes to
highlight the aspects that IMHO are relevant.</p>
<p><img src="/static/openstack-summit-2016-bcn.jpeg" alt="" /></p>
<hr />
<blockquote>
<p>Sessions list by date.</p>
</blockquote>
<h3 id="tuesday---oct-25th">Tuesday - Oct. 25th</h3>
<ul>
<li>RDO Booth: Carlos Camacho TripleO composable roles demo (12:15pm-12:55pm)</li>
<li>What the Heck is OoO: Owls All the Way Down (5:55pm – 6:35)</li>
</ul>
<h3 id="wednesday---oct-26th">Wednesday - Oct. 26th</h3>
<ul>
<li>Anomaly Detection in Contrail Networking (1:15pm-1:29pm)</li>
<li>Freezer: Plugin Architecture and Deduplication (3:05pm-3:45pm)</li>
<li>TripleO: Containers - Current Status and Roadmap (3:55pm-4:35pm)</li>
<li>TripleO: Work Session - Growing the team (5:05pm-5:45pm)</li>
<li>TripleO: Work Session - CI - current status and roadmap (5:55pm-6:35pm)</li>
</ul>
<h3 id="thursday---oct-27th">Thursday - Oct. 27th</h3>
<ul>
<li>Zuul v3: OpenStack and Ansible Native CI/CD (11:00am-11:40am)</li>
<li>The Latest in the Container World and the Role of Container in OpenStack (11:50am-12:30pm)</li>
<li>TripleO: Upgrades - current status and roadmap (1:50pm-2:30pm)</li>
<li>Mistral: Mistral and StackStorm (3:30pm-4:10pm)</li>
<li>Nokia: TOSCA & Mistral: Orchestrating End-to-End Telco Grade NFV (5:30pm-6:10pm)</li>
</ul>
<h3 id="friday---oct-28th">Friday - Oct. 28th</h3>
<ul>
<li>TripleO: Work Session - Composable Undercloud deployment with Heat (9:00am-9:20am)</li>
<li>TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements (9:20am-9:40am)</li>
<li>TripleO: Work Session - Multiple topics - Blueprints, specs, tools and Ocata summary. (9:50am-10:30am)</li>
</ul>
<hr />
<p>Beyond the analysis of “What I did there in a week”, I want
to state a few facts relevant to me.</p>
<blockquote>
<p>Why is it important to attend a design session (my case: upstream TripleO developer)?</p>
</blockquote>
<p>I think that when working remotely on OpenStack projects, for example
on such a complex project as TripleO, it is really hard to know what other
people are doing. Design sessions therefore force engineers to become aware
of their peers’ work on different OpenStack projects, or even in the same project ;)</p>
<p>This will give you some ideas for future features, new services to integrate,
issues that you might have in the future among many others. Also for TripleO
specific case if you are interested in working in a specific service, you can
get into those services sessions to know more about them.</p>
<blockquote>
<p>Where is the value for companies when sending engineers to design sessions?</p>
</blockquote>
<p>There might be several answers to this question, but I believe the overall
answer is that sending engineers to the design sessions allows them
to stay aligned with company goals, especially when several companies are
involved in the same project. It also allows team members to get to know each other;
maybe this is a soft benefit, but for me it is as important as being aligned on
future features or architectural agreements.</p>
<blockquote>
<p>Is it really mandatory to send people to design sessions?</p>
</blockquote>
<p>I think this is not mandatory at all, but a relevant factor
might be whether all the knowledge/value generated in those sessions can
be delivered to and processed by the rest of the team members.</p>
<blockquote>
<p>Do attendees gain value when attending these design sessions?</p>
</blockquote>
<p>Of course they gain a lot of value:</p>
<ul>
<li>You might have a wrong impression about people on IRC; meeting them in person can change this dramatically ‘or not’.</li>
<li>Get to know and engage better with your team members and other peers.</li>
<li>Better alignment with your project goals.</li>
<li>Discuss blueprints and have a better understanding about the features life-cycle and roadmap.</li>
<li>Know what other people are doing.</li>
<li>Improve your overall knowledge of other projects (this time is for doing exactly that, not in your free time, even if, like me, you enjoy it).</li>
</ul>
<blockquote>
<p>If we are in a design session, are we in “working mode” or just in “learning mode”?</p>
</blockquote>
<p>The hardest question by far; I felt guilty that I was not working enough.
There were a lot of distractions if you wanted to actually make some time for coding or
reviewing submissions. But that’s the thing: I believe it is a good time to align, and after
the summit you will always have time for coding :)</p>
<blockquote>
<p>What about some business alignment?</p>
</blockquote>
<p>I believe this is also an important factor in understanding how OpenStack
evolves over time, release announcements, and how summits themselves are evolving.</p>
Querying haproxy data using socat from CLICurrently, most users don’t have any way to check the haproxy status in a TripleO virtual deployment (via web browser) without previously creating some tunnels for that purpose. So let’s check some haproxy data from our controller. Check...2016-11-04T00:00:00+00:00https://www.pubstack.com/blog/2016/11/04/querying-haproxy-data-using-socat-from-cliCarlos Camacho<p>Currently, most users don’t have any way to check the haproxy status in a TripleO virtual deployment (via web browser) without previously creating some tunnels for that purpose.</p>
<p>So let’s check some haproxy data from our controller.</p>
<p>Check the controller IP:</p>
<pre><code class="language-bash">nova list
</code></pre>
<p>Connect to the controller:</p>
<pre><code>ssh heat-admin@192.0.2.22
</code></pre>
<p>Now, we need to have socat installed:</p>
<pre><code>sudo yum install -y socat
</code></pre>
<p>By default haproxy is already configured to dump stats data to <code>/var/run/haproxy.sock</code>;
now let’s query haproxy to get some data from it:</p>
<ul>
<li>Show details like haproxy version, PID, current connections, session rates, tasks, among others.</li>
</ul>
<pre><code>echo "show info" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
<ul>
<li>Echo the stats for all frontends and backends as a CSV.</li>
</ul>
<pre><code>echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
<ul>
<li>Display information about errors if there are any.</li>
</ul>
<pre><code>echo "show errors" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
<ul>
<li>Display open sessions.</li>
</ul>
<pre><code>echo "show sess" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
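<p>The <code>show stat</code> CSV is wide; a small filter (a hypothetical helper, based on the documented haproxy CSV column order where field 18 is <code>status</code>) highlights anything that is not UP/OPEN:</p>

```shell
# haproxy_not_up: print proxy/server pairs whose status is not UP or OPEN.
# In the haproxy CSV, field 1 is pxname, field 2 is svname, field 18 is status.
haproxy_not_up() {
  awk -F, 'NR > 1 && $18 != "" && $18 != "UP" && $18 != "OPEN" \
             { print $1 "/" $2 ": " $18 }'
}

# usage:
# echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio | haproxy_not_up
```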
How to un-brick a Sony Xperia S and install Oneofakind Android 6.0.1_r10Ok, after playing with converting partitions to f2fs I made a huge mistake and corrupted the partition table of my mobile.. Yeahp, I messed it up. The following notes are meant to be a reminder about how to follow this process...2016-10-08T00:00:00+00:00https://www.pubstack.com/blog/2016/10/08/un-brick-sony-xperia-sCarlos Camacho<p>Ok, after playing with converting partitions to f2fs I made a huge mistake
and corrupted the partition table of my mobile… Yeahp, I messed it up.</p>
<p><img src="/static/fist.gif" alt="" /></p>
<p>The following notes are meant to be a reminder of how to follow this
process, as it’s a cumbersome and time-consuming task
to get the correct versions of the firmware and tools needed.</p>
<p>A hard brick is the state of an Android device that occurs when the device
won’t boot at all, i.e. no boot loop, no recovery, and no charging.</p>
<p>The following steps to fix the phone will flash a new ROM and
upgrade it to Android 6.0.1.</p>
<h1 id="prerequisites">Prerequisites</h1>
<ul>
<li>Download all the required files from <a href="http://goo.gl/aBYq5w">here</a>
(TWRP, flashtool, flashtool drivers, OEM firmware, root exploit and
Oneofakind ROM).</li>
<li>Have a PC near to you.</li>
<li>A USB cable.</li>
<li>1 beer (Save it for the end of the tutorial).</li>
</ul>
<h1 id="part-1">Part 1</h1>
<p>The first part of the tutorial will get you a working
installation of Android 4.1 Jelly Bean on our bricked phone.</p>
<h2 id="install-flashtool">Install Flashtool</h2>
<p>Download and install Flashtool; this tool is in the package that
you should have downloaded in the prerequisites.</p>
<h2 id="install-flashtool-drivers-for-your-phone">Install flashtool drivers for your phone</h2>
<p>In order to install the flashtool drivers you need to run some
preliminary steps, as Windows won’t let you install them because they are
not signed (also inside the prerequisites package).</p>
<p>Run the following steps from your PC</p>
<ul>
<li>Press the Windows key + R together and in the ‘Run’ box type
<code>shutdown.exe /r /o /f /t 00</code></li>
<li>Now make the following selections to boot into
the Start Up Setting Screen:
Troubleshoot > Advanced options > Start Up Settings > Restart</li>
<li>Then, when the machine restarts, select number 7 i.e.
‘Disable driver signature enforcement’. Your machine will start
with Driver signing enforcement disabled until the next reboot.</li>
</ul>
<p>Now you can install the Flashtool drivers.</p>
<ul>
<li>From the installation options, select the flashmode and fastboot drivers.</li>
</ul>
<p>Windows will warn that the driver is not signed
and will require you to confirm the installation.
Once the installation is complete, reboot the machine.</p>
<h2 id="start-the-xperia-s-in-flashmode">Start the Xperia S in flashmode</h2>
<p>Switch off your Xperia S first, then
press and hold the <code>Volume Down</code> button.
Connect to your PC using a USB Cable
while holding down the <code>Volume Down</code>
button on your Xperia S. Your phone
should be in Flash Mode now and the
device’s LED light should turn Green.</p>
<h2 id="flash-the-image-using-flashtool">Flash the image using flashtool</h2>
<p>With your phone in flashmode, open Flashtool
and then click on the lightning bolt.
Select the folder where you have the
firmware image <code>LT26i_6.2.B.1.96_World.ftf</code>
(downloaded from the prerequisites package).
Check all the items in the wipe list and
uncheck all items in the exclude list.
Click flash and wait until it finishes.</p>
<p>Reboot the phone and you should have
Android 4.1 Jelly Bean installed and ready to use,
but we still want to install Android 6.0.1,
so the first part of the tutorial is finished.</p>
<hr />
<h1 id="part-2">Part 2</h1>
<p>After getting a working installation of Android
on our now un-bricked phone,
the next steps upgrade the standard
ROM to Oneofakind Android 6.0.1_r10.</p>
<h2 id="unlock-the-phone-boot-loader">Unlock the phone boot loader</h2>
<p>Follow the instructions from the official
<a href="http://developer.sonymobile.com/unlockbootloader/unlock-yourboot-loader/">Sony website</a></p>
<h2 id="gain-root-access-to-the-phone">Gain root access to the phone.</h2>
<p>First in your mobile phone.</p>
<ul>
<li>
<p>Activate USB Debugging, Setting -> Developer Options</p>
</li>
<li>
<p>Activate Unknown Sources, Setting -> Security</p>
</li>
</ul>
<p>Now open the application <code>RunMe.bat</code> within
Root_with_Restore_by_Bin4ry_v36
and select option 2. Your phone should be rooted and rebooted.</p>
<h2 id="installing-twrp">Installing TWRP</h2>
<p>Switch off your Xperia S first.
Press and hold the <code>Volume Up</code> button.
Connect to your PC using a USB Cable while
holding down the <code>Volume Up</code> button on your Xperia S.
Your Xperia S should be in Fastboot Mode now and the
device’s LED light should turn Blue. Then run:</p>
<pre><code>fastboot flash boot twrp-3.0.2-0-nozomi.img
</code></pre>
<p>At this point you should have the recovery installed.</p>
<h2 id="re-partitioning">Re-partitioning</h2>
<p>Let’s start TWRP. Turn on the phone and when the Sony
screen appears press the <code>Volume Up</code> button.</p>
<p>Go to Mount in the TWRP GUI (uncheck system, data, cache),
open the terminal, and run:</p>
<pre><code>fdisk -l /dev/block/mmcblk0
</code></pre>
<p>Copy the output of the command to a file as your backup.</p>
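<p>A convenient way to keep that backup (a sketch; run from the PC, assuming <code>adb</code> is available and the device is connected) is to pull the listing into a timestamped file:</p>

```shell
# Save the partition listing from the device into a timestamped file,
# so it can be consulted later if the re-partitioning goes wrong.
backup="partition-table-$(date +%Y%m%d).txt"
if command -v adb >/dev/null 2>&1; then
  adb shell fdisk -l /dev/block/mmcblk0 > "$backup"
  echo "Partition table saved to $backup"
else
  echo "adb not found; install the Android platform tools first."
fi
```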
<p>The interesting parts are the following lines:</p>
<pre><code>/dev/block/mmcblk0p14 42945 261695 7000024 83 Linux
/dev/block/mmcblk0p15 261696 954240 22161424 83 Linux
</code></pre>
<p>The values may not be exactly the same for you, depending
on the size of your /data (p14) and /sdcard (p15). Run:</p>
<pre><code>fdisk /dev/block/mmcblk0
</code></pre>
<p>The following steps will merge p14 and p15 into one big partition for data;
do this, otherwise you won’t be able to use the internal SD.</p>
<pre><code>Command (m for help): p
Command (m for help): d
Partition number (1-15): 15
Command (m for help): d
Partition number (1-14): 14
Command (m for help): n
First cylinder (769-954240, default 769): 42945
Last cylinder or +size or +sizeM or +sizeK (42945-954240, default 954240): (just press enter if the default value is the good one)
Using default value 954240
Command (m for help): t
Partition number (1-14): 14
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table
</code></pre>
<p>Once the re-partitioning is done, do NOT do anything
else and just reboot the device (to be sure
that the partition table is taken into account by the kernel).</p>
<p>Now we will convert /data and /cache to F2FS.
Ext4 is not supported anymore on nAOSProm. You don’t
need to worry about the 16384 bytes reserved
for encryption; TWRP will do it for you.</p>
<h2 id="install-and-run-twrp-again">Install and run TWRP again</h2>
<p>Switch off your Xperia S first.
Press and hold the <code>Volume Up</code> button
Connect to your PC using a USB cable while holding down the <code>Volume Up</code> button on your Xperia S.
Your phone should be in Fastboot Mode now and the device’s LED light should turn Blue; then run:</p>
<pre><code>fastboot flash boot twrp-3.0.2-0-nozomi.img
</code></pre>
<p>Let’s start TWRP. Turn on the phone and when the Sony
screen appears press the <code>Volume Up</code> button.</p>
<p>Mount all the partitions.</p>
<p>From the menu click:</p>
<ul>
<li>
<p>Wipe -> Advanced Wipe -> select Data -> Repair or Change File system -> Change File System -> F2FS -> Swipe to Change</p>
</li>
<li>
<p>Wipe -> Advanced Wipe -> select Cache -> Repair or Change File system -> Change File System -> F2FS -> Swipe to Change</p>
</li>
</ul>
<p>Once done, again, do NOT do anything else and just reboot the device (required by TWRP).</p>
<p>Congratulations, if everything is fine you should be able to mount /cache and /data and see a big /data volume of around 28 GiB.</p>
<p>Boot the phone again in recovery mode (TWRP).</p>
<p>Start TWRP, mount all partitions and from your PC run:</p>
<pre><code>adb push open_gapps-arm-6.0-pico-20161006.zip /sdcard/
adb push oneofakind_nozomi-27-Jan-2016.zip /sdcard/
</code></pre>
<p>This will load those files into your smartphone allowing
the installation of Oneofakind.</p>
<h2 id="final-installation-of-oneofakind">Final installation of Oneofakind</h2>
<p>From the TWRP menu click install and select the two images that
we just uploaded (oneofakind_nozomi-27-Jan-2016.zip and open_gapps-arm-6.0-pico-20161006.zip).</p>
<p>Click flash and wait until the installation is complete; in the meanwhile,
open the beer (last prerequisite) and drink it.</p>
<p>Cheers!</p>
Deployment tips for puppet-tripleo changesThis post will describe different ways of debugging puppet-tripleo changes. Deploying puppet-tripleo using gerrit patches or source code repositories In some cases, dependencies should be merged first in order to test newer patches when adding new features to THT. With...2016-09-29T09:00:00+00:00https://www.pubstack.com/blog/2016/09/29/tripleo-debugging-tipsCarlos Camacho<p>This post will describe different ways of debugging puppet-tripleo changes.</p>
<h2 id="deploying-puppet-tripleo-using-gerrit-patches-or-source-code-repositories">Deploying puppet-tripleo using gerrit patches or source code repositories</h2>
<p>In some cases, dependencies should be merged first in order to test newer
patches when adding new features to THT. With the following procedure, the user
will be able to create the overcloud images using WorkInProgress patches from
gerrit code review without having them merged (for CI testing purposes).</p>
<p>If you are using third-party repos included in the overcloud image, like e.g. the
puppet-tripleo repository, your changes will not be available by default in the
overcloud until you write them into the overcloud image (by default:
overcloud-full.qcow2).</p>
<p>In order to make <del>quick</del> changes to the overcloud image for testing purposes, you
can:</p>
<p>Export the paths to your submission by following an
<a href="http://tripleo.org/developer/in_progress_review.html">In-Progress review</a>:</p>
<pre><code class="language-bash"> export DIB_INSTALLTYPE_puppet_tripleo=source
export DIB_REPOLOCATION_puppet_tripleo=https://review.openstack.org/openstack/puppet-tripleo
export DIB_REPOREF_puppet_tripleo=refs/changes/25/310725/14
</code></pre>
<p>In order to avoid noise on IRC, it is possible to clone puppet-tripleo and
apply the changes from your GitHub account. In some cases this is particularly
useful as there is no need to update the patchset number.</p>
<pre><code class="language-bash"> export DIB_INSTALLTYPE_puppet_tripleo=source
export DIB_REPOLOCATION_puppet_tripleo=https://github.com/<usergoeshere>/puppet-tripleo
</code></pre>
<p>Remove previously created images from glance and from the user home folder by:</p>
<pre><code class="language-bash"> rm -rf /home/stack/overcloud-full.*
glance image-delete overcloud-full
glance image-delete overcloud-full-initrd
glance image-delete overcloud-full-vmlinuz
</code></pre>
<p>After this step the images can be recreated by executing:</p>
<pre><code class="language-bash"> ./tripleo-ci/scripts/tripleo.sh --overcloud-images
</code></pre>
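<p>You can verify that the rebuilt images landed in glance (a quick sanity check; the image names are the defaults used by tripleo.sh) with:</p>
<pre><code class="language-bash">glance image-list | grep overcloud
# Expect overcloud-full, overcloud-full-initrd and overcloud-full-vmlinuz
</code></pre>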
<h2 id="debugging-puppet-tripleo-from-overcloud-images">Debugging puppet-tripleo from overcloud images</h2>
<p>For debugging purposes, it is possible to mount the overcloud .qcow2 file:</p>
<pre><code class="language-bash"> #Install the libguest tool:
sudo yum install -y libguestfs-tools
#Create a temp folder to mount the overcloud-full image:
mkdir /tmp/overcloud-full
#Mount the image:
guestmount -a overcloud-full.qcow2 -i --rw /tmp/overcloud-full
 #Check and validate all your changes to the overcloud image under /tmp/overcloud-full
 # (for example, inspect the puppet modules in /tmp/overcloud-full/opt/puppet-modules/tripleo)
#Umount the image
sudo umount /tmp/overcloud-full
</code></pre>
<p>From the mounted image file it is also possible to run, for testing purposes,
the puppet manifests by using <code>puppet apply</code> and including your manifests:</p>
<pre><code class="language-bash"> sudo puppet apply -v --debug --modulepath=/tmp/overcloud-full/opt/stack/puppet-modules -e "include ::tripleo::services::time::ntp"
</code></pre>
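<p>If you only want to see what a manifest would change without touching the mounted image, <code>puppet apply</code> also accepts <code>--noop</code>; grepping its output is a quick way to spot failures (a sketch, using the same module path as the command above):</p>
<pre><code class="language-bash">sudo puppet apply -v --noop --modulepath=/tmp/overcloud-full/opt/stack/puppet-modules \
  -e "include ::tripleo::services::time::ntp" 2>&1 | grep -iE "error|fail" \
  || echo "no errors reported"
</code></pre>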
Crazy murdering robotsCrazy murdering robots
Explanation: No comments.
Disclaimer.
2016-09-22T00:00:00+00:00https://www.pubstack.com/blog/2016/09/22/podCarlos Camacho<p>Crazy murdering robots</p>
<p><img src="/static/pod/2016-09-22-killing_robot.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
6 or 96 or 9
Explanation: No comments.
Disclaimer.
2016-09-21T00:00:00+00:00https://www.pubstack.com/blog/2016/09/21/podCarlos Camacho<p>6 or 9</p>
<p><img src="/static/pod/2016-09-21-6-or-9.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
So You Think You Are Ready For The RHCSA Exam?I just want to share this amazing blog post from Finnbarr P. Murphy, originally from the Musing of an OS plumber blog. Again, this is awesome! – So you have studied hard, maybe even attended a week or two of...2016-09-18T00:00:00+00:00https://www.pubstack.com/blog/2016/09/18/So-You-Think-You-Are-Ready-For-The-RHCSA-ExamFinnbarr P. Murphy<p>I just want to share this amazing blog post from Finnbarr P. Murphy,
originally from the <a href="http://blog.fpmurphy.com/2016/09/so-you-think-you-are-ready-for-the-rhcsa-exam.html">Musing of an OS plumber</a>
blog.</p>
<p>Again, this is awesome!</p>
<p>–</p>
<p>So you have studied hard, maybe even attended a week or two of formal training,
for the Red Hat Certified System Administrator exam and now you think you are
ready to take the actual examination.</p>
<p>Before you spend your money (currently $400) on the actual examination,
why not download this custom <a href="http://fpmurphy.com/public/RHCSA_SampleTest_1.ova">CentOS 7.2 VM</a>
and attempt a real world test of your RHCSA skills.</p>
<p>This VM, which is in the form of an OVA (Open Virtualization Archive),
will work with VMware Workstation 10 or later. Sorry, but if you want
to use the VM in other environments, you are going to have to figure
out how to do so, no support will be forthcoming from me. You will also
need network access from your host system to the default public CentOS repos.</p>
<p>There are twelve (12) tasks that you need to complete in 90 minutes from VM
power up. Most, if not all, of these tasks will probably appear in the real
exam. But first you must fix a problem booting the operating system and get
past the lost root password before you can read the file /TASKS which contains
the twelve tasks that you must complete. Oh, and by the way, networking and
package management are broken. You will have to get networking and package
management working in order to install some necessary packages.</p>
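<p>As a warm-up for that kind of task, these are the sort of first checks one might reach for (a sketch, not the exam answers; all commands are standard RHEL/CentOS 7 tooling):</p>
<pre><code class="language-bash"># Is networking up at all?
nmcli device status
ip addr show
# Can the package manager reach the CentOS repos?
yum repolist
yum install -y vim-enhanced   # any small test package will do
</code></pre>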
<p>Just like in the real examination, no answers are or will be provided.</p>
<p>If you cannot correctly complete all the tasks in 120 minutes or less,
you are absolutely NOT ready for the actual RHCSA exam.</p>
<p>Good luck!</p>
Debugging submissions errors in TripleO CILanding upstream submissions might be hard if you are not passing all the CI jobs that try to check that your code actually works. Let’s assume that CI is working properly without any kind of infra issue or without any...2016-08-25T00:00:00+00:00https://www.pubstack.com/blog/2016/08/25/debugging-submissions-errors-in-tripleo-ciCarlos Camacho<p>Landing upstream submissions might be hard if you are
not passing all the CI jobs that try to check that your
code actually works.</p>
<p>Let’s assume that CI is working properly, without any kind of
infra issue and without any error introduced by mistake from
other submissions. In that case, we might end up having something
like:</p>
<p><img src="/static/gerrit_failed_jobs.png" alt="" /></p>
<p>The first thing we should do is to double-check the
status of all the other jobs on the TripleO
CI status page. This can be checked on the following site:</p>
<p><a href="http://tripleo.org/cistatus.html">http://tripleo.org/cistatus.html</a></p>
<p>Also, we can get the job status by checking the Zuul dashboard.</p>
<p><a href="http://status.openstack.org/zuul/">http://status.openstack.org/zuul/</a></p>
<p>Or checking the TripleO test cloud nodepool.</p>
<p><a href="http://grafana.openstack.org/dashboard/db/nodepool-tripleo-test-cloud">http://grafana.openstack.org/dashboard/db/nodepool-tripleo-test-cloud</a></p>
<p>After checking that there are jobs passing CI, let’s check why our job
is not passing correctly.</p>
<p>For each job the folder structure should be similar to:</p>
<pre><code>[TXT] console.html
[DIR] logs/
[DIR] overcloud-cephstorage-0/
[DIR] overcloud-controller-0/
[DIR] overcloud-novacompute-0/
[ ] postci.txt.gz
[DIR] undercloud/
</code></pre>
<p>It’s possible to check the deployment status in the <code>console.html</code> file;
there you will see the result of all the deployment steps executed in
order to pass the CI job.</p>
<p>In case of having e.g. a failed deployment, you can check <code>postci.txt.gz</code>
to get the actual standard error from the deployment.</p>
<p>Also, the folders <code>overcloud-cephstorage-0</code>, <code>overcloud-controller-0</code> and
<code>overcloud-novacompute-0</code> contain each node’s <code>/var</code>, which
holds all the service logs.</p>
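<p>Since most files in those folders are gzipped in the log archive, <code>zgrep</code> saves a decompression step (a sketch; the exact log file names vary per job):</p>
<pre><code class="language-bash"># Search a node's system log without unpacking it first
zgrep -i "error" overcloud-controller-0/var/log/messages.gz | head -n 20
</code></pre>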
<p>Another useful tip is to fetch the whole job logs folder with wget and
grep it for the string <code>Error</code>.</p>
<pre><code class="language-bash">#Get the CI job folder i.e. using the following URL.
wget -e robots=off -r --no-parent http://logs.openstack.org/00/000000/0/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/xxxxxx/
#Parse:
grep -iR "Error: " *
</code></pre>
<p>You will probably see there something pointing out an error, and hopefully
will give you clues about the next steps to fix them and land your submissions
as soon as possible.</p>
BAND-AID for OOM issues with TripleO manual deploymentsThis post will explain how to fix OOM issues when using TripleO. If running free -m from your Undercloud or Overcloud nodes and getting some output like: [asdf@fdsa]$ free -m total used free shared buff/cache available Mem: 7668 5555 219...2016-08-23T00:00:00+00:00https://www.pubstack.com/blog/2016/08/23/oom-swap-fix-in-tripleoCarlos Camacho<p>This post will explain how to fix OOM issues when using TripleO.</p>
<p><img src="/static/bandaid.jpg" alt="" /></p>
<p>If running <code>free -m</code> from your Undercloud or Overcloud nodes and
getting some output like:</p>
<pre><code>[asdf@fdsa]$ free -m
total used free shared buff/cache available
Mem: 7668 5555 219 1065 1893 663
</code></pre>
<p>If, as in the example, there is no reference
to the swap size and/or usage, you might not be using swap
in your TripleO deployments. To enable it, you just have
to follow two steps.</p>
<p>First, in the Undercloud: when deploying stacks you might find
that heat-engine (4 workers) takes a lot of RAM, so for
specific usage peaks it can be useful to have a
swap file. In order to have this swap file created and used by the OS,
execute the following instructions in the Undercloud:</p>
<pre><code class="language-bash">#Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on start
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
</code></pre>
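<p>After those commands the swap should show up immediately; note that <code>count=4194304</code> blocks of <code>bs=1024</code> bytes is exactly 4 GiB. To confirm it is active:</p>
<pre><code class="language-bash">swapon -s        # the /swapfile entry should be listed
free -m          # the Swap row should now show ~4096 total
</code></pre>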
<p>The Overcloud nodes, the controller in particular, might also face
RAM usage peaks when deploying; in that case, create a swap file in each
Overcloud node by using the already existing “extraconfig swap”
template.</p>
<p>To achieve this second part, we just need to use the environment
file that loads the swap template in the resource
<a href="https://review.openstack.org/#/c/418273/">registry</a>
when deploying the overcloud.</p>
<p>Now, deploy your Overcloud as usual i.e.:</p>
<pre><code class="language-bash">cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/enable-swap.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
</code></pre>
<p>Bye bye OOM’s!!!!</p>
TripleO deep dive session #6 (Overcloud - Physical network)This is the sixth video from a series of “Deep Dive” sessions related to TripleO deployments. In this session Dan Prince will dig into the physical overcloud networks. So please, check the full session content on the TripleO YouTube channel....2016-08-15T12:00:00+00:00https://www.pubstack.com/blog/2016/08/15/tripleo-deep-dive-session-6Carlos Camacho<p>This is the sixth video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>In this session Dan Prince will dig into the physical overcloud networks.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=zYNq2uT9pfM">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/zYNq2uT9pfM" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #5 (Undercloud - Under the hood)This is the fifth video from a series of “Deep Dive” sessions related to TripleO deployments. In this session James Slagle and Steven Hardy will dig into some underlying aspects related to the TripleO Undercloud. This video session aims to...2016-08-05T20:00:00+00:00https://www.pubstack.com/blog/2016/08/05/tripleo-deep-dive-session-5Carlos Camacho<p>This is the fifth video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>In this session James Slagle and Steven Hardy will dig into
some underlying aspects related to the TripleO Undercloud.</p>
<p>This video session aims to cover the following sections:</p>
<ul>
<li>What is under the hood of a TripleO undercloud deployment.</li>
<li>Description of the undercloud components.</li>
<li>Show the undercloud components interaction.</li>
<li>The undercloud installation process.</li>
<li>Undercloud customization.</li>
<li>How to apply and test submissions in instack-undercloud.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=h32z6Nq8Byg">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/h32z6Nq8Byg" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #4 (Puppet modules)This is the fourth video from a series of “Deep Dive” sessions related to TripleO deployments. This session will cover a series of basic Puppet topics related to TripleO deployments. This video session aims to cover the following sections: Introduction...2016-08-01T10:00:00+00:00https://www.pubstack.com/blog/2016/08/01/tripleo-deep-dive-session-4Carlos Camacho<p>This is the fourth video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>This session will cover a series of basic Puppet topics related to
TripleO deployments.</p>
<p>This video session aims to cover the following sections:</p>
<ul>
<li>Introduction about Puppet OpenStack modules.</li>
<li>Services deployment using Puppet profiles.</li>
<li>Deployment composability with Heat.</li>
<li>Bring your own service to TripleO.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=-b4cdfzvFDY">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/-b4cdfzvFDY" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Responsiveness (Graphical description)Responsiveness (Graphical description)
Explanation: No comments.
Disclaimer.
2016-08-01T00:00:00+00:00https://www.pubstack.com/blog/2016/08/01/podCarlos Camacho<p>Responsiveness (Graphical description)</p>
<p><img src="/static/pod/2016-08-01-responsive.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Testing instack-undercloud submissions locallyThis post is to describe how to run/test gerrit submissions related to instack-undercloud locally. For this example I’m going to use this submission: https://review.openstack.org/#/c/347389/ The follwing steps allow to test the submissions related to instack-undercloud in a working environment. ./tripleo-ci/scripts/tripleo.sh...2016-07-26T00:00:00+00:00https://www.pubstack.com/blog/2016/07/26/testing-tripleo-undercloud-gerrit-submissionCarlos Camacho<p>This post is to describe how to run/test gerrit submissions related to instack-undercloud locally.</p>
<p>For this example I’m going to use this submission: https://review.openstack.org/#/c/347389/</p>
<p>The following steps allow you to test submissions related to instack-undercloud
in a working environment.</p>
<pre><code class="language-bash"> ./tripleo-ci/scripts/tripleo.sh --delorean-setup
./tripleo-ci/scripts/tripleo.sh --delorean-build openstack/instack-undercloud
cd tripleo/instack-undercloud/
#The submission to be tested
git review -d 347389
cd
./tripleo-ci/scripts/tripleo.sh --delorean-build openstack/instack-undercloud
rpm -qa | grep instack-undercloud
sudo rpm -e --nodeps <old_installed_instack-undercloud>
find tripleo/ -name "*rpm"
sudo rpm -iv --replacepkgs --force <located package>
 #Here we need to check that the changes were actually applied.
 #What I usually do is search for the updated files using locate
 #and manually check that the changes are OK.
sudo rm -rf /root/.cache/image-create/source-repositories/*
sudo rm -rf /opt/stack/puppet-modules
</code></pre>
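<p>Instead of hunting with <code>locate</code>, you can also let rpm tell you which installed files the package owns and spot-check the ones your patch touches (a sketch; <code>my-change-marker</code> is a placeholder for a string your change actually introduced):</p>
<pre><code class="language-bash"># Confirm the freshly built package is the one installed
rpm -qi instack-undercloud | head -n 3
# List the files it owns and grep them for a marker from your patch
rpm -ql instack-undercloud | xargs grep -l "my-change-marker" 2>/dev/null
</code></pre>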
<p>Now, in case a puppet-tripleo change is needed, you can export the environment
variables before re-installing the undercloud.</p>
<pre><code class="language-bash"> export DIB_INSTALLTYPE_puppet_tripleo=source
export DIB_REPOLOCATION_puppet_tripleo=https://review.openstack.org/openstack/puppet-tripleo
export DIB_REPOREF_puppet_tripleo=refs/changes/XX/XXXXX/X
</code></pre>
<p>Now, we just need to run the installer.</p>
<pre><code class="language-bash"> ./tripleo-ci/scripts/tripleo.sh --undercloud
</code></pre>
<p>Once this process completes, the output should be something similar to:</p>
<pre><code class="language-text">
#################
tripleo.sh -- Undercloud install - DONE.
#################
</code></pre>
Monday's coffee ensure => 'present'Monday’s coffee ensure => ‘present’
Explanation: No comments.
Disclaimer.
2016-07-25T00:00:00+00:00https://www.pubstack.com/blog/2016/07/25/podCarlos Camacho<p>Monday’s coffee ensure => ‘present’</p>
<p><img src="/static/pod/2016-07-25-no-coffee-no-brain.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
TripleO deep dive session #3 (Overcloud deployment debugging)This is the third video from a series of “Deep Dive” sessions related to TripleO deployments. This session is related to how to troubleshoot a failed THT deployment. This video session aims to cover the following topics: Debug a TripleO...2016-07-22T00:00:00+00:00https://www.pubstack.com/blog/2016/07/22/tripleo-deep-dive-session-3Carlos Camacho<p>This is the third video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>This session is related to how to troubleshoot a
failed THT deployment.</p>
<p>This video session aims to cover the following topics:</p>
<ul>
<li>Debug a TripleO failed overcloud deployment.</li>
<li>Debugging in real time the deployed resources.</li>
<li>Basic OpenStack commands to see the deployment status.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=fspnjD-1DNI">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/fspnjD-1DNI" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Climbing day in patonesClimbing day in patones
Explanation: No comments.
Disclaimer.
2016-07-22T00:00:00+00:00https://www.pubstack.com/blog/2016/07/22/podCarlos Camacho<p>Climbing day in patones</p>
<p><img src="/static/pod/2016-07-22-climbing-patones.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
When the shower has more commits than youWhen the shower has more commits than you
Explanation: No comments.
Disclaimer.
2016-07-21T00:00:00+00:00https://www.pubstack.com/blog/2016/07/21/podCarlos Camacho<p>When the shower has more commits than you</p>
<p><img src="/static/pod/2016-07-21-shower-more-commits-than-you.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Timeouts day..Timeouts day..
Explanation: No comments.
Disclaimer.
2016-07-20T00:00:00+00:00https://www.pubstack.com/blog/2016/07/20/podCarlos Camacho<p>Timeouts day..</p>
<p><img src="/static/pod/2016-07-20-owl-wet-face.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
Happy Tuesday!Happy Tuesday!
Explanation: No comments.
Disclaimer.
2016-07-19T00:00:00+00:00https://www.pubstack.com/blog/2016/07/19/podCarlos Camacho<p>Happy Tuesday!</p>
<p><img src="/static/pod/2016-07-19-owl-happy.jpg" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
TripleO deep dive session #2 (TripleO Heat Templates)This is the second video from a series of “Deep Dive” sessions related to TripleO deployments. This session is related to a THT overview for all users who want to dig into the project. This video session aims to cover...2016-07-18T00:00:00+00:00https://www.pubstack.com/blog/2016/07/18/tripleo-deep-dive-session-2Carlos Camacho<p>This is the second video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>This session is related to a THT overview
for all users who want to dig into the
project.</p>
<p>This video session aims to cover the following topics:</p>
<ul>
<li>A basic THT introduction and overview.</li>
<li>The template model used.</li>
<li>A description of the new composable services approach.</li>
<li>A code overview of the related code repositories.</li>
<li>A cloud deployment demo session.</li>
<li>A live deployment demo session covering debugging hints.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=gX5AKSqRCiU">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/gX5AKSqRCiU" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
The chapter 1The chapter 1
Explanation: No comments.
Disclaimer.
2016-07-18T00:00:00+00:00https://www.pubstack.com/blog/2016/07/18/podCarlos Camacho<p>The chapter 1</p>
<p><img src="/static/pod/2016-07-18-sleep.gif" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
PointersPointers
Explanation: No comments.
Disclaimer.
2016-07-15T00:00:00+00:00https://www.pubstack.com/blog/2016/07/15/podCarlos Camacho<p>Pointers</p>
<p><img src="/static/pod/2016-07-15-pointers.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
CompilingCompiling
Explanation: No comments.
Disclaimer.
2016-07-14T00:00:00+00:00https://www.pubstack.com/blog/2016/07/14/podCarlos Camacho<p>Compiling</p>
<p><img src="/static/pod/2016-07-14-compiling.png" alt="" /></p>
<p>Explanation: No comments.
<br /><a href="https://www.pubstack.com/disclaimer">Disclaimer.</a></p>
TripleO deep dive session #1 (Quickstart deployment)This is the first video from a series of “Deep Dive” sessions related to TripleO deployments. The first session is related to the TripleO deployment using Quickstart. Quickstart comes from RDO, to reduce the complexity of having a TripleO environment...2016-07-11T00:00:00+00:00https://www.pubstack.com/blog/2016/07/11/tripleo-deep-dive-session-1Carlos Camacho<p>This is the first video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>The first session is related to the TripleO deployment using
Quickstart.</p>
<p>Quickstart comes from <a href="http://www.rdoproject.org/">RDO</a>; it reduces the complexity of getting
a TripleO environment up quickly, mostly for users without a strong
and deep knowledge of TripleO configuration, and it uses Ansible roles
to automate all the different configuration tasks.</p>
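<p>A typical invocation looks like this (a sketch based on the tripleo-quickstart README of the time; <code>$VIRTHOST</code> stands for your virtualization host):</p>
<pre><code class="language-bash"># Fetch and run the quickstart script against a virt host you can ssh into
curl -O https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
bash quickstart.sh $VIRTHOST
</code></pre>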
<p>So please, check the full <a href="https://www.youtube.com/watch?v=E1d_RmysnA8">session</a> content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/E1d_RmysnA8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Last but not least, James Slagle (slagle) has posted some comments about
how to apply new changes in the puppet modules when deploying the overcloud,
as the current task of re-creating them is a time-consuming and cumbersome process.</p>
<p>Using the upload-puppet-modules script we will be able to update the puppet
modules when executing the overcloud deployment.</p>
<pre><code class="language-bash"># From the undercloud
mkdir puppet-modules
cd puppet-modules
git clone https://git.openstack.org/openstack/puppet-tripleo tripleo
# Edit as needed under the tripleo folder
cd
git clone https://git.openstack.org/openstack/tripleo-common
export PATH="$PATH:tripleo-common/scripts"
upload-puppet-modules --directory puppet-modules/
</code></pre>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
<pre><code class="language-text">---------------------------------------------------------------------------------------
| , . , |
| )-_'''_-( |
| ./ o\ /o \. |
| . \__/ \__/ . |
| ... V ... |
| ... - - - ... |
| . - - . |
| `-.....-´ |
| _______ _ _ ____ |
| |__ __| (_) | | / __ \ |
| | |_ __ _ _ __ | | ___| | | | |
| | | '__| | '_ \| |/ _ \ | | | |
| | | | | | |_) | | __/ |__| | |
| _____ |_|_| |_| .__/|_|\___|\____/ |
| | __ \ | | | __ \(_) |
| | | | | ___ ___|_|__ | | | |___ _____ |
| | | | |/ _ \/ _ \ '_ \ | | | | \ \ / / _ \ |
| | |__| | __/ __/ |_) | | |__| | |\ V / __/ |
| |_____/ \___|\___| .__/_ |_____/|_| \_/ \___| |
| | | (_) |
| ___ ___ __|_|__ _ ___ _ __ ___ |
| / __|/ _ \/ __/ __| |/ _ \| '_ \/ __| |
| \__ \ __/\__ \__ \ | (_) | | | \__ \ |
| |___/\___||___/___/_|\___/|_| |_|___/ |
| |
---------------------------------------------------------------------------------------
</code></pre>
The Venezuelan CuatroLet me present you a Venezuelan Cuatro. This magical device is a string (nylon) musical instrument tuned (ad’f#’b) typical from Venezuela, similar to the Ukelele in it’s shape, but their character and playing technique are vastly different. If you want...2016-07-07T00:00:00+00:00https://www.pubstack.com/blog/2016/07/07/venezuelan-cuatroCarlos Camacho<p>Let me present you a Venezuelan Cuatro.</p>
<p><img src="/static/cuatro/cuatro.jpg" alt="" /></p>
<p>This magical device is a nylon-string musical instrument,
tuned (ad’f#’b), typical of Venezuela. It is similar
to the Ukulele in its shape, but their character
and playing technique are vastly different.</p>
<p>If you want to learn more about this instrument,
you can download a complete <a href="/static/cuatro/cuatro-chords.pdf">chords sheet</a>
from The Ukulele Orchestra of Great Britain or
this <a href="/static/cuatro/cuatro-method.pdf">Cuatro method</a>.</p>
<p>You can also listen to these excellent videos from YouTube.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/NZ123ysut9s" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br /></p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/3JqMTEr1HIg" frameborder="0" allowfullscreen=""></iframe>
</div>
Openstack & TripleO deployment using Inlunch - DEPRECATEDToday I’m going to speak about the first Openstack installer I used to deploy TripleO. Inlunch, as its name aims it should make you “Get an Instack environment prepared for you while you head out for lunch.” The steps that...2016-07-07T00:00:00+00:00https://www.pubstack.com/blog/2016/07/07/tripleo-deployment-with-inluchCarlos Camacho<p>Today I’m going to speak about the first
OpenStack installer I used to deploy <a href="http://www.tripleo.org">TripleO</a>:
<a href="https://github.com/jistr/inlunch">Inlunch</a>.
As its name implies, it should “Get an Instack environment
prepared for you while you head out for lunch.”</p>
<p>The steps I usually run are:</p>
<ul>
<li>Connect to your remote server (your physical server) as
root, generate the id_rsa.pub file and append it to
the authorized_keys file.</li>
</ul>
<pre><code class="language-bash">ssh-keygen -t rsa
cd .ssh
cat id_rsa.pub >> authorized_keys
</code></pre>
<ul>
<li>Install some dependencies and clone <a href="https://github.com/jistr/inlunch">Inlunch</a>.</li>
</ul>
<pre><code class="language-bash">rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-7.noarch.rpm
sudo yum -y install git ansible nano
git clone https://github.com/jistr/inlunch
</code></pre>
<ul>
<li>Go to the inlunch folder and edit the answers
file to fit your needs. By default the answers file
creates 6 nodes with 5GB RAM each;
I usually change this to 3 nodes with 8GB RAM.</li>
</ul>
<pre><code class="language-bash">cd inlunch
vi answers.yml.example
</code></pre>
<ul>
<li>Last but not least, the final <a href="https://github.com/jistr/inlunch">Inlunch</a>
step is to deploy
our undercloud!!! As simple as it sounds. As you can
see, <a href="https://github.com/jistr/inlunch">Inlunch</a> uses
<a href="http://www.ansible.com/">Ansible</a> over SSH to automate
all the steps.
In this case we added the root public key to the same server,
so the installation points to localhost.</li>
</ul>
<pre><code class="language-bash">INLUNCH_ANSWERS=answers.yml.example INLUNCH_FQDN=localhost ./instack-virt.sh
</code></pre>
<ul>
<li>Once this last step finishes, you can
log in to the undercloud node by SSHing to the physical
server on port 2200.</li>
</ul>
<pre><code class="language-bash">ssh -p 2200 root@<your_server_fqdn_goes_here>
</code></pre>
<p>That’s it :) your undercloud is up and running.</p>
<p>Now I will show the steps to deploy
the master branch of tripleo-heat-templates and
finish the overcloud deployment.</p>
<ul>
<li>Log in as the stack user and source the stackrc file</li>
</ul>
<pre><code class="language-bash">su - stack
source stackrc
</code></pre>
<ul>
<li>Let’s clone all needed repositories.</li>
</ul>
<pre><code class="language-bash">git clone https://github.com/openstack/puppet-tripleo
git clone https://github.com/openstack/tripleo-docs
git clone https://github.com/openstack/tripleo-heat-templates
</code></pre>
<ul>
<li>Finally, let’s deploy the <a href="http://www.tripleo.org">TripleO</a> pacemaker environment.</li>
</ul>
<pre><code class="language-bash"> openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server clock.redhat.com \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
</code></pre>
<p>You should now have successfully deployed your undercloud/overcloud environment using <a href="https://github.com/jistr/inlunch">Inlunch</a>.</p>
<p>Thanks <a href="https://github.com/jistr/">Jiri</a> for this amazing installer!!</p>
Connecting the ADXL345 accelerometer to the Raspberry Pi 3Since a few months I’m kind of interested in research topics related to displacement calculation based on time series acceleration. This basically shows that when something starts moving it displacement depends on the velocity and the time that it has...2016-07-05T00:00:00+00:00https://www.pubstack.com/blog/2016/07/05/accelerometer-introCarlos Camacho<p>For a few months I have been interested
in research topics related to displacement
calculation based on time-series acceleration data.
When something starts moving, its displacement
depends on its velocity and the time it has been
moving; but since the velocity changes continuously,
we take this variation as the object’s acceleration
(meters per second per second).
In this case we will use the accelerometer to estimate
the position relative to a point p, which is derived
by double integration of the acceleration readings.</p>
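<p>As a minimal sketch of the double-integration idea (the sample rate and acceleration values below are hypothetical, not real sensor output), the acceleration samples can be integrated twice with the trapezoidal rule:</p>
<pre><code class="language-python"># Minimal sketch: estimate displacement from a time series of
# acceleration samples by integrating twice (trapezoidal rule).
# All sample values here are hypothetical, not real sensor output.

def integrate(samples, dt):
    # Cumulative trapezoidal integral of evenly spaced samples
    total = 0.0
    out = [0.0]
    for prev, cur in zip(samples, samples[1:]):
        total += 0.5 * (prev + cur) * dt
        out.append(total)
    return out

dt = 0.1  # seconds between samples (hypothetical sample rate)
accel = [0.0, 1.0, 1.0, 1.0, 0.0]  # m/s^2, hypothetical readings
velocity = integrate(accel, dt)         # first integration: m/s
displacement = integrate(velocity, dt)  # second integration: m

print(velocity[-1], displacement[-1])
</code></pre>
<p>In practice the sensor noise accumulates quickly through both integrations, which is why this naive approach only gives a rough relative position.</p>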
<p>Until now we have no speed, time or acceleration metrics so
we are going to start from the very beginning.</p>
<p>The items needed for this project are;
<a href="https://www.amazon.es/gp/product/B005I4QCB4">Solder</a>,
<a href="https://www.amazon.es/gp/product/B001BMSBD4">Solder support</a>,
<a href="https://www.amazon.es/gp/product/B000LFTN1G">Solder wire</a>,
<a href="https://www.amazon.es/gp/product/B00CIOVF8W">Flux</a>,
<a href="https://www.amazon.es/gp/product/B0144HG2RE">Jumpers kit</a>,
<a href="https://www.amazon.es/gp/product/B01CD5VC92">Raspberry Pi 3</a>,
<a href="https://www.amazon.es/gp/product/B00W7S1BFG">Raspberry Pi 3 case</a>,
<a href="https://www.amazon.es/gp/product/B0144HFO0A">GPIO board</a> and the
<a href="https://www.amazon.es/gp/product/B0151FIBZO">ADXL345 sensor</a>.</p>
<p>Total budget for this project: 105.73 EUR.</p>
<p>First, let’s connect our ADXL345 accelerometer to the Raspberry Pi by wiring
the jumpers as follows:</p>
<table>
<thead>
<tr>
<th>Raspberry GPIO pin</th>
<th>ADXL345 pin</th>
</tr>
</thead>
<tbody>
<tr>
<td>GND</td>
<td>GND</td>
</tr>
<tr>
<td>3V</td>
<td>3V</td>
</tr>
<tr>
<td>SDA</td>
<td>SDA</td>
</tr>
<tr>
<td>SCL</td>
<td>SCL</td>
</tr>
</tbody>
</table>
<p>This will lead to something like:
<img src="/static/accel/accelerometer-01-build.jpeg" alt="" /></p>
<p>Or a wider view:
<img src="/static/accel/accelerometer-00-build.jpeg" alt="" /></p>
<p>Once the Raspberry Pi is wired, let’s configure it with the following steps.</p>
<ul>
<li>From our Raspberry Pi, install:</li>
</ul>
<pre><code class="language-bash">sudo apt-get install python-smbus i2c-tools
</code></pre>
<ul>
<li>Enable the I2C kernel module in the Raspberry Pi:</li>
</ul>
<pre><code class="language-bash">sudo raspi-config
</code></pre>
<p>Now enable the I2C kernel module under:
Advanced Options -> I2C -> Would you like the ARM module.. -> Would you like it enabled by default..</p>
<ul>
<li>Edit the modules file (sudo vim /etc/modules) and make sure it contains the following lines:</li>
</ul>
<pre><code class="language-bash">i2c-bcm2708
i2c-dev
</code></pre>
<ul>
<li>Remove I2C from the blacklist file (/etc/modprobe.d/raspi-blacklist.conf)
by commenting out the following line if it appears:</li>
</ul>
<pre><code class="language-bash">#blacklist i2c-bcm2708
</code></pre>
<ul>
<li>After all these previous steps, reboot the Raspberry Pi</li>
</ul>
<pre><code class="language-bash">sudo reboot
</code></pre>
<ul>
<li>Test the connection to the I2C module:</li>
</ul>
<pre><code class="language-bash">sudo i2cdetect -y 1
</code></pre>
<p>The command should print the following output:
<img src="/static/accel/accelerometer-02-port-test.png" alt="" /></p>
<p>Now that the module is working properly, we need to get and use
the Python ADXL345 library to access the time-based data.</p>
<ul>
<li>Clone the library repository and execute the example code:</li>
</ul>
<pre><code class="language-bash">git clone https://github.com/pimoroni/adxl345-python
cd adxl345-python
sudo python example.py
</code></pre>
<p>The command’s output should be:
<img src="/static/accel/accelerometer-03-g-test.png" alt="" /></p>
<p>This shows the G forces on each axis.</p>
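<p>Once you have per-axis readings in g, a quick sanity check is that at rest only gravity acts on the sensor, so the magnitude of the acceleration vector should be close to 1g. A minimal sketch with hypothetical readings (not real ADXL345 output):</p>
<pre><code class="language-python">import math

# Hypothetical per-axis readings in g (not real ADXL345 output)
x, y, z = 0.02, -0.01, 0.99

# At rest, the acceleration vector magnitude should be close to 1 g
magnitude = math.sqrt(x * x + y * y + z * z)
print("magnitude: %.3f g" % magnitude)
</code></pre>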
<p>That’s it for the first part of the tutorial.
Future posts will dig into real-time data processing.</p>
TripleO manual deployment - DEPRECATEDThis is a brief recipe about how to manually install TripleO in a remote 32GB RAM box. From the hypervisor run: #In this dev. env. /var is only 50GB, so I will create #a sym link to another location with...2016-07-04T00:00:00+00:00https://www.pubstack.com/blog/2016/07/04/manually-installing-tripleo-recipeCarlos Camacho<p>This is a brief recipe on how to
manually install TripleO on a remote
box with 32GB of RAM.</p>
<p>From the hypervisor run:</p>
<pre><code class="language-bash"> #In this dev. env. /var is only 50GB, so I will create
 #a sym link to another location with more capacity.
 #It will easily take more than 50GB to deploy a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
#Add default stack user
sudo useradd stack
echo "stack:stack" | chpasswd
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
su - stack
sudo yum -y install epel-release
sudo yum -y install yum-plugin-priorities
export TRIPLEO_ROOT=/home/stack
export TRIPLEO_RELEASE=rdo-trunk-master-tripleo
#export TRIPLEO_RELEASE=rdo-trunk-newton-tested
export TRIPLEO_RELEASE_DEPS=centos7
#export TRIPLEO_RELEASE_DEPS=centos7-newton
#Repository configured pointing to above release!
sudo curl -o /etc/yum.repos.d/delorean.repo https://buildlogs.centos.org/centos/7/cloud/x86_64/$TRIPLEO_RELEASE/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/$TRIPLEO_RELEASE_DEPS/delorean-deps.repo
#Configure the undercloud deployment
export NODE_DIST=centos7
export NODE_CPU=4
export NODE_MEM=9000
export NODE_COUNT=6
export UNDERCLOUD_NODE_CPU=4
export UNDERCLOUD_NODE_MEM=9000
export FS_TYPE=ext4
sudo yum install -y instack-undercloud
instack-virt-setup
</code></pre>
<p>From the hypervisor, run the following command to log in to
the undercloud:</p>
<pre><code class="language-bash"> ssh root@`sudo virsh domifaddr instack | grep $(tripleo get-vm-mac instack) | awk '{print $4}' | sed 's/\/.*$//'`
</code></pre>
<p>From the undercloud we will install all the
packages:</p>
<pre><code class="language-bash"> #Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it at boot (sudo does not apply to shell redirections, so use tee)
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
#Login as the stack user
su - stack
export TRIPLEO_ROOT=/home/stack
sudo yum -y install yum-plugin-priorities
export TRIPLEO_RELEASE=rdo-trunk-master-tripleo
#export TRIPLEO_RELEASE=rdo-trunk-newton-tested
export TRIPLEO_RELEASE_BRANCH=master
#export TRIPLEO_RELEASE_BRANCH=stable/newton
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="https://buildlogs.centos.org/centos/7/cloud/x86_64/$TRIPLEO_RELEASE/"
export DELOREAN_REPO_FILE="delorean.repo"
export FS_TYPE=ext4
git clone -b $TRIPLEO_RELEASE_BRANCH https://github.com/openstack/tripleo-heat-templates
git clone https://github.com/openstack-infra/tripleo-ci.git
./tripleo-ci/scripts/tripleo.sh --all
# The last command will execute:
# repo_setup --repo-setup
# undercloud --undercloud
# overcloud_images --overcloud-images
# register_nodes --register-nodes
# introspect_nodes --introspect-nodes
# overcloud_deploy --overcloud-deploy
</code></pre>
<p>Once the undercloud is fully installed, deploy an overcloud.
(The last command should already have created an overcloud; this step is
only needed if you want to deploy another one.)</p>
<pre><code class="language-bash">cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
#Also can be added:
#--control-scale 3 \
#--compute-scale 3 \
#--ceph-storage-scale 1 -e /home/stack/tripleo-heat-templates/environments/storage-environment.yaml
</code></pre>
<p>This should deploy the TripleO overcloud; if not,
refer to the <a href="http://tripleo.org/troubleshooting/troubleshooting.html">troubleshooting</a> section of the official
site.</p>
<pre><code class="language-bash">#Configure a DNS server for the overcloud subnet; do this before deploying the overcloud
neutron subnet-update `neutron subnet-list -f value | awk '{print $1}'` --dns-nameserver 192.168.122.1
</code></pre>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/02/23:</strong> instack-virt-setup is deprecated :( moving to tripleo-quickstart.</p>
<p><strong>Updated 2016/11/25:</strong> the instack-virt-setup environment variables now have sane defaults, so they are optional.</p>
</blockquote>
</div>
Connecting from your local machine to the TripleO overcloud horizon dashboardThis will be my first blog post about TripleO deployments. The goal of this post is to show how to chain multiple ssh tunnels to browse into the horizon dashboard, deployed in a TripleO environment. In this case, we have...2016-07-02T00:00:00+00:00https://www.pubstack.com/blog/2016/07/02/ssh-multi-hop-tripleoCarlos Camacho<p>This will be my first blog post about TripleO deployments.</p>
<p>The goal of this post is to show how to chain multiple ssh
tunnels to browse into the horizon dashboard, deployed
in a TripleO environment.</p>
<p>In this case, we have deployed <a href="http://www.tripleo.org/">TripleO</a>
on a remote server, “labserver”, on which an undercloud
and an overcloud were deployed.</p>
<p>The Horizon dashboard listens on port 80 of the overcloud controller.
We want to reach it from the user’s terminal, but it is currently
unreachable because the deployed private IPs are not accessible
from there.</p>
<p>Below is a graphical representation of the described scenario.
<img src="/static/multi-hop.png" alt="" /></p>
<h2 id="steps">STEPS</h2>
<ul>
<li>Connect the local terminal to labserver (create the first tunnel)</li>
</ul>
<pre><code class="language-bash">#Forward incoming 38080 traffic to local 38080 in the hypervisor
#labserver must be a reachable host
ssh -L 38080:localhost:38080 root@labserver
</code></pre>
<ul>
<li>Connect to the undercloud from the labserver (create the second tunnel)</li>
</ul>
<pre><code class="language-bash">#Log-in as the stack user and get the undercloud IP
su - stack
undercloudIp=`sudo virsh domifaddr instack | grep $(tripleo get-vm-mac instack) | awk '{print $4}' | sed 's/\/.*$//'`
#Forward incoming 38080 traffic to local 38080 in the undercloud
ssh -L 38080:localhost:38080 root@$undercloudIp
</code></pre>
<ul>
<li>Get the admin password for the Horizon dashboard</li>
</ul>
<pre><code class="language-bash">su - stack
source stackrc
cat overcloudrc |grep OS_PASSWORD | awk -F '=' '{print $2}'
</code></pre>
<ul>
<li>Connect to the overcloud controller from the undercloud (create the third and last tunnel)</li>
</ul>
<pre><code class="language-bash">#Get the controller IP
controllerIp=`nova list | grep controller | awk -F '|' '{print $7}' | awk -F '=' '{print $2}'`
#Forward incoming 38080 traffic to controller IP in 80 port
ssh -L 38080:"$controllerIp":80 heat-admin@"$controllerIp"
echo "From your browser open: http://localhost:38080/"
</code></pre>
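<p>As an alternative to chaining the tunnels by hand, recent OpenSSH versions can express the whole path declaratively with ProxyJump. A hypothetical ~/.ssh/config sketch (the host aliases and IP placeholders below are assumptions; fill them in with the values gathered in the steps above):</p>
<pre><code># Hypothetical ~/.ssh/config fragment -- aliases and IPs are placeholders
Host labserver
    User root

Host undercloud
    HostName <undercloud_ip>   # as printed by virsh domifaddr on labserver
    User root
    ProxyJump labserver

Host horizon
    HostName <controller_ip>   # as printed by nova list on the undercloud
    User heat-admin
    ProxyJump undercloud
    LocalForward 38080 localhost:80
</code></pre>
<p>With that in place, a single "ssh horizon" should open all three hops and the port forward in one go.</p>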
<p>Now, if everything went as expected, you should be able to see
the Horizon dashboard by opening http://localhost:38080/dashboard in your
favorite browser, then logging in as the admin user with the password printed before.</p>
<p>Note that in this case, the SSH hops are done with different users by default.</p>