Pubstack Blog - TripleO related articles2023-10-06T13:39:11+00:00https://www.pubstack.com/Carlos CamachoTripleO deep dive session #14 (Containerized deployments without paunch)2020-02-18T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2020/02/18/tripleo-deep-dive-session-14<p>This is the 14th release of the <a href="http://www.tripleo.org/">TripleO</a>
“Deep Dive” sessions.</p>
<p>Thanks to <a href="http://my1.fr/blog">Emilien Macchi</a>
for this deep dive session about the status of the containerized deployment without Paunch.</p>
<p>You can access the <a href="https://docs.google.com/presentation/d/1dndHde25r8MPSdakLp9y5ztL3d6jXmCy-JfhIc6bJbo/edit">presentation</a>.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=D18RaSBGyQU">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/D18RaSBGyQU" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br />
<br /></p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a>
to access all available content.</p>
Kubebox - Tray2019-05-22T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2019/05/22/kubebox-01-tray<p>The components tray is the enclosure part that lets the user
allocate motherboards, GPUs, FPGAs, and disk arrays.</p>
<p>Up to 8 trays fit in the same enclosure.</p>
<div id="kubebox_tray.glb" style="height: 100%; height:400px; display: flex; align-items: center; justify-content: center;"></div>
<script>
init("static/kubebox/", "kubebox_tray.glb");
</script>
<p>Front view of the generic tray.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_00.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: left; width: 100px; background: white;"><img width="100px" src="/static/kubebox/kubebox_tray_01.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_02.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: left; width: 100px; background: white;"><img width="100px" src="/static/kubebox/kubebox_tray_03.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_04.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: left; width: 100px; background: white;"><img width="100px" src="/static/kubebox/kubebox_tray_05.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<div style="float: right; width: 400px; background: white;"><img width="400px" src="/static/kubebox/kubebox_tray_06.png" alt="" style="border:15px solid #FFF" /></div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
<p><strong><em>Comments are welcome as usual, thank you!</em></strong></p>
<h2 id="go-to-the-project-index">Go to the <a href="https://www.pubstack.com/blog/2019/05/21/kubebox.html">project index</a></h2>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2019/05/22:</strong> Initial version.</p>
</blockquote>
</div>
The Kubernetes in a box project2019-05-21T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2019/05/21/kubebox<p>Implementing cloud computing solutions that run in hybrid environments might
be the right answer when it comes to finding the best benefit/cost
ratio.</p>
<p>This post will be the main thread to build and describe the KIAB/Kubebox
project (<a href="http://www.kubebox.org">www.kubebox.org</a> and/or <a href="http://www.kiab.org">www.kiab.org</a>).</p>
<p>Spoiler alert!</p>
<p><img src="/static/kubebox/kubebox.jpg" alt="" /></p>
<h1 id="the-name">The name</h1>
<p>First things first, the name. I have two names in mind with the same meaning.
The first one is KIAB (Kubernetes In A Box); this name came to my mind from
the <a href="https://es.wikipedia.org/wiki/Kiai">Kiai</a>
sound made by karatekas (practitioners of karate).
The second one is more
traditional, “Kubebox”. I have no preference, but
it would be awesome if you helped me
decide the official name for this project.</p>
<p><strong><em>Add a comment and contribute to select the project name!</em></strong></p>
<h1 id="introduction">Introduction</h1>
<p>This project is about integrating off-the-shelf, market-available
devices to run cloud software as an appliance.</p>
<p>The proof of concept delivered in this series of posts will allow people
to put a well-known set of hardware devices into a single chassis to
create their own cloud appliances for research and development,
continuous integration, testing, home labs, staging or production-ready
environments, or simply just for fun.</p>
<p>Hereby I humbly present to you the design of KubeBox/KIAB,
an open chassis specification for building cloud appliances.</p>
<p>The case enclosure is fully designed and hopefully in the last phases
before building the first set of enclosures; the posts will appear
as I find free cycles for writing the overall description.</p>
<h1 id="use-cases">Use cases</h1>
<p>Several use cases can be defined to run on a KubeBox chassis.</p>
<ul>
<li>AWS Outposts.</li>
<li>Development environments.</li>
<li>EDGE.</li>
<li>Production Environments for small sites.</li>
<li>GitLab CI integration.</li>
<li>Demos for summits and conferences.</li>
<li>R&amp;D: FPGA usage, deep learning, AI, TensorFlow, among many others.</li>
<li>Marketing WOW effect.</li>
<li>Training.</li>
</ul>
<h1 id="enclosure-design">Enclosure design</h1>
<p>The enclosure is designed as a rackable 7U unit.
It tries to minimize the space needed to deploy
a cluster of up to 8 nodes with redundancy for both power and networking.</p>
<h1 id="cloud-appliance-description">Cloud appliance description</h1>
<p>This build will be described across several sub-posts
linked from this main thread.
The posts will be created in no particular order,
depending on my availability.</p>
<ul>
<li>Backstory and initial parts selection.</li>
<li>Designing the case part 1: Design software.</li>
<li>A brief introduction to CAD software.</li>
<li>Designing the case part 2: U’s, brakes, and ghosts.</li>
<li>Designing the case part 3: Sheet thickness and bend radius.</li>
<li>Designing the case part 4: Parts Allowance (finish, tolerance, and fit).</li>
<li>Designing the case part 5: Vent cutouts and frickin’ laser beams!</li>
<li>Designing the case part 6: Self-clinching nuts and standoffs.</li>
<li>Designing the case part 7: The standoffs strike back.</li>
<li>A brief primer on screws and PEMSERTs.</li>
<li>Designing the case part 8: Implementing PEMSERTs and screws.</li>
<li>Designing the case part 9: Bend reliefs and flat patterns.</li>
<li><a href="https://www.pubstack.com/blog/2019/05/22/kubebox-01-tray.html">Designing the case part 10: Tray caddy, to be used with GPU, Mother boards, disks, any other peripherals you want to add to the enclosure.</a></li>
<li>Designing the case part 11: Components rig.</li>
<li>Designing the case part 12: Power supply.</li>
<li>Designing the case part 13: Networking.</li>
<li>Designing the case part 14: 3D printed supports.</li>
<li>Designing the case part 15: Adding computing power.</li>
<li>Designing the case part 16: Adding Storage.</li>
<li>Designing the case part 17: Front display and bastion for automation.</li>
<li>Manufacturing the case part 1: PEMSERT installation.</li>
<li>Manufacturing the case part 2: Bending metal.</li>
<li>Manufacturing the case part 3: Bending metal.</li>
<li>KubeBox cloud appliance in detail!</li>
<li>Manufacturing the case part 0: Getting quotes.</li>
<li>Manufacturing the case part 1: Getting the cases.</li>
<li>Software deployments: Reference architecture.</li>
<li>Design final source files for the enclosure design.</li>
<li>KubeBox is fully functional.</li>
</ul>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2019/05/21:</strong> Initial version.</p>
</blockquote>
</div>
Running Relax-and-Recover to save your OpenStack deployment2019-05-20T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2019/05/20/relax-and-recover-backups<p>ReaR is a pretty impressive disaster recovery
solution for Linux. Relax-and-Recover creates both a
bootable rescue image and a backup of the files you choose.</p>
<p><img src="/static/ReAR_and_OpenStack.png" alt="" /></p>
<p>When doing disaster recovery of a system, this rescue image plays
the files back from the backup, restoring the system to
its latest state in the twinkling of an eye.</p>
<p>Various configuration options are available for the rescue image.
For example, slim ISO files, USB sticks or even images for PXE
servers can be generated. Just as many backup options are possible:
starting with a simple archive file (e.g. *.tar.gz),
various backup technologies such as IBM Tivoli Storage Manager (TSM),
EMC NetWorker (Legato), Bacula or even Bareos can be used.</p>
<p>ReaR, written in Bash, enables the skilful
distribution of the rescue image and, if necessary, the archive file via
NFS, CIFS (SMB) or another transport method over the network.
The actual recovery process then takes place via this transport route.</p>
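<p>As an illustration of the output options above, the same <code>/etc/rear/local.conf</code> could target a USB stick instead of an ISO. This is only a sketch based on the upstream ReaR examples; adapt the device to your system:</p>
<pre><code class="language-bash"># A minimal sketch for the "USB stick" output mentioned above.
# It assumes the stick was prepared first with "rear format /dev/sdX",
# which labels the device REAR-000.
OUTPUT=USB
BACKUP=NETFS
BACKUP_URL=usb:///dev/disk/by-label/REAR-000
</code></pre>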
<p>In this specific case, due to the nature of the OpenStack deployment we will
choose the protocols that are allowed by default in the iptables rules (SSH and SFTP in particular).</p>
<p>But enough with the theory, here’s a practical example of one of many possible configurations.
We will apply this specific use of ReaR to recover
a failed control plane after a critical maintenance task (like an upgrade).</p>
<p><strong>01 - Prepare the Undercloud backup bucket.</strong></p>
<p>We need to prepare the place to store the backups from the Overcloud.
From the Undercloud, check that you have enough space to make the backups
and prepare the environment. We will also create a dedicated user in the
Undercloud so the backups can be pushed from the
controllers or the compute nodes.</p>
<pre><code class="language-bash">groupadd backup
mkdir /data
useradd -m -g backup -d /data/backup backup
echo "backup:backup" | chpasswd
chown -R backup:backup /data
chmod -R 755 /data
</code></pre>
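<p>The space check mentioned above can be scripted as well. A minimal sketch; the 50GB threshold is an arbitrary assumption, size it to your control plane:</p>
<pre><code class="language-bash"># Sketch: check that a directory has at least ~50GB free before
# pushing backups to it.
enough_space() {
  local dir="$1" need_kb=$((50 * 1024 * 1024))
  local avail_kb
  avail_kb=$(df --output=avail -k "$dir" | tail -1 | tr -d ' ')
  [ -n "$avail_kb" ] && [ "$avail_kb" -ge "$need_kb" ]
}

enough_space /data && echo "enough space for the backups" || echo "free some space under /data first"
</code></pre>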
<p><strong>02 - Run the backup from the Overcloud nodes.</strong></p>
<p>Let’s install some required packages and run some previous
configuration steps.</p>
<pre><code class="language-bash">#Install packages
sudo yum install rear genisoimage syslinux lftp wget -y
#Make sure you are able to use sshfs to store the ReaR backup
sudo yum install fuse -y
sudo yum groupinstall "Development tools" -y
wget http://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/f/fuse-sshfs-2.10-1.el7.x86_64.rpm
sudo rpm -i fuse-sshfs-2.10-1.el7.x86_64.rpm
sudo mkdir -p /data/backup
sudo sshfs -o allow_other backup@undercloud-0:/data/backup /data/backup
#Use backup password, which is... backup
</code></pre>
<p>Now, let’s configure ReaR config file.</p>
<pre><code class="language-bash">#Configure ReaR
sudo tee -a "/etc/rear/local.conf" > /dev/null <<'EOF'
OUTPUT=ISO
OUTPUT_URL=sftp://backup:backup@undercloud-0/data/backup/
BACKUP=NETFS
BACKUP_URL=sshfs://backup@undercloud-0/data/backup/
BACKUP_PROG_COMPRESS_OPTIONS=( --gzip )
BACKUP_PROG_COMPRESS_SUFFIX=".gz"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/data/*' )
EOF
</code></pre>
<p>Now run the backup; this should create an ISO image on
the Undercloud node (under /data/backup/).</p>
<p><strong>You will be asked for the backup user password</strong></p>
<pre><code class="language-bash">sudo rear -d -v mkbackup
</code></pre>
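<p>Before simulating any failure, it can help to verify that the backup artifacts actually landed in the bucket. This is only a sketch: the exact file names and per-host subdirectories depend on your ReaR configuration, so the patterns below are assumptions:</p>
<pre><code class="language-bash"># Sketch: check that the ReaR run produced at least one rescue ISO
# and one compressed archive under the backup directory.
backup_ok() {
  local dir="$1"
  local isos tars
  isos=$(find "$dir" -name '*.iso' | wc -l)
  tars=$(find "$dir" -name '*.tar.gz' | wc -l)
  [ "$isos" -ge 1 ] && [ "$tars" -ge 1 ]
}

if backup_ok /data/backup; then
  echo "backup artifacts present"
else
  echo "backup artifacts missing, check the rear logs"
fi
</code></pre>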
<p>Now, simulate a failure xD (do this only in a disposable test environment):</p>
<pre><code># sudo rm -rf /lib
</code></pre>
<p>After the ISO image is created, we can proceed to
verify we can restore it from the Hypervisor.</p>
<p><strong>03 - Prepare the hypervisor.</strong></p>
<pre><code class="language-bash"># Enable the use of fusefs for the VMs on the hypervisor
setsebool -P virt_use_fusefs 1
# Install some required packages
sudo yum install -y fuse-sshfs
# Mount the Undercloud backup folder to access the images
mkdir -p /data/backup
sudo sshfs -o allow_other root@undercloud-0:/data/backup /data/backup
ls /data/backup/*
</code></pre>
<p><strong>04 - Stop the damaged controller node.</strong></p>
<pre><code class="language-bash">virsh shutdown controller-0
# virsh destroy controller-0
# Wait until it is down
watch virsh list --all
# Backup the guest definition
virsh dumpxml controller-0 > controller-0.xml
cp controller-0.xml controller-0.xml.bak
</code></pre>
<p>Now, we need to change the guest definition to boot from the ISO file.</p>
<p>Edit controller-0.xml: find the OS section, add the CDROM boot device, and enable the boot menu.</p>
<pre><code>&lt;os&gt;
  &lt;boot dev='cdrom'/&gt;
  &lt;boot dev='hd'/&gt;
  &lt;bootmenu enable='yes'/&gt;
&lt;/os&gt;
</code></pre>
<p>Edit the devices section and add the CDROM.</p>
<pre><code>&lt;disk type='file' device='cdrom'&gt;
  &lt;driver name='qemu' type='raw'/&gt;
  &lt;source file='/data/backup/rear-controller-0.iso'/&gt;
  &lt;target dev='hdc' bus='ide'/&gt;
  &lt;readonly/&gt;
  &lt;address type='drive' controller='0' bus='1' target='0' unit='0'/&gt;
&lt;/disk&gt;
</code></pre>
<p>Update the guest definition.</p>
<pre><code class="language-bash">virsh define controller-0.xml
</code></pre>
<p>Restart and connect to the guest.</p>
<pre><code class="language-bash">virsh start controller-0
virsh console controller-0
</code></pre>
<p>You should be able to see the boot menu to start the recover process, select Recover controller-0 and follow the instructions.</p>
<p><img src="/static/ReAR1.PNG" alt="" /></p>
<p>Now, before proceeding to run the controller restore, it’s possible that
the host undercloud-0 can’t be resolved; if so, just run:</p>
<pre><code class="language-bash">echo "192.168.24.1 undercloud-0" >> /etc/hosts
</code></pre>
<p>Having resolved the Undercloud host, just follow the wizard, Relax And Recover :)</p>
<p>You should see a message like:</p>
<pre><code>Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
RESCUE controller-0:~ # rear recover
</code></pre>
<p><img src="/static/ReAR2.PNG" alt="" /></p>
<p>The image restore should progress quickly.</p>
<p><img src="/static/ReAR3.PNG" alt="" /></p>
<p>Continue to see the restore evolution.</p>
<p><img src="/static/ReAR4.PNG" alt="" /></p>
<p>Now, each time it reboots, the node will have the ISO file
as the first boot option, so that is something we need to fix.
In the meantime, let’s check if the restore went fine.</p>
<p>Reboot the guest booting from the hard disk.
<img src="/static/ReAR5.PNG" alt="" /></p>
<p>Now we can see that the guest VM started successfully.
<img src="/static/ReAR6.PNG" alt="" /></p>
<p>Now we need to restore the guest to its original definition,
so from the Hypervisor we need to restore the <code>controller-0.xml.bak</code>
file we created.</p>
<pre><code class="language-bash">#From the Hypervisor
virsh shutdown controller-0
watch virsh list --all
virsh define controller-0.xml.bak
virsh start controller-0
</code></pre>
<p>Enjoy.</p>
<h2 id="considerations">Considerations:</h2>
<ul>
<li>Space.</li>
<li>Multiple protocols are supported, but some would require updating the firewall rules; that’s why I preferred SFTP.</li>
<li>Network load when moving data.</li>
<li>Shutdown/Starting sequence for HA control plane.</li>
<li>Do we need to backup the data plane?</li>
<li>User workloads should be handled by third-party backup software.</li>
</ul>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2019/05/20:</strong> Initial version.</p>
<p><strong>2019/06/18:</strong> Appeared in <a href="https://superuser.openstack.org/articles/tutorial-rear-openstack-deployment/">OpenStack Superuser blog.</a></p>
</blockquote>
</div>
TripleO - Deployment configurations2019-02-05T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2019/02/05/tripleo-quickstart-deployments<p>This post is a summary of the deployments I usually test for deploying TripleO
using quickstart.</p>
<p><img src="/static/dude-just-deploy-it-already.jpg" alt="" /></p>
<p>The following steps need to run in the Hypervisor node
in order to deploy both the Undercloud and the Overcloud.</p>
<p>You need to execute them one after the other; the idea of this recipe is to
have something you can just copy and paste.</p>
<p>Once the last step ends you should be able to connect to the
Undercloud VM and start operating your Overcloud deployment.</p>
<p>The usual steps are:</p>
<p><strong>01 - Prepare the hypervisor node.</strong></p>
<p>Now, let’s install some dependencies. Run these commands
on the Hypervisor node as the <code>root</code> user.</p>
<pre><code class="language-bash"># In this dev. env. /var is only 50GB, so I will create
# a sym link to another location with more capacity.
# It will easily take more than 50GB to deploy a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
# Disable IPv6 lookups
# sudo bash -c "cat >> /etc/sysctl.conf" << EOL
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
# EOL
# sudo sysctl -p
# Enable IPv6 in kernel cmdline
# sed -i s/ipv6.disable=1/ipv6.disable=0/ /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
sudo yum install libvirt-python python-lxml libvirt -y
</code></pre>
<p><strong>02 - Create the toor user (from the Hypervisor node, as root).</strong></p>
<pre><code class="language-bash">sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
| sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
cat .ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
echo '127.0.0.1 127.0.0.2' | sudo tee -a /etc/hosts
export VIRTHOST=127.0.0.2
ssh root@$VIRTHOST uname -a
</code></pre>
<p>Now, continue as the <code>toor</code> user and prepare the Hypervisor node
for the deployment.</p>
<p><strong>03 - Clone repos and install deps.</strong></p>
<pre><code class="language-bash">git clone \
https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
bash ./tripleo-quickstart/quickstart.sh \
--install-deps
sudo setenforce 0
</code></pre>
<p>Export some variables used in the deployment command.</p>
<p><strong>04 - Export common variables.</strong></p>
<pre><code class="language-bash">export CONFIG=~/deploy-config.yaml
export VIRTHOST=127.0.0.2
</code></pre>
<p>Now we will create the configuration file used for the deployment;
depending on the file you choose, you will deploy different environments.</p>
<p><strong>05 - Click on the environment description to expand the recipe.</strong></p>
<details>
<summary><strong>OpenStack [Containerized &amp; HA] - 1 Controller, 1 Compute</strong></summary>
<pre><code class="language-bash">
cat > $CONFIG << EOF
overcloud_nodes:
- name: control_0
flavor: control
virtualbmc_port: 6230
- name: compute_0
flavor: compute
virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
</code></pre>
</details>
<details>
<summary><strong>OpenStack [Containerized &amp; HA] - 3 Controllers, 1 Compute</strong></summary>
<pre><code class="language-bash">
cat > $CONFIG << EOF
overcloud_nodes:
- name: control_0
flavor: control
virtualbmc_port: 6230
- name: control_1
flavor: control
virtualbmc_port: 6231
- name: control_2
flavor: control
virtualbmc_port: 6232
- name: compute_1
flavor: compute
virtualbmc_port: 6233
node_count: 4
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
--control-scale 3
--compute-scale 1
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
</code></pre>
</details>
<details>
<summary><strong>OpenShift [Containerized] - 1 Controller, 1 Compute</strong></summary>
<pre><code class="language-bash">
cat > $CONFIG << EOF
# Original from https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/featureset033.yml
composable_scenario: scenario009-multinode.yaml
deployed_server: true
network_isolation: false
enable_pacemaker: false
overcloud_ipv6: false
containerized_undercloud: true
containerized_overcloud: true
# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: false
undercloud_enable_validations: false
# This enables the deployment of the overcloud with SSL.
ssl_overcloud: false
# Centos Virt-SIG repo for atomic package
add_repos:
# NOTE(trown) The atomic package from centos-extras does not work for
# us but its version is higher than the one from the virt-sig. Hence,
# using priorities to ensure we get the virt-sig package.
- type: package
pkg_name: yum-plugin-priorities
- type: generic
reponame: quickstart-centos-paas
filename: quickstart-centos-paas.repo
baseurl: https://cbs.centos.org/repos/paas7-openshift-origin311-candidate/x86_64/os/
- type: generic
reponame: quickstart-centos-virt-container
filename: quickstart-centos-virt-container.repo
baseurl: https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/
includepkgs:
- atomic
priority: 1
extra_args: ''
container_args: >-
# If Pike or Queens
#-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
# If Ocata, Pike, Queens or Rocky
#-e /home/stack/containers-default-parameters.yaml
# If >= Stein
-e /home/stack/containers-prepare-parameter.yaml
-e /usr/share/openstack-tripleo-heat-templates/openshift.yaml
# NOTE(mandre) use container images mirrored on the dockerhub to take advantage
# of the proxy setup by openstack infra
docker_openshift_etcd_namespace: docker.io/
docker_openshift_cluster_monitoring_namespace: docker.io/tripleomaster
docker_openshift_cluster_monitoring_image: coreos-cluster-monitoring-operator
docker_openshift_configmap_reload_namespace: docker.io/tripleomaster
docker_openshift_configmap_reload_image: coreos-configmap-reload
docker_openshift_prometheus_operator_namespace: docker.io/tripleomaster
docker_openshift_prometheus_operator_image: coreos-prometheus-operator
docker_openshift_prometheus_config_reload_namespace: docker.io/tripleomaster
docker_openshift_prometheus_config_reload_image: coreos-prometheus-config-reloader
docker_openshift_kube_rbac_proxy_namespace: docker.io/tripleomaster
docker_openshift_kube_rbac_proxy_image: coreos-kube-rbac-proxy
docker_openshift_kube_state_metrics_namespace: docker.io/tripleomaster
docker_openshift_kube_state_metrics_image: coreos-kube-state-metrics
deploy_steps_ansible_workflow: true
config_download_args: >-
-e /home/stack/config-download.yaml
--disable-validations
--verbose
composable_roles: true
overcloud_roles:
- name: Controller
CountDefault: 1
tags:
- primary
- controller
networks:
- External
- InternalApi
- Storage
- StorageMgmt
- Tenant
- name: Compute
CountDefault: 0
tags:
- compute
networks:
- External
- InternalApi
- Storage
- StorageMgmt
- Tenant
tempest_config: false
test_ping: false
run_tempest: false
EOF
</code></pre>
</details>
<p><br /></p>
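<p>Whichever recipe you pick, it is worth checking that the heredoc actually wrote the configuration file before launching a deployment that runs for hours. A minimal sketch; it only greps for the <code>extra_args</code> key present in all the recipes above:</p>
<pre><code class="language-bash"># Sketch: fail fast if the deployment config was not written correctly.
check_config() {
  [ -s "$1" ] && grep -q 'extra_args' "$1"
}

if check_config "${CONFIG:-}"; then
  echo "config looks sane, proceeding"
else
  echo "config missing or incomplete, aborting"
fi
</code></pre>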
<p>From the Hypervisor, as the <code>toor</code> user
run the deployment command to deploy
both your Undercloud and Overcloud.</p>
<p><strong>06 - Deploy TripleO.</strong></p>
<pre><code class="language-bash">bash ./tripleo-quickstart/quickstart.sh \
--clean \
--release master \
--teardown all \
--tags all \
-e @$CONFIG \
$VIRTHOST
</code></pre>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2019/02/05:</strong> Initial version.</p>
<p><strong>Updated 2019/02/05:</strong> TODO: Test the OpenShift deployment.</p>
<p><strong>Updated 2019/02/06:</strong> Added some clarifications about where the commands should run.</p>
</blockquote>
</div>
Vote for the OpenStack Berlin Summit presentations!2018-07-24T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2018/07/24/openstack-berlin-summit-vote-for-presentations<p>I pushed some presentations for this year’s OpenStack Summit in Berlin; the
presentations are related to updates, upgrades, backups, failures and restores.</p>
<p><img src="/static/OpenStack-Summit-2018.png" alt="" /></p>
<h2 id="please-vote">¡¡¡Please vote!!!</h2>
<ul>
<li><a href="https://www.openstack.org/summit/berlin-2018/vote-for-speakers/#/21961">TripleO presentation for Updates and Upgrades</a></li>
<li><a href="https://www.openstack.org/summit/berlin-2018/vote-for-speakers/#/22101">TripleO presentation for Backups and Restores</a></li>
</ul>
<p>Happy TripleOing!</p>
TripleO deep dive session #13 (Containerized Undercloud)2018-05-31T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2018/05/31/tripleo-deep-dive-session-13<p>This is the 13th release of the <a href="http://www.tripleo.org/">TripleO</a>
“Deep Dive” sessions.</p>
<p>Thanks to <a href="https://dprince.github.io/">Dan Prince</a> & <a href="http://my1.fr/blog">Emilien Macchi</a>
for this deep dive session about the next step of the TripleO’s Undercloud evolution.</p>
<p>In this session, they explain in detail the effort of re-architecting the Undercloud
to move towards containers in order to reuse the containerized Overcloud ecosystem.</p>
<p>You can access the <a href="https://docs.google.com/presentation/d/17Sbo0i0o2AhQBSjYH7eUrXKVHJcItklGZUUleFiC0ZU/">presentation</a>
or the
<a href="https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud">Etherpad</a> notes.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=lv233gPynwk">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/lv233gPynwk" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br />
<br /></p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a>
to have access to all available content.</p>
Testing Undercloud backup and restore using Ansible2018-05-18T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2018/05/18/testing-undercloud-backup-and-restore-using-ansible<p>This post will introduce how to run backups and restores
in TripleO using Ansible.</p>
<h1 id="testing-the-undercloud-backup-and-restore">Testing the Undercloud backup and restore</h1>
<p>It is possible to test how the Undercloud
backup and restore should be performed using
Ansible.</p>
<p>The following Ansible playbooks show
how Ansible can be used to test the
backup execution in a test environment.</p>
<h2 id="creating-the-ansible-playbooks-to-run-the-tasks">Creating the Ansible playbooks to run the tasks</h2>
<p>Create a yaml file called uc-backup.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
  - name: Remove any previously created UC backups
    shell: |
      source ~/stackrc
      openstack container delete undercloud-backups --recursive
    ignore_errors: True
  - name: Create UC backup
    shell: |
      source ~/stackrc
      openstack undercloud backup --add-path /etc/ --add-path /root/
</code></pre>
<p>Create a yaml file called uc-backup-download.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
  - name: Make sure the temp folder used for the restore does not exist
    become: true
    file:
      path: "/var/tmp/test_bk_down"
      state: absent
  - name: Create temp folder to unzip the backup
    become: true
    file:
      path: "/var/tmp/test_bk_down"
      state: directory
      owner: "stack"
      group: "stack"
      mode: "0775"
      recurse: "yes"
  - name: Download the UC backup to a temporary folder (After breaking the UC we won't be able to get it back)
    shell: |
      source ~/stackrc
      cd /var/tmp/test_bk_down
      openstack container save undercloud-backups
  - name: Unzip the backup
    become: true
    shell: |
      cd /var/tmp/test_bk_down
      tar -xvf UC-backup-*.tar
      gunzip *.gz
      tar -xvf filesystem-*.tar
  - name: Make sure stack user can get the backup files
    become: true
    file:
      path: "/var/tmp/test_bk_down"
      state: directory
      owner: "stack"
      group: "stack"
      mode: "0775"
      recurse: "yes"
</code></pre>
<p>Create a yaml file called uc-destroy.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
  - name: Remove mariadb
    become: true
    yum: pkg="{{ item }}" state=absent
    with_items:
    - mariadb
    - mariadb-server
  - name: Remove files
    become: true
    file:
      path: "{{ item }}"
      state: absent
    with_items:
    - /root/.my.cnf
    - /var/lib/mysql
</code></pre>
<p>Create a yaml file called uc-restore.yaml
with the following content:</p>
<pre><code>---
- hosts: localhost
  tasks:
  - name: Install mariadb
    become: true
    yum: pkg="{{ item }}" state=present
    with_items:
    - mariadb
    - mariadb-server
  - name: Restart MariaDB
    become: true
    service: name=mariadb state=restarted
  - name: Restore the backup DB
    shell: cat /var/tmp/test_bk_down/all-databases-*.sql | sudo mysql
  - name: Restart MariaDB to refresh permissions
    become: true
    service: name=mariadb state=restarted
  - name: Register root password
    become: true
    shell: cat /var/tmp/test_bk_down/root/.my.cnf | grep -m1 password | cut -d'=' -f2 | tr -d "'"
    register: oldpass
  - name: Clean root password from MariaDB to reinstall the UC
    shell: |
      mysqladmin -u root -p"{{ oldpass.stdout }}" password ''
  - name: Clean users
    become: true
    mysql_user: name="{{ item }}" host_all="yes" state="absent"
    with_items:
    - ceilometer
    - glance
    - heat
    - ironic
    - keystone
    - neutron
    - nova
    - mistral
    - zaqar
  - name: Reinstall the undercloud
    shell: |
      openstack undercloud install
</code></pre>
<h2 id="running-the-undercloud-backup-and-restore-tasks">Running the Undercloud backup and restore tasks</h2>
<p>To test the UC backup and restore procedure, run from the UC
after creating the Ansible playbooks:</p>
<pre><code> # This playbook will create the UC backup
ansible-playbook uc-backup.yaml
# This playbook will download the UC backup to be used in the restore
ansible-playbook uc-backup-download.yaml
# This playbook will destroy the UC (remove DB server, remove DB files, remove config files)
ansible-playbook uc-destroy.yaml
# This playbook will reinstall the DB server, restore the DB backup, fix permissions and reinstall the UC
ansible-playbook uc-restore.yaml
</code></pre>
<h2 id="checking-the-undercloud-state">Checking the Undercloud state</h2>
<p>After the Undercloud restore playbook finishes, the user should be able to execute
any CLI command again, for example:</p>
<pre><code> source ~/stackrc
openstack stack list
</code></pre>
<p>Source code available on <a href="https://github.com/ccamacho/tripleo-ansible/tree/master/undercloud-backup-restore-check">GitHub</a>.</p>
My 2nd birthday as a Red Hatter2018-03-01T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2018/03/01/2nd-birthday-as-a-red-hatter<p>This post is about my experience working on TripleO as
a Red Hatter for the last 2 years.</p>
<div style="float: left; width: 230px; background: white;"><img width="230px" src="/static/bday.gif" alt="" style="border:15px solid #FFF" /></div>
<p>On my 2nd birthday as a Red Hatter, I have learned about many technologies,
really a lot… But the most intriguing thing is that here you never stop
learning. Not because you don’t want to learn new things, but because
of the project’s nature, this project… TripleO…</p>
<div style="float: right; width: 230px; background: white;"><img width="230px" src="/static/tripleo_logo.png" alt="" style="border:15px solid #FFF" /></div>
<p>TripleO (OpenStack On OpenStack) is software aimed at deploying OpenStack
services using the same OpenStack ecosystem: we deploy
a minimal OpenStack instance (the Undercloud) and from there, deploy our production
environment (the Overcloud)… Yikes! What a mouthful, huh? Put simply, TripleO
is an installer which should make integrators’/operators’/developers’ lives
easier, but the reality is sometimes far from the expectation.</p>
<p>TripleO is capable of doing wonderful things; with a little patience,
love, and dedication, your hands can be the right hands to deploy complex environments with ease.</p>
<p>One of the cool things about being one of the programmers who write TripleO (from now
on, TripleOers) is that many of us also use the software regularly. We write
code not just because we are told to, but because we want to improve it for our own purposes.</p>
<p>Part of the programmers’ motivation has to do with TripleO’s open-source
nature: if you code in TripleO, you are part of a community.</p>
<div style="float: left; width: 230px; background: white;"><img width="230px" src="/static/community.gif" alt="" style="border:15px solid #FFF" /></div>
<p>Congratulations! As a TripleO user or a TripleOer, you are part of our community,
which means you’re joining a diverse group that spans all age ranges, ethnicities,
professional backgrounds, and parts of the globe. We are a passionate bunch of crazy
people, proud of this “little” monster and more than willing to help
others enjoy using it as much as we do.</p>
<p>Getting to know the interface (the templates, Mistral, Heat, Ansible, Docker,
Puppet, Jinja, …) and how all the components are tied together is probably one of
the most daunting aspects of TripleO for newcomers (and not only newcomers).
This will surely raise the blood pressure of some of you who tried using TripleO
in the past but failed miserably and gave up in frustration when it did not behave
as expected. Yeah… sometimes that “$h1t” happens…</p>
<p>Although learning TripleO isn’t that easy, the architecture updates,
the decoupling of the role services (“composable roles”), the backup and restore
strategies, and the integration of Ansible, among many others, have made great strides
toward alleviating that frustration, and the improvements continue through to today.</p>
<p>So this is the question…</p>
<p><img src="/static/fast_to.png" alt="" /></p>
<p>Is TripleO meant to be “fast to use” or “fast to learn”?</p>
<p>There are many ways of describing software products, but we need to know what
our software will be used for… TripleO is designed to work at scale. It might be
easier to deploy a few controllers and computes manually, but what about deploying
100 computes, 3 controllers and 50 Cinder nodes, all of them configured to be integrated
and work as one single “cloud”? Buum!
So there we find the TripleO benefits: if we want to make it scale, we need to make it fast to use…</p>
<p>This means that we will find several customizations,
hacks, and workarounds to make it work as we need it.</p>
<p>The upside to this approach is that TripleO evolved to be super-ultra-giga
customizable, so operators are able to produce great environments blazingly fast.</p>
<p>The downside? Haha, yes… there is a downside (“or several”). As with most things that
are customized, TripleO became somewhat difficult for new people to understand.
Also, it’s incredibly hard to test all the possible deployments, and when a user does
non-standard or unsupported customizations, the upgrades are not as intuitive as they need to be…</p>
<p>This trade‐off is what I mean when I say “fast to use versus fast to learn.”
You can be extremely productive with TripleO after you understand how it thinks (“yes, it thinks”).</p>
<p>However, your first few deployments and patches might be arduous. Of course,
alleviating that potential pain is what our work is about. IMHO the pros outweigh the
cons, and once you find a niche to improve, it will be a really nice experience.</p>
<p>We also have the TripleO YouTube channel, a place to push video tutorials and deep dive sessions
driven by the community, for the community.</p>
<p>For the Spanish community we have a 100% translated TripleO UI; go to https://translate.openstack.org
and help us reach as many languages as possible!!!</p>
<div style="float: left; width: 230px; background: white;"><img width="230px" src="/static/logo.png" alt="" style="border:15px solid #FFF" /></div>
<p>www.pubstack.com was born on July 5th, 2016 (first GitHub commit); yeah, it is my way of expressing
my gratitude to the community by writing some Ctrl+C / Ctrl+V recipes to avoid the frustration of working
with TripleO and not having something deployed and easy to use ASAP.</p>
<p>Pubstack does not have much traffic, but it reached superuser.openstack.org, and the TripleO cheatsheets
were at devconf.cz and FOSDEM, so in general it is really nice when people reference your writings
anywhere. Maybe in the future it can evolve to be more related to ANsible and openSTACK ;) as TripleO
is adding more and more support for Ansible.</p>
<div style="float: right; width: 230px; background: white;"><img width="230px" src="/static/red_hat.png" alt="" style="border:15px solid #FFF" /></div>
<p>What about Red Hat? Yeahp, I have spent a long time speaking about the project but haven’t
spoken about the company making it all real.
Red Hat is the world’s leading provider of open source solutions,
using a community-powered approach to provide reliable and high-performing
cloud, virtualization, storage, Linux, and middleware technologies.</p>
<p>There is a strong feeling of belonging at Red Hat: you are part of a team and a culture, and you are able to
find a perfect balance between your work and life. Also, having people from all over the globe makes
it a perfect place for sharing ideas and collaborating. Not all of it is good, though; for example, working
mostly remotely in upstream communities can be really hard to manage if you are not 100%
sure about the tasks that need to be done.</p>
<p>Keep rocking and become part of the TripleO community!</p>
TripleO deep dive session #12 (config-download)2018-02-23T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2018/02/23/tripleo-deep-dive-session-12<p>This is the 12th release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>Thanks to <a href="http://blog-slagle.rhcloud.com/">James Slagle</a> for this new session, in which he
will describe and speak about a feature called <code>config-download</code>.</p>
<p>In this session we will have an update on the TripleO Ansible integration called
<code>config-download</code>. It’s about applying all the software configuration
with Ansible instead of doing it with the Heat agents.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=-6ojHT8P4RE">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/-6ojHT8P4RE" frameborder="0" allowfullscreen=""></iframe>
</div>
<p><br />
<br /></p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a>
to have access to all available content.</p>
New TripleO quickstart cheatsheet2018-01-05T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2018/01/05/tripleo-quickstart-cheatsheet<p>I have created some cheatsheets for people starting to work on TripleO,
mostly to help them bootstrap a development environment as quickly as possible.</p>
<p><a href="https://github.com/ccamacho/tripleo-graphics/tree/master/cheatsheets/old_style">The previous version</a>
of this cheatsheet series was used in
several community conferences (FOSDEM, DevConf.cz);
now they are deprecated, as the way TripleO
should be deployed has changed considerably over the last months.</p>
<p>Here you have the latest version:</p>
<p><img src="/static/01-tripleo-cheatsheet-deploying-tripleo_p1.jpg" alt="" /></p>
<p><img src="/static/01-tripleo-cheatsheet-deploying-tripleo_p2.jpg" alt="" /></p>
<p>The source code of these bookmarks is available as usual on
<a href="https://github.com/ccamacho/tripleo-graphics/tree/master/cheatsheets/latest_style">GitHub</a></p>
<p>And this is the code if you want to execute it directly:</p>
<pre><code># 01 - Create the toor user.
sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
  | sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
su - toor

# 02 - Prepare the hypervisor node.
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
cat .ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
echo '127.0.0.1 127.0.0.2' | sudo tee -a /etc/hosts
export VIRTHOST=127.0.0.2
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
ssh root@$VIRTHOST uname -a

# 03 - Clone repos and install deps.
git clone \
  https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
bash ./tripleo-quickstart/quickstart.sh \
  --install-deps
sudo setenforce 0

# 04 - Configure the TripleO deployment with Docker and HA.
export CONFIG=~/deploy-config.yaml
cat > $CONFIG << EOF
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: compute_0
    flavor: compute
    virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF

# 05 - Deploy TripleO.
export VIRTHOST=127.0.0.2
bash ./tripleo-quickstart/quickstart.sh \
  --clean \
  --release master \
  --teardown all \
  --tags all \
  -e @$CONFIG \
  $VIRTHOST
</code></pre>
<p>Happy TripleOing!!!</p>
<h2 id="update-log">Update log:</h2>
<div style="font-size:10px">
<blockquote>
<p><strong>2018/01/05:</strong> Initial version.</p>
<p><strong>2019/01/16:</strong> Appeared in <a href="https://superuser.openstack.org/articles/new-tripleo-quick-start-cheatsheet/">OpenStack Superuser blog.</a></p>
</blockquote>
</div>
Automating Undercloud backups and a Mistral introduction for creating workbooks, workflows and actions2017-12-18T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/12/18/automating-the-undercloud-backup-and-mistral-workflows-intro<p>The goal of this developer documentation is to address the automated process
of backing up a TripleO Undercloud and to give developers a complete description
of how to integrate Mistral workbooks, workflows and actions into the Python
TripleO client.</p>
<p>This tutorial will be divided into several sections:</p>
<ol>
<li>Introduction and prerequisites</li>
<li>Undercloud backups</li>
<li>Creating a new OpenStack CLI command in python-tripleoclient (openstack
undercloud backup).</li>
<li>Creating Mistral workflows for the new python-tripleoclient CLI command.</li>
<li>Give support for new Mistral environment variables when installing the
undercloud.</li>
<li>Show how to test locally the changes in python-tripleoclient and
tripleo-common.</li>
<li>Give elevated privileges to specific Mistral actions that need to run with
elevated privileges.</li>
<li>Debugging actions</li>
<li>Unit tests</li>
<li>Why are all the previous sections related to upgrades?</li>
</ol>
<h2 id="1-introduction-and-prerequisites">1. Introduction and prerequisites</h2>
<p>Let’s assume you have a healthy, properly working TripleO development
environment. All the commands and customizations we are going to run will be
executed on the Undercloud, as usual logged in as the stack user and having sourced the
stackrc file.</p>
<p>Then let’s proceed by cloning the repositories we are going to work with in a
temporary folder:</p>
<pre><code>mkdir dev-docs
cd dev-docs
git clone https://github.com/openstack/python-tripleoclient
git clone https://github.com/openstack/tripleo-common
git clone https://github.com/openstack/instack-undercloud
</code></pre>
<ul>
<li><strong>python-tripleoclient:</strong> Will define the OpenStack CLI commands.</li>
<li><strong>tripleo-common:</strong> Will have the Mistral logic.</li>
<li><strong>instack-undercloud:</strong> Allows updating and creating Mistral
environments to store configuration details needed when executing Mistral workflows.</li>
</ul>
<h2 id="2-undercloud-backups">2. Undercloud backups</h2>
<p>Most of the Undercloud backup procedure is available on the
<a href="https://docs.openstack.org/tripleo-docs/latest/install/post_deployment/backup_restore_undercloud.html">TripleO official documentation site</a>.</p>
<p>We will focus on the automation of backing up the resources required to restore
the Undercloud in case of a failed upgrade.</p>
<ul>
<li>All MariaDB databases on the undercloud node</li>
<li>MariaDB configuration file on undercloud (so we can restore databases
accurately)</li>
<li>All glance image data in /var/lib/glance/images</li>
<li>All swift data in /srv/node</li>
<li>All data in stack users home directory</li>
</ul>
<p>To do this we need to be able to:</p>
<ul>
<li>Connect to the database server as root.</li>
<li>Dump all databases to file.</li>
<li>Create a filesystem backup of several folders (and be able to access folders
with restricted access).</li>
<li>Upload this backup to a swift container to be able to get it from the TripleO
web UI.</li>
</ul>
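Conceptually, those capabilities boil down to a database dump plus a filesystem tarball that gets uploaded to Swift. The sketch below only builds the command lines to make the shape of the backup explicit; the command names, flags and paths are illustrative assumptions, not the exact invocations tripleo-common performs (although the sudoers file shown later hints at <code>tar --ignore-failed-read</code>):

```python
# Sketch only: assumed commands and paths, not the real tripleo-common logic.
def backup_commands(backup_dir="/var/tmp/uc-backup"):
    """Build the command lines for a database dump and a filesystem tarball."""
    db_dump = [
        "mysqldump", "--all-databases", "--single-transaction",
        "--result-file", backup_dir + "/all-databases.sql",
    ]
    # Relative paths with -C / so the tarball can be restored from the root.
    fs_backup = [
        "tar", "--ignore-failed-read", "-C", "/", "-cf",
        backup_dir + "/filesystem.tar",
        "var/lib/glance/images", "srv/node", "home/stack",
    ]
    return [db_dump, fs_backup]

for cmd in backup_commands():
    print(" ".join(cmd))
```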
<h2 id="3-creating-a-new-openstack-cli-command-in-python-tripleoclient-openstack-undercloud-backup">3. Creating a new OpenStack CLI command in python-tripleoclient (openstack undercloud backup).</h2>
<p>The first action needed is to be able to create a new CLI command for the
OpenStack client. In this case, we are going to implement the <strong>openstack
undercloud backup</strong> command.</p>
<pre><code>cd dev-docs
cd python-tripleoclient
</code></pre>
<p>Let’s list the files inside this folder:</p>
<pre><code>[stack@undercloud python-tripleoclient]$ ls
AUTHORS doc setup.py
babel.cfg LICENSE test-requirements.txt
bindep.txt zuul.d tools
build README.rst tox.ini
ChangeLog releasenotes tripleoclient
config-generator requirements.txt
CONTRIBUTING.rst setup.cfg
</code></pre>
<p>Once inside the <strong>python-tripleoclient</strong> folder we need to check the following
file:</p>
<p><strong>setup.cfg:</strong> This file defines all the CLI commands for the Python TripleO
client. Specifically, we will need to add our new command
definition at the end of this file:</p>
<pre><code>undercloud_backup = tripleoclient.v1.undercloud_backup:BackupUndercloud
</code></pre>
<p>This means that we have a new command defined as <strong>undercloud backup</strong> that
will instantiate the <strong>BackupUndercloud</strong> class defined in the file
<strong>tripleoclient/v1/undercloud_backup.py</strong></p>
<p>For further details related to this class definition please go to the
<a href="https://review.openstack.org/#/c/466213">gerrit review</a>.</p>
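The entry-point line in setup.cfg is just a <code>name = module:class</code> mapping that the OpenStack client resolves at runtime. The toy parser below only illustrates that shape; the real resolution is handled by pbr/stevedore, not by code like this:

```python
# The setup.cfg line, reproduced as a string for illustration.
ENTRY_POINT = "undercloud_backup = tripleoclient.v1.undercloud_backup:BackupUndercloud"

def parse_entry_point(line):
    """Split an entry-point line into (command name, module path, class name)."""
    name, target = (part.strip() for part in line.split("=", 1))
    module, cls = target.split(":")
    return name, module, cls

name, module, cls = parse_entry_point(ENTRY_POINT)
print(name, module, cls)
```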
<p>Now, having our class defined we can call other methods to invoke Mistral in
this way:</p>
<pre><code>clients = self.app.client_manager
files_to_backup = ','.join(list(set(parsed_args.add_files_to_backup)))

workflow_input = {
    "sources_path": files_to_backup
}
output = undercloud_backup.prepare(clients, workflow_input)
</code></pre>
<p>Thus, we will call the <strong>undercloud_backup.prepare</strong> method defined
in the file <strong>tripleoclient/workflows/undercloud_backup.py</strong>, which will
call the Mistral workflow:</p>
<pre><code>def prepare(clients, workflow_input):
    workflow_client = clients.workflow_engine
    tripleoclients = clients.tripleoclient
    with tripleoclients.messaging_websocket() as ws:
        execution = base.start_workflow(
            workflow_client,
            'tripleo.undercloud_backup.v1.prepare_environment',
            workflow_input=workflow_input
        )
        for payload in base.wait_for_messages(workflow_client, ws, execution):
            if 'message' in payload:
                return payload['message']
</code></pre>
<p>In this case, we will create a loop within the tripleoclient and wait until we receive
a message from the Mistral workflow <strong>tripleo.undercloud_backup.v1.prepare_environment</strong>
that indicates if the invoked workflow ended correctly.</p>
<h2 id="4-creating-mistral-workflows-for-the-new-python-tripleoclient-cli-command">4. Creating Mistral workflows for the new python-tripleoclient CLI command.</h2>
<p>The next step is to define the
<strong>tripleo.undercloud_backup.v1.prepare_environment</strong> Mistral workflow, all the
Mistral workbooks, workflows and actions will be defined in the
<strong>tripleo-common</strong> repository.</p>
<p>Let’s go inside <strong>tripleo-common</strong></p>
<pre><code>cd dev-docs
cd tripleo-common
</code></pre>
<p>And see its content:</p>
<pre><code>[stack@undercloud tripleo-common]$ ls
AUTHORS doc README.rst test-requirements.txt
babel.cfg HACKING.rst releasenotes tools
build healthcheck requirements.txt tox.ini
ChangeLog heat_docker_agent scripts tripleo_common
container-images image-yaml setup.cfg undercloud_heat_plugins
contrib LICENSE setup.py workbooks
CONTRIBUTING.rst playbooks sudoers zuul.d
</code></pre>
<p>Again we need to check the following file:</p>
<p>setup.cfg: This file defines all the Mistral actions we can call.
Specifically, we will need to add our new actions at the end of this file:</p>
<pre><code>tripleo.undercloud.get_free_space = tripleo_common.actions.undercloud:GetFreeSpace
tripleo.undercloud.create_backup_dir = tripleo_common.actions.undercloud:CreateBackupDir
tripleo.undercloud.create_database_backup = tripleo_common.actions.undercloud:CreateDatabaseBackup
tripleo.undercloud.create_file_system_backup = tripleo_common.actions.undercloud:CreateFileSystemBackup
tripleo.undercloud.upload_backup_to_swift = tripleo_common.actions.undercloud:UploadUndercloudBackupToSwift
</code></pre>
<h3 id="41-action-definition">4.1. Action definition</h3>
<p>Let’s take the first action to describe its definition,
<strong>tripleo.undercloud.get_free_space = tripleo_common.actions.undercloud:GetFreeSpace</strong></p>
<p>We have defined the action named <strong>tripleo.undercloud.get_free_space</strong>, which
will instantiate the class <strong>GetFreeSpace</strong> defined in the
<strong>tripleo_common/actions/undercloud.py</strong> file.</p>
<p>If we open <strong>tripleo_common/actions/undercloud.py</strong> we can see the class definition as:</p>
<pre><code>class GetFreeSpace(base.Action):
    """Get the Undercloud free space for the backup.

    The default path to check will be /tmp and the default minimum size will
    be 10240 MB (10GB).
    """

    def __init__(self, min_space=10240):
        self.min_space = min_space

    def run(self, context):
        temp_path = tempfile.gettempdir()
        min_space = self.min_space
        while not os.path.isdir(temp_path):
            head, tail = os.path.split(temp_path)
            temp_path = head
        available_space = (
            (os.statvfs(temp_path).f_frsize * os.statvfs(temp_path).f_bavail) /
            (1024 * 1024))
        if (available_space < min_space):
            msg = "There is not enough space, avail. - %s MB" \
                  % str(available_space)
            return actions.Result(error={'msg': msg})
        else:
            msg = "There is enough space, avail. - %s MB" \
                  % str(available_space)
            return actions.Result(data={'msg': msg})
</code></pre>
<p>In this specific case the class will check if there is enough space to perform
the backup. Later we will be able to invoke the action as</p>
<pre><code>mistral run-action tripleo.undercloud.get_free_space
</code></pre>
<p>or use it in workbooks.</p>
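The space computation in GetFreeSpace can be reproduced with a few lines of stdlib Python. The sketch below mirrors the statvfs formula used above; the helper name is mine and is not part of tripleo-common:

```python
import os
import tempfile

def available_mb(path=None):
    """Available space in MB on the filesystem holding `path`,
    using the same statvfs formula as the GetFreeSpace action."""
    if path is None:
        path = tempfile.gettempdir()
    # Walk up the tree until an existing directory is found, as the action does.
    while not os.path.isdir(path):
        path, _tail = os.path.split(path)
    st = os.statvfs(path)
    return (st.f_frsize * st.f_bavail) / (1024 * 1024)

# The action's default threshold is 10240 MB (10 GB).
print("enough space" if available_mb() >= 10240 else "not enough space")
```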
<h3 id="42-workflow-definition">4.2. Workflow definition.</h3>
<p>Once we have defined all our new actions, we need to orchestrate them in order
to have a fully working Mistral workflow.</p>
<p>All <strong>tripleo-common</strong> workbooks are defined in the workbooks folder.</p>
<p>The next example shows a workbook definition
with its actions inside; in this case the example contains
the first workflow with all the tasks involved.</p>
<pre><code>---
version: '2.0'
name: tripleo.undercloud_backup.v1
description: TripleO Undercloud backup workflows

workflows:

  prepare_environment:
    description: >
      This workflow will prepare the Undercloud to run the database backup
    tags:
      - tripleo-common-managed
    input:
      - queue_name: tripleo
    tasks:
      # Action to know if there is enough available space
      # to run the Undercloud backup
      get_free_space:
        action: tripleo.undercloud.get_free_space
        publish:
          status: <% task().result %>
          free_space: <% task().result %>
        on-success: send_message
        on-error: send_message
        publish-on-error:
          status: FAILED
          message: <% task().result %>
      # Sending a message that the folder to create the backup was
      # created successfully
      send_message:
        action: zaqar.queue_post
        retry: count=5 delay=1
        input:
          queue_name: <% $.queue_name %>
          messages:
            body:
              type: tripleo.undercloud_backup.v1.launch
              payload:
                status: <% $.status %>
                execution: <% execution() %>
                message: <% $.get('message', '') %>
        on-success:
          - fail: <% $.get('status') = "FAILED" %>
</code></pre>
<p>The workflow is self-explanatory; the only part that may not be so clear is the last
one, as the workflow uses an action to send a message stating that the workflow
ended correctly, passing as the message the output of the previous task, in
this case the result of <strong>get_free_space</strong>.</p>
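The <code>&lt;% $.get(...) %&gt;</code> expressions are YAQL; on a dictionary, <code>get</code> behaves much like Python’s <code>dict.get</code>, returning a default instead of failing on a missing key. Below is a rough Python analogue of the two expressions used in send_message, where the <code>context</code> dict is only a stand-in for the workflow’s published variables:

```python
# Stand-in for the variables published by the failing branch of get_free_space.
context = {"status": "FAILED"}

# <% $.get('message', '') %> : a missing key falls back to the default ''.
message = context.get("message", "")

# <% $.get('status') = "FAILED" %> : the guard that makes the workflow fail.
should_fail = context.get("status") == "FAILED"

print(should_fail, repr(message))
```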
<h2 id="5-give-support-for-new-mistral-environment-variables-when-installing-the-undercloud">5. Give support for new Mistral environment variables when installing the undercloud.</h2>
<p>Sometimes it is necessary to use additional values inside a Mistral task. For example,
if we need to create a dump of a database we might need credentials other than the
Mistral user’s for authentication purposes.</p>
<p>Initially, when the Undercloud is installed,
a Mistral environment called
<strong>tripleo.undercloud-config</strong> is created.
This environment will have all the required configuration details that we
can get from Mistral. This is defined in the <strong>instack-undercloud</strong> repository.</p>
<p>Let’s get into the repository and check the content of the file
<strong>instack_undercloud/undercloud.py</strong>.</p>
<p>This file defines a set of methods to interact with the Undercloud;
specifically, the method called <strong>_create_mistral_config_environment</strong> allows configuring
additional environment variables when installing the Undercloud.</p>
<p>For additional testing, you can use the
<a href="https://gist.github.com/ccamacho/354f798102710d165c1f6167eb533caf#file-mistral_client_snippet-py">Python snippet</a>
to call Mistral client from the Undercloud node
available in gist.github.com.</p>
<h2 id="6-show-how-to-test-locally-the-changes-in-python-tripleoclient-and-tripleo-common">6. Show how to test locally the changes in python-tripleoclient and tripleo-common.</h2>
<p>If a local test of a change in python-tripleoclient or
tripleo-common is needed, the following procedures allow testing it locally.</p>
<p>For a change in <strong>python-tripleoclient</strong>, assuming you already have downloaded
the change you want to test, execute:</p>
<pre><code>cd python-tripleoclient
sudo rm -Rf /usr/lib/python2.7/site-packages/tripleoclient*
sudo rm -Rf /usr/lib/python2.7/site-packages/python_tripleoclient*
sudo python setup.py clean --all install
</code></pre>
<p>For a change in <strong>tripleo-common</strong>, assuming you already have downloaded the
change you want to test, execute:</p>
<pre><code>cd tripleo-common
sudo rm -Rf /usr/lib/python2.7/site-packages/tripleo_common*
sudo python setup.py clean --all install
sudo cp /usr/share/tripleo-common/sudoers /etc/sudoers.d/tripleo-common
# this loads the actions via entrypoints
sudo mistral-db-manage --config-file /etc/mistral/mistral.conf populate
# make sure the new actions got loaded
mistral action-list | grep tripleo
for workbook in workbooks/*.yaml; do
mistral workbook-create $workbook
done
for workbook in workbooks/*.yaml; do
mistral workbook-update $workbook
done
sudo systemctl restart openstack-mistral-executor
sudo systemctl restart openstack-mistral-engine
</code></pre>
<p>If you want to execute a Mistral action or a Mistral workflow, you can run the following:</p>
<p>Examples about how to test Mistral actions independently:</p>
<pre><code>mistral run-action tripleo.undercloud.get_free_space #Without parameters
mistral run-action tripleo.undercloud.get_free_space '{"path": "/etc/"}' # With parameters
mistral run-action tripleo.undercloud.create_file_system_backup '{"sources_path": "/tmp/asdf.txt,/tmp/asdf", "destination_path": "/tmp/"}'
</code></pre>
<p>Examples about how to test a Mistral workflow independently:</p>
<pre><code>mistral execution-create tripleo.undercloud_backup.v1.prepare_environment # No parameters
mistral execution-create tripleo.undercloud_backup.v1.filesystem_backup '{"sources_path": "/tmp/asdf.txt,/tmp/asdf", "destination_path": "/tmp/"}' # With parameters
</code></pre>
<h2 id="7-give-elevated-privileges-to-specific-mistral-actions-that-need-to-run-with-elevated-privileges">7. Give elevated privileges to specific Mistral actions that need to run with elevated privileges.</h2>
<p>Sometimes it is not possible to execute some restricted actions as the
Mistral user; for example, when creating the Undercloud backup we won’t be able
to access the <strong>/home/stack/</strong> folder to create a tarball of it. For these
cases it is possible to execute elevated actions as the Mistral user.</p>
<p>This is the content of the <strong>sudoers</strong> file in the root of the <strong>tripleo-common</strong>
repository at the time of the creation of this guide.</p>
<pre><code>Defaults!/usr/bin/run-validation !requiretty
Defaults:validations !requiretty
Defaults:mistral !requiretty
mistral ALL = (validations) NOPASSWD:SETENV: /usr/bin/run-validation
mistral ALL = NOPASSWD: /usr/bin/chown -h validations\: /tmp/validations_identity_[A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_], \
/usr/bin/chown validations\: /tmp/validations_identity_[A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_], \
!/usr/bin/chown /tmp/validations_identity_* *, !/usr/bin/chown /tmp/validations_identity_*..*
mistral ALL = NOPASSWD: /usr/bin/rm -f /tmp/validations_identity_[A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_][A-Za-z0-9_], \
!/usr/bin/rm /tmp/validations_identity_* *, !/usr/bin/rm /tmp/validations_identity_*..*
mistral ALL = NOPASSWD: /bin/nova-manage cell_v2 discover_hosts *
mistral ALL = NOPASSWD: /usr/bin/tar --ignore-failed-read -C / -cf /tmp/undercloud-backup-*.tar *
mistral ALL = NOPASSWD: /usr/bin/chown mistral. /tmp/undercloud-backup-*/filesystem-*.tar
validations ALL = NOPASSWD: ALL
</code></pre>
<p>Here you can grant permissions for specific tasks when executing Mistral
workflows from <strong>tripleo-common</strong>.</p>
<h2 id="7-debugging-actions">8. Debugging actions.</h2>
<p>Let’s assume the action is written and added to setup.cfg, but it does not appear.
First, check whether the action was registered by <code>sudo mistral-db-manage populate</code>. Run</p>
<pre><code>mistral action-list -f value -c Name | grep -e '^tripleo.undercloud'
</code></pre>
<p>If you don’t see your actions, check the output of <code>sudo mistral-db-manage populate</code>:</p>
<pre><code>sudo mistral-db-manage populate 2>&1| grep ERROR | less
</code></pre>
<p>Output like the following indicates an issue in the code; fix the code and re-run.</p>
<pre><code>2018-01-01:00:59.730 7218 ERROR stevedore.extension [-] Could not load 'tripleo.undercloud.get_free_space': unexpected indent (undercloud.py, line 40): File "/usr/lib/python2.7/site-packages/tripleo_common/actions/undercloud.py", line 40
</code></pre>
<p>Finally, execute the single action and then the workflow from the workbook to make
sure everything works as designed.</p>
<h2 id="8-unit-tests">9. Unit tests</h2>
<p>Writing unit tests is an essential part of software development, and unit tests are
much faster than running the workflow itself. So, let’s write unit tests for the
action we just wrote. Add a <strong>tripleo_common/tests/actions/test_undercloud.py</strong> file
with the following content in the <strong>tripleo-common</strong> repository.</p>
<pre><code>import mock

from tripleo_common.actions import undercloud
from tripleo_common.tests import base


class GetFreeSpaceTest(base.TestCase):

    def setUp(self):
        super(GetFreeSpaceTest, self).setUp()
        self.temp_dir = "/tmp"

    @mock.patch('tempfile.gettempdir')
    @mock.patch("os.path.isdir")
    @mock.patch("os.statvfs")
    def test_run_false(self, mock_statvfs, mock_isdir, mock_gettempdir):
        mock_gettempdir.return_value = self.temp_dir
        mock_isdir.return_value = True
        mock_statvfs.return_value = mock.MagicMock(
            spec_set=['f_frsize', 'f_bavail'],
            f_frsize=4096, f_bavail=1024)
        action = undercloud.GetFreeSpace()
        action_result = action.run(context={})
        mock_gettempdir.assert_called()
        mock_isdir.assert_called()
        mock_statvfs.assert_called()
        self.assertEqual("There is no enough space, avail. - 4 MB",
                         action_result.error['msg'])

    @mock.patch('tempfile.gettempdir')
    @mock.patch("os.path.isdir")
    @mock.patch("os.statvfs")
    def test_run_true(self, mock_statvfs, mock_isdir, mock_gettempdir):
        mock_gettempdir.return_value = self.temp_dir
        mock_isdir.return_value = True
        mock_statvfs.return_value = mock.MagicMock(
            spec_set=['f_frsize', 'f_bavail'],
            f_frsize=4096, f_bavail=10240000)
        action = undercloud.GetFreeSpace()
        action_result = action.run(context={})
        mock_gettempdir.assert_called()
        mock_isdir.assert_called()
        mock_statvfs.assert_called()
        self.assertEqual("There is enough space, avail. - 40000 MB",
                         action_result.data['msg'])
</code></pre>
<p>Run</p>
<pre><code>tox -epy27
</code></pre>
<p>to check for unit test errors.</p>
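<p>The size check these tests exercise reduces to simple arithmetic; here is a sketch using the same mocked <code>f_frsize</code>/<code>f_bavail</code> values:</p>

```bash
# Available space in MB, as the mocked statvfs values imply:
# bytes = f_frsize * f_bavail, then converted to MB.
f_frsize=4096
f_bavail=10240000
avail_mb=$(( f_frsize * f_bavail / 1024 / 1024 ))
echo "avail. - ${avail_mb} MB"   # avail. - 40000 MB
```

<p>This matches the 40000 MB asserted in the second test; with <code>f_bavail=1024</code> the same arithmetic yields the 4 MB from the first test.</p>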
<h2 id="8-why-all-previous-sections-are-related-to-upgrades">10. Why are all the previous sections related to upgrades?</h2>
<ul>
<li>Undercloud backups are an important step before running an upgrade.</li>
<li>Writing developer docs will help people create and develop new features.</li>
</ul>
<h2 id="9-references">11. References</h2>
<ul>
<li><a href="http://www.dougalmatthews.com/2016/Sep/21/debugging-mistral-in-tripleo/">http://www.dougalmatthews.com/2016/Sep/21/debugging-mistral-in-tripleo/</a></li>
<li><a href="http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html">http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html</a></li>
<li><a href="http://hardysteven.blogspot.com.es/2017/03/developing-mistral-workflows-for-tripleo.html">http://hardysteven.blogspot.com.es/2017/03/developing-mistral-workflows-for-tripleo.html</a></li>
</ul>
Restarting your TripleO hypervisor will break cinder volume service thus the overcloud pingtest2017-10-30T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/10/30/restarting-your-tripleo-hypervisor-will-break-cinder<p>I don’t usually restart my hypervisor, but today I had to install LVM2 and
virsh stopped working, so a restart was required. Once the VMs were
up and running, the overcloud pingtest failed because cinder was not able to start.</p>
<p>From your Overcloud controller run:</p>
<pre><code>sudo losetup -f /var/lib/cinder/cinder-volumes
sudo vgdisplay
sudo service openstack-cinder-volume restart
</code></pre>
<p>This will make your Overcloud pingtest work again.</p>
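<p>The root cause is that loop devices do not persist across reboots, so the volume group behind cinder loses its backing device. You can check whether the backing file is attached before re-running the commands; this is a sketch against canned <code>losetup -a</code> output (the sample line is made up, on the real controller run <code>sudo losetup -a</code>):</p>

```bash
# Canned "losetup -a" output; no cinder-volumes loop device present.
attached="/dev/loop0: []: (/srv/node/swift-disk)"
if ! echo "$attached" | grep -q cinder-volumes; then
  echo "cinder-volumes backing file not attached; losetup -f is needed"
fi
```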
Create a TripleO snapshot before breaking it...2017-07-14T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/07/14/snapshots-for-your-tripleo-vms<p>The idea of this post is to show how developers can save time by
creating snapshots of their development environments instead of
redeploying them each time they break.</p>
<p>So, don’t waste time re-deploying your environment when testing submissions.</p>
<p>I’ll show here how to be a little more agile when
deploying your Undercloud/Overcloud for testing purposes.</p>
<p>Deploying a fully working development environment takes
around 3 hours with human supervision…
And breaking it just after it’s deployed is not cool at all…</p>
<h1 id="step-1">Step 1</h1>
<p>Deploy your environment as usual.</p>
<h1 id="step-2">Step 2</h1>
<p>Create your Undercloud/Overcloud snapshots.
<strong>Do this as the stack user, otherwise
virsh won’t see the VMs</strong></p>
<pre><code># The VMs deployed are:
# $vms will contain something like the next line...
# vms=( "undercloud" "control_0" "compute_0" )
vms=( $(virsh list --all | grep running | awk '{print $2}') )
# List all VMs
virsh list --all
# List current snapshots
for i in "${vms[@]}"; \
do \
virsh snapshot-list --domain "$i"; \
done
# Dump each VM's XML and check for qemu
for i in "${vms[@]}"; \
do \
virsh dumpxml "$i" | grep -i qemu; \
done
# Create an initial snapshot for each VM
for i in "${vms[@]}"; \
do \
echo "virsh snapshot-create-as --domain $i --name $i-fresh-install --description $i-fresh-install --atomic"; \
virsh snapshot-create-as --domain "$i" --name "$i"-fresh-install --description "$i"-fresh-install --atomic; \
done
# List current snapshots (After they should be already created)
for i in "${vms[@]}"; \
do \
virsh snapshot-list --domain "$i"; \
done
#########################################################################################################
# Current libvirt version does not support live snapshots.
# error: Operation not supported: live disk snapshot not supported with this QEMU binary
# --disk-only and --live not yet available.
# Create the folder for the images
# cd
# mkdir ~/backup_images
# for i in "${vms[@]}"; \
# do \
# echo "<domainsnapshot>" > $i.xml; \
# echo " <memory snapshot='external' file='/home/stack/backup_images/$i.mem.snap2'/>" >> $i.xml; \
# echo " <disks>" >> $i.xml; \
# echo " <disk name='vda'>" >> $i.xml; \
# echo " <source file='/home/stack/backup_images/$i.disk.snap2'/>" >> $i.xml; \
# echo " </disk>" >> $i.xml; \
# echo " </disks>" >> $i.xml; \
# echo "</domainsnapshot>" >> $i.xml; \
# done
# for i in "${vms[@]}"; \
# do \
# echo "virsh snapshot-create $i --xmlfile ~/$i.xml --atomic"; \
# virsh snapshot-create $i --xmlfile ~/$i.xml --atomic; \
# done
</code></pre>
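<p>The <code>vms</code> array population above can be sanity-checked against canned <code>virsh list</code> output before running it for real (the sample table below is made up):</p>

```bash
# Illustrative "virsh list --all" output; real output comes from libvirt.
sample=" Id    Name          State
 1     undercloud    running
 2     control_0     running
 -     compute_0     shut off"
vms=( $(echo "$sample" | grep running | awk '{print $2}') )
echo "${vms[@]}"   # undercloud control_0
```

<p>Only the running domains end up in the array, which is exactly what you want when snapshotting a live environment.</p>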
<h1 id="step-3">Step 3</h1>
<p>Break your environment xD</p>
<h1 id="step-4">Step 4</h1>
<p>Restore your snapshots</p>
<pre><code># Commented for safety reasons...
# i=compute_0
i=blehblehbleh
virsh list --all
virsh shutdown $i
sleep 120
virsh list --all
virsh snapshot-revert --domain $i --snapshotname $i-fresh-install --running
virsh list --all
</code></pre>
<h1 id="or-restore-them-all-at-once">Or restore them all at once</h1>
<pre><code>vms=( $(virsh list --all | grep running | awk '{print $2}') )

for i in "${vms[@]}"; \
do \
  virsh shutdown $i; \
  virsh snapshot-revert --domain $i --snapshotname $i-fresh-install --running; \
  virsh list --all; \
done
</code></pre>
TripleO deep dive session #11 (i18n)2017-07-07T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/07/07/tripleo-deep-dive-session-11<p>This is the 11th release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session we will have
an update for the TripleO
internationalization project
for the TripleO UI,
gladly presented by Julie Pichon.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=dmAw7b2yUEo">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/dmAw7b2yUEo" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
OpenStack versions - Upstream/Downstream2017-06-27T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/06/27/openstack-versions-upstream-downstream<p>I’m adding this note as I’m prone
to forgetting how upstream and downstream
versions match up.</p>
<ul>
<li>RHOS Version 0 = Diablo</li>
<li>RHOS Version 1 = Essex</li>
<li>RHOS Version 2 = Folsom</li>
<li>RHOS Version 3 = Grizzly</li>
<li>RHOS Version 4 = Havana</li>
<li>RHOS Version 5 = Icehouse</li>
<li>RHOS Version 6 = Juno</li>
<li>RHOS Version 7 = Kilo</li>
<li>RHOS Version 8 = Liberty</li>
<li>RHOS Version 9 = Mitaka</li>
<li>RHOS Version 10 = Newton</li>
<li>RHOS Version 11 = Ocata</li>
<li>RHOS Version 12 = Pike</li>
<li>RHOS Version 13 = Queens</li>
<li>RHOS Version 14 = Rocky</li>
<li>RHOS Version 15 = Stein</li>
</ul>
TripleO deep dive session index2017-06-15T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index<p>This is a brief index with all TripleO deep dive sessions,
you can see all videos on the
<a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<blockquote>
<p>Sessions index:</p>
<p> * <a href="http://www.pubstack.com/blog/2016/07/11/tripleo-deep-dive-session-1.html">TripleO deep dive #1 (Quickstart deployment)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/07/18/tripleo-deep-dive-session-2.html">TripleO deep dive #2 (TripleO Heat Templates)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/07/22/tripleo-deep-dive-session-3.html">TripleO deep dive #3 (Overcloud deployment debugging)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/08/01/tripleo-deep-dive-session-4.html">TripleO deep dive #4 (Puppet modules)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/08/05/tripleo-deep-dive-session-5.html">TripleO deep dive #5 (Undercloud - Under the hood)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2016/08/15/tripleo-deep-dive-session-6.html">TripleO deep dive #6 (Overcloud - Physical network)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/01/16/tripleo-deep-dive-session-7.html">TripleO deep dive #7 (Undercloud - TripleO UI)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/05/04/tripleo-deep-dive-session-8.html">TripleO deep dive #8 (TripleO - Deployed server)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/05/05/tripleo-deep-dive-session-9.html">TripleO deep dive #9 (TripleO - Quickstart)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-10.html">TripleO deep dive #10 (TripleO - Containers)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2017/07/07/tripleo-deep-dive-session-11.html">TripleO deep dive #11 (TripleO - i18n)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2018/02/23/tripleo-deep-dive-session-12.html">TripleO deep dive #12 (TripleO - config-download)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2018/05/31/tripleo-deep-dive-session-13.html">TripleO deep dive #13 (TripleO - Containerized Undercloud)</a></p>
<p> * <a href="http://www.pubstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html">TripleO deep dive #14 (TripleO - Containerized deployments without Paunch)</a></p>
</blockquote>
TripleO deep dive session #10 (Containers)2017-06-15T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-10<p>This is the 10th release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session we will have
an update for the TripleO
containers effort, thanks
to Jiri Stransky.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=xhTwHfi65p8">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/xhTwHfi65p8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #9 (TripleO - Quickstart)2017-05-05T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/05/05/tripleo-deep-dive-session-9<p>This is the ninth release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session we will have an overall
description for TripleO Quickstart, thanks
to Gabriele Cerami.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=PwHEgHJ9ePU">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/PwHEgHJ9ePU" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #8 (TripleO - Deployed server)2017-05-04T21:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/05/04/tripleo-deep-dive-session-8<p>This is the eighth release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session we will have a full description
about the deployed server feature in TripleO thanks
to James Slagle.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=s8Hm4n9IjYg">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/s8Hm4n9IjYg" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Installing TripleO Quickstart2017-02-24T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/02/24/install-tripleo-quickstart<p>This is a brief recipe about how to
manually install TripleO Quickstart in a remote
32GB RAM box and not dying trying it.</p>
<p>Now <code>instack-virt-setup</code> is deprecated :( :( :(
so the manual process needs to evolve and use OOOQ (TripleO Quickstart).</p>
<p>This post is a brief recipe about how to provision the Hypervisor node
and deploy an end-to-end development environment
based on TripleO-Quickstart.</p>
<p>From the hypervisor run:</p>
<pre><code class="language-bash"># In this dev. env. /var is only 50GB, so I will create
# a symlink to another location with more capacity.
# Deploying a 3+1 overcloud will easily take more than 50GB.
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
# Add default toor user
sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo yum install -y lvm2 lvm2-devel
su - toor
whoami
sudo yum groupinstall "Virtualization Host" -y
sudo yum install git -y
# Disable requiretty otherwise the deployment will fail...
sudo sed -i -e 's/Defaults[ \t]*requiretty/#Defaults requiretty/g' /etc/sudoers
cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
sudo bash -c "cat .ssh/id_rsa.pub >> /root/.ssh/authorized_keys"
sudo bash -c "echo '127.0.0.1 127.0.0.2' >> /etc/hosts"
export VIRTHOST=127.0.0.2
ssh root@$VIRTHOST uname -a
</code></pre>
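<p>Before touching <code>/etc/sudoers</code>, the <code>requiretty</code> substitution can be previewed on a sample line (the sample below is an assumption about the stock sudoers format):</p>

```bash
# Sample stock sudoers line; the real edit targets /etc/sudoers.
sample='Defaults    requiretty'
patched=$(echo "$sample" | sed -e 's/Defaults[ \t]*requiretty/#Defaults requiretty/g')
echo "$patched"   # #Defaults requiretty
```

<p>The line comes back commented out, so sudo no longer demands a TTY and the deployment scripts can run non-interactively.</p>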
<p>Now, we can start deploying TripleO Quickstart by following:</p>
<pre><code class="language-bash"># Source: http://rdo-ci-doc.usersys.redhat.com/docs/tripleo-environments/en/latest/oooq-downstream.html
# Downstream bits for OSP8 ...
# cd
# sudo yum -y install /usr/bin/c_rehash ca-certificates
# sudo update-ca-trust check
# sudo update-ca-trust force-enable
# sudo update-ca-trust enable
# wget cert.pem
# sudo cp cert.pem /etc/pki/tls/certs/
# sudo cp cert.pem /etc/pki/ca-trust/source/anchors/
# sudo c_rehash
# sudo update-ca-trust extract
# git clone https://github.com/openstack/tripleo-quickstart
# cd tripleo-quickstart
# wget http://rhos-release.virt.bos.redhat.com/ci-images/internal-requirements-new.txt
# cd
# chmod u+x ./tripleo-quickstart/quickstart.sh
# bash ./tripleo-quickstart/quickstart.sh --install-deps
# bash ./tripleo-quickstart/quickstart.sh -v --release rhos-8-baseos-undercloud --clean --teardown all --requirements "/home/toor/tripleo-quickstart/internal-requirements-new.txt" $VIRTHOST
</code></pre>
<pre><code class="language-bash">git clone https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
printf "\n\nSee:\n./tripleo-quickstart/quickstart.sh --help for a full list of options\n\n"
bash ./tripleo-quickstart/quickstart.sh --install-deps
export VIRTHOST=127.0.0.2
export CONFIG=~/deploy-config.yaml
cat > $CONFIG << EOF
# undercloud_undercloud_hostname: undercloud.ratata-domain
# control_memory: 8192
# compute_memory: 6120
# undercloud_memory: 10240
# undercloud_vcpu: 4
# undercloud_workers: 3
# default_vcpu: 1
custom_nameserver: '10.16.36.29'
undercloud_undercloud_nameservers: '10.16.36.29'
overcloud_dns_servers: '10.16.36.29'
# node_count: 4
# overcloud_nodes:
# - name: control_0
# flavor: control
# virtualbmc_port: 6230
# - name: control_1
# flavor: control
# virtualbmc_port: 6231
# - name: control_2
# flavor: control
# virtualbmc_port: 6232
# - name: compute_0
# flavor: compute
# virtualbmc_port: 6233
# topology: >-
# --control-scale 3
# --compute-scale 1
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
# -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
run_tempest: false
EOF
# We disable SELinux as it breaks the deployment:
# otherwise you will get permission denied errors
# when running the Ansible playbooks
sudo setenforce 0
bash ./tripleo-quickstart/quickstart.sh \
--clean \
--release master \
--teardown all \
--tags all \
-e @$CONFIG \
$VIRTHOST
</code></pre>
<p>In the hypervisor run the following command to log-in in
the Undercloud:</p>
<pre><code class="language-bash">ssh -F /home/toor/.quickstart/ssh.config.ansible undercloud
# Add the TRIPLEO_ROOT var to stackrc
# to use with tripleo-ci
echo "export TRIPLEO_ROOT=~" >> stackrc
source stackrc
</code></pre>
<p>At this point you should have your development environment deployed correctly.</p>
<p>Clone the tripleo-ci repo:</p>
<pre><code class="language-bash">git clone https://github.com/openstack-infra/tripleo-ci
</code></pre>
<p>And, run the Overcloud pingtest:</p>
<pre><code class="language-bash">~/tripleo-ci/scripts/tripleo.sh --overcloud-pingtest
</code></pre>
<p>Enjoy TripleOing (~˘▾˘)~</p>
<p>Note: I had to execute the deployment command 3/4 times to have
an OK deployment, sometimes it just fails (i.e. timeout getting the images).</p>
<p>Note: From the host, <code>virsh list --all</code> will work only as the stack user.</p>
<p>Note: Each time you run the quickstart.sh command from the hypervisor
the UC and OC will be nuked (<code>--teardown all</code>), you will see tasks like ‘PLAY [Tear down undercloud and overcloud vms] **’.</p>
<p>Note: If you delete the Overcloud i.e. using <code>heat stack-delete overcloud</code> you can re-deploy what you
had by running the dynamically generated overcloud-deploy.sh script in the stack home folder from the UC.</p>
<p>Note: There are several options for TripleO Quickstart besides the basic
virthost deployment, check them here: <code>https://docs.openstack.org/developer/tripleo-quickstart/working-with-extras.html</code></p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/03/17:</strong> Bleh, had to execute several times the deployment command to have it working.. :/ I miss you instack-virt-setup</p>
<p><strong>Updated 2017/03/16:</strong> The --config option seems to be broken, using instead -e @~/deploy-config.yaml.</p>
<p><strong>Updated 2017/03/14:</strong> New workflow added.</p>
<p><strong>Updated 2017/02/27:</strong> Working fine.</p>
<p><strong>Updated 2017/02/23:</strong> Seems to work.</p>
<p><strong>Updated 2017/02/23:</strong> instack-virt-setup is deprecatred :( moving to tripleo-quickstart.</p>
</blockquote>
</div>
OpenStack and services for BigData applications2017-01-26T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/01/26/mad-for-openstack-meetup<p>Yesterday I had the opportunity of presenting, together with Daniel Mellado,
a brief talk about OpenStack and its integration with services that support
Big Data tools (OpenStack Sahara).</p>
<p><img src="/static/meetup_openstack.png" alt="" /></p>
<p>It was a combined talk for two Meetups
<a href="https://www.meetup.com/es-ES/MAD-for-OpenStack/events/237131857/">MAD-for-OpenStack</a>
and
<a href="https://www.meetup.com/es-ES/Data-Science-Madrid/events/236991190/">Data-Science-Madrid</a>.</p>
<p>The presentation is stored in
<a href="https://github.com/ccamacho/openstack-presentations/tree/master/2017-01-25-meetup-openstack101-bigdata">GitHub</a>.</p>
<p>Regrets:</p>
<ul>
<li>We prepared a 1-hour presentation that had to be delivered in 20 minutes.</li>
<li>We weren’t able to access our demo server.</li>
</ul>
TripleO deep dive session #7 (Undercloud - TripleO UI)2017-01-16T16:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/01/16/tripleo-deep-dive-session-7<p>This is the seventh release of the <a href="http://www.tripleo.org/">TripleO</a> “Deep Dive” sessions</p>
<p>In this session Liz Blanchard and Ana Krivokapic will give us some
bits about how to contribute to the <a href="https://github.com/openstack/tripleo-ui">TripleO UI</a> project.
Once checking this session we will have a general overview about the project’s
history, properties, architecture and contributing steps.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=9TseONVfLR8">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/9TseONVfLR8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Here you will be able to see a quick overview about how to install the UI as a development environment.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/1puSvUqTKzw" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>The summarized steps are also available in <a href="http://www.pubstack.com/blog/2017/01/13/installing-tripleo-ui.html">this</a> blog post.</p>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Installing the TripleO UI2017-01-13T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2017/01/13/installing-tripleo-ui<p>This is a brief recipe for using or installing the TripleO UI
on the Undercloud.</p>
<p>First, note that once the Undercloud is installed, the TripleO UI
is already available on port 3000.</p>
<p>Let’s assume you have both root password for your
development environment and the Undercloud node.</p>
<p>The TripleO UI queries the endpoints (e.g. Keystone) directly
from your browser, so traffic for the
192.168.24.0/24 network must be forwarded from your workstation to the
Undercloud node in order to reach all required ports
(6385, 5000, 8004, 8080, 9000, 8989, 3000, 13385, 13000, 13004, 13808, 13989 and 443).</p>
<p>Let’s install sshuttle on your workstation.</p>
<pre><code>sudo yum install -y sshuttle
</code></pre>
<p>Now, let’s get the Undercloud IP and configure SSH with a ProxyCommand.</p>
<pre><code>undercloudIp=`ssh root@labserver "arp -e" | grep brext | grep -v incomplete | awk '{print $1}' | sed 's/\/.*$//'`
cat << EOF >> ~/.ssh/config
Host lab
Hostname labserver
User root
Host uc
Hostname $undercloudIp
User root
ProxyCommand ssh -vvvv -W %h:%p root@lab
EOF
</code></pre>
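<p>The <code>undercloudIp</code> pipeline above can be dry-run against canned <code>arp -e</code> output before pointing it at your hypervisor (the addresses below are made up):</p>

```bash
# Illustrative "arp -e" output; the real data comes from the hypervisor.
arp_sample="Address          HWtype  HWaddress           Flags Mask  Iface
192.168.24.5     ether   52:54:00:aa:bb:cc   C           brext
10.0.0.7                 (incomplete)                    brext"
undercloudIp=$(echo "$arp_sample" | grep brext | grep -v incomplete \
  | awk '{print $1}' | sed 's/\/.*$//')
echo "$undercloudIp"   # 192.168.24.5
```

<p>Incomplete ARP entries are filtered out, so only the resolved address on the <code>brext</code> bridge survives.</p>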
<p>sshuttle will ask you for your hypervisor and Undercloud root
password.</p>
<p>To start forwarding the traffic execute:</p>
<pre><code>sshuttle -e "ssh -vvv" -r root@uc -vvvv 192.168.24.0/24
</code></pre>
<p>Once you have done this, open from your browser http://192.168.24.1:3000/
and the TripleO UI should be shown correctly.</p>
<hr />
<p>It’s probable that you receive an error like: <strong>Connection to Keystone is not available</strong>.</p>
<p>This is because you are trying to access the Keystone endpoint from your
workstation and it fails as the certificate is self-signed.
In order to fix this, open the developer view in your browser
and check the endpoint you are using to access keystone.
For example, take https://192.168.24.2/keystone/v2.0/tokens,
open this URL in your browser, and accept the certificate. If you
do this, the Keystone error should go away.</p>
<hr />
<p>If you need a TripleO UI development environment, follow these steps:</p>
<p>The first step will be to install the TripleO UI and
all the dependencies.</p>
<pre><code class="language-bash"> cd
sudo yum install -y nodejs npm tmux
git clone https://github.com/openstack/tripleo-ui.git
cd tripleo-ui
npm install
</code></pre>
<p>Now, we need to update all the TripleO UI config files</p>
<pre><code class="language-bash"> cd
cp ~/tripleo-ui/config/tripleo_ui_config.js.sample ~/tripleo-ui/config/tripleo_ui_config.js
echo "Changing the default IP"
export ENDPOINT_ADDR=$(cat stackrc | grep OS_AUTH_URL= | awk -F':' '{print $2}'| tr -d /)
sed -i "s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/$ENDPOINT_ADDR/g" ~/tripleo-ui/config/tripleo_ui_config.js
echo "Removing comments for the keystone URI"
sed -i '/^ \/\/ '\''keystone'\''\:/s/^ \/\///' ~/tripleo-ui/config/tripleo_ui_config.js
echo "Removing comments for the zaqar_default_queue"
sed -i '/^ \/\/ '\''zaqar_default_queue'\''\:/s/^ \/\///' ~/tripleo-ui/config/tripleo_ui_config.js
# Uncomment all the parameters
# sed -i '/^ \/\/ '\''.*'\''\:/s/^ \/\///' ~/tripleo-ui/config/tripleo_ui_config.js
echo "Changing listening port for the dev server, 3000 already used"
sed -i '/port: 3000/s/3000/33000/' ~/tripleo-ui/webpack.dev.js
</code></pre>
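<p>The IP-rewriting <code>sed</code> above can be previewed on a sample config line before editing the real file (the line and the replacement address are illustrative):</p>

```bash
ENDPOINT_ADDR=10.0.0.5   # illustrative; the real value is parsed from stackrc
line="  'keystone': 'https://192.168.24.2:5000/v2.0',"
rewritten=$(echo "$line" | sed "s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/$ENDPOINT_ADDR/g")
echo "$rewritten"   # → '  'keystone': 'https://10.0.0.5:5000/v2.0','
```

<p>Note that the pattern only matches full dotted quads, so version fragments like <code>v2.0</code> are left untouched.</p>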
<p>In the following step we will use tmux to persist the service running
for debugging purposes.</p>
<pre><code class="language-bash"> cd
tmux new -s tripleo-ui
cd ~/tripleo-ui/
npm start
</code></pre>
<p>At this stage you should have your node server up and running (port 33000).</p>
<p>If you followed the first step to see the default TripleO UI installation,
log in to the TripleO UI at: http://192.168.24.1:33000/</p>
<p>Happy TripleOing!</p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/01/13:</strong> First version.</p>
<p><strong>Updated 2017/01/14:</strong> Add default TripleO UI info. Still getting 'Connection to Keystone is not available'
the config params are correct, checking it...</p>
<p><strong>Updated 2017/01/17:</strong> Forwarded all the required ports using sshuttle.</p>
</blockquote>
</div>
Printed TripleO cheatsheets for FOSDEM/DevConf (feedback needed)2016-12-16T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/12/16/printing-tripleo-cheat-sheet<p>We are preparing some cheatsheets for people jumping into TripleO.</p>
<p>There is an early WIP version of a few cheatsheets that we want to share:</p>
<blockquote>
<p>TripleO manual installation (Just a copy/paste step-wise process to install TripleO).</p>
<p>Deployments - Debugging tips (Relevant commands to know what’s happening with the deployment).</p>
<p>Deployments - CI (URL’s and resources to check our CI status).</p>
<p>OOOQ installation (Also a step-wise recipe to install OOOQ, does not exist yet).</p>
</blockquote>
<p>We already have some drafts available in <a href="https://github.com/ccamacho/tripleo-cheatsheet">GitHub</a>.</p>
<p>So, we would like to get some feedback from the community and make a
stable version of the cheatsheets in the next week.</p>
<p>Feedback on adding/removing content, and general reviews of all of
them, is welcome.</p>
<p>Thanks!!!!</p>
Testing composable upgrades2016-11-28T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/11/28/testing-composable-upgrades<p>This is a brief recipe about how I’m testing
composable upgrades O->P.</p>
<p>Based on the original shardy’s notes
from <a href="http://paste.openstack.org/show/590436/">this</a> link.</p>
<p>The following steps are followed to upgrade your Overcloud from Ocata to latest master (Pike).</p>
<ul>
<li>
<p>Deploy latest master TripleO following <a href="http://www.pubstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html">this</a> post.</p>
</li>
<li>
<p>Remove the current Overcloud deployment.</p>
</li>
</ul>
<pre><code> source stackrc
heat stack-delete overcloud
</code></pre>
<ul>
<li>Remove the Overcloud images and create new ones (for the Overcloud).</li>
</ul>
<pre><code> cd
openstack image list
openstack image delete <image_ID> #Delete all the Overcloud images overcloud-full*
rm -rf /home/stack/overcloud-full.*
export STABLE_RELEASE=ocata
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="https://trunk.rdoproject.org/centos7-ocata/current/"
export DELOREAN_REPO_FILE="delorean.repo"
/home/stack/tripleo-ci/scripts/tripleo.sh --overcloud-images
# Or reuse images
# wget https://images.rdoproject.org/ocata/delorean/current-tripleo/stable/overcloud-full.tar
# tar -xvf overcloud-full.tar
# openstack overcloud image upload --update-existing
</code></pre>
<ul>
<li>Download Ocata tripleo-heat-templates.</li>
</ul>
<pre><code> cd
git clone -b stable/ocata https://github.com/openstack/tripleo-heat-templates tht-ocata
</code></pre>
<ul>
<li>Configure the DNS (needed when upgrading the Overcloud).</li>
</ul>
<pre><code> neutron subnet-update `neutron subnet-list | grep ctlplane-subnet | awk '{print $2}'` --dns-nameserver 192.168.122.1
</code></pre>
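<p>The inline subnet-ID lookup can be checked against sample <code>neutron subnet-list</code> output (the UUID below is fabricated):</p>

```bash
# Canned "neutron subnet-list" table; the real command queries the Undercloud.
table="| id                                   | name            | cidr            |
| 123e4567-e89b-12d3-a456-426614174000 | ctlplane-subnet | 192.168.24.0/24 |"
subnet_id=$(echo "$table" | grep ctlplane-subnet | awk '{print $2}')
echo "$subnet_id"   # 123e4567-e89b-12d3-a456-426614174000
```

<p>With the table's leading <code>|</code> counted as field one, <code>awk '{print $2}'</code> returns the subnet UUID for the <code>ctlplane-subnet</code> row.</p>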
<ul>
<li>Deploy an Ocata Overcloud.</li>
</ul>
<pre><code> openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tht-ocata/ \
-e /home/stack/tht-ocata/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tht-ocata/environments/puppet-pacemaker.yaml
</code></pre>
<ul>
<li>Install prerequisites on the nodes. If no DNS is configured this will fail, so make sure the nodes have Internet access.</li>
</ul>
<pre><code>cat > upgrade_repos.yaml << EOF
parameter_defaults:
  UpgradeInitCommand: |
    set -e
    #Master repositories
    sudo curl -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7-master/current-passed-ci/delorean.repo
    sudo curl -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7/delorean-deps.repo
    export HOME=/root
    cd /root/
    if [ ! -d tripleo-ci ]; then
      git clone https://github.com/openstack-infra/tripleo-ci.git
    else
      pushd tripleo-ci
      git checkout master
      git pull
      popd
    fi
    if [ ! -d tripleo-heat-templates ]; then
      git clone https://github.com/openstack/tripleo-heat-templates.git
    else
      pushd tripleo-heat-templates
      git checkout master
      git pull
      popd
    fi
    ./tripleo-ci/scripts/tripleo.sh --repo-setup
    sed -i "s/includepkgs=/includepkgs=python-heat-agent*,/" /etc/yum.repos.d/delorean-current.repo
    #yum -y install python-heat-agent-ansible
    yum install -y python-heat-agent-*
    rm -f /usr/libexec/os-apply-config/templates/etc/puppet/hiera.yaml
    rm -f /usr/libexec/os-refresh-config/configure.d/40-hiera-datafiles
    rm -f /etc/puppet/hieradata/*.yaml
    yum remove -y python-UcsSdk openstack-neutron-bigswitch-agent python-networking-bigswitch openstack-neutron-bigswitch-lldp python-networking-odl
    crudini --set /etc/ansible/ansible.cfg DEFAULT library /usr/share/ansible-modules/
EOF
</code></pre>
</code></pre>
<ul>
<li>Download master tripleo-heat-templates.</li>
</ul>
<pre><code> cd
git clone https://github.com/openstack/tripleo-heat-templates tht-master
</code></pre>
<ul>
<li>Upgrade the Overcloud to master.</li>
</ul>
<pre><code> cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tht-master/ \
-e /home/stack/tht-master/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tht-master/environments/puppet-pacemaker.yaml \
-e /home/stack/tht-master/environments/major-upgrade-composable-steps.yaml \
-e upgrade_repos.yaml
</code></pre>
<p>Note: if upgrading to a containerized Overcloud (Pike and beyond) do:</p>
<pre><code>cat > docker_registry.yaml << EOF
parameter_defaults:
  DockerNamespace: 192.168.24.1:8787/tripleoupstream
  DockerNamespaceIsRegistry: true
EOF
# This will take some time...
openstack overcloud container image upload --config-file /usr/share/openstack-tripleo-common/container-images/overcloud_containers.yaml
openstack overcloud container image prepare \
--namespace tripleoupstream \
--tag latest \
--env-file docker-centos-tripleoupstream.yaml
cd
source ~/stackrc
export THT=/home/stack/tht-master
openstack overcloud deploy --templates $THT \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
-e $THT/overcloud-resource-registry-puppet.yaml \
-e $THT/environments/puppet-pacemaker.yaml \
-e $THT/environments/major-upgrade-composable-steps.yaml \
-e upgrade_repos.yaml \
-e $THT/environments/docker.yaml \
-e $THT/environments/docker-ha.yaml \
-e $THT/environments/major-upgrade-composable-steps-docker.yaml \
-e docker-centos-tripleoupstream.yaml \
-e docker_registry.yaml
</code></pre>
<ul>
<li>Run the converge step (not tested on the containerized upgrade).</li>
</ul>
<pre><code> cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tht-master/ \
-e /home/stack/tht-master/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tht-master/environments/puppet-pacemaker.yaml \
-e /home/stack/tht-master/environments/major-upgrade-converge.yaml
</code></pre>
<p>If the last steps finish successfully, you have just upgraded your Overcloud from Ocata to Pike (latest master).</p>
<p>For more resources related to TripleO deployments, check out the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA">TripleO YouTube channel</a>.</p>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/01/28:</strong> Working fine.</p>
</blockquote>
</div>
TripleO cheatsheet2016-11-26T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/11/26/tripleo-cheatsheet<p>This is a cheatsheet of some of the commands
I regularly use to test, develop, or debug
TripleO deployments.</p>
<p>Deployments</p>
<ul>
<li>
<code class="highlighter-rouge">
swift download overcloud
</code><br />
<p class="tdesc">
Download the <code class="highlighter-rouge">overcloud</code> swift container files in the current folder (With the rendered j2 templates).
</p>
</li>
<li>
<code class="highlighter-rouge">
heat resource-list --nested-depth 5 overcloud | grep FAILED
</code><br />
<p class="tdesc">
Show resources, filtering to get those who have failed.
</p>
</li>
<li>
<code class="highlighter-rouge">
heat deployment-show <deployment_ID>
</code><br />
<p class="tdesc">
Get the deployment details for <deployment_ID>.
</p>
</li>
<li>
<code class="highlighter-rouge">
openstack image list
</code><br />
<p class="tdesc">
List images.
</p>
</li>
<li>
<code class="highlighter-rouge">
openstack image delete <image_ID>
</code><br />
<p class="tdesc">
Delete <image_ID>.
</p>
</li>
<li>
<code class="highlighter-rouge">
wget http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/<release>/delorean/overcloud-full.tar
</code><br />
<p class="tdesc">
Download <release> overcloud images tar file [liberty|mitaka|newton|...]
</p>
</li>
<li>
<code class="highlighter-rouge">
openstack overcloud image upload --update-existing
</code><br />
<p class="tdesc">
Once the images are downloaded, this command will upload them to glance.
</p>
</li>
</ul>
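<p>The image commands above can be combined into a single cleanup pass. A minimal sketch, using sample image names that mirror the defaults created by <code>openstack overcloud image upload</code> (on a real undercloud, capture the output of <code>openstack image list -f value -c Name</code> instead of the hard-coded list):</p>
<pre><code class="language-bash">#Sample output of "openstack image list -f value -c Name" on an undercloud
#(sample data; on a real deployment capture the actual command's output)
images='bm-deploy-kernel
bm-deploy-ramdisk
overcloud-full
overcloud-full-initrd
overcloud-full-vmlinuz'

#Select the overcloud-full* images to remove before re-uploading
targets=$(printf '%s\n' "$images" | grep '^overcloud-full')
printf '%s\n' "$targets"
#each selected name would then be fed to: openstack image delete <image_name>
</code></pre>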
<p>Debugging CI</p>
<ul>
<li>
<code class="highlighter-rouge">
http://status.openstack.org/zuul/
</code><br />
<p class="tdesc">
Check submissions CI status.
</p>
</li>
<li>
<code class="highlighter-rouge">
wget -e robots=off -r --no-parent <patch_ID>
</code><br />
<p class="tdesc">
Download all logs from <patch_ID>.
</p>
</li>
<li>
<code class="highlighter-rouge">
console.html
</code>
&
<code class="highlighter-rouge">
logs/postci.txt.gz
</code><br />
<p class="tdesc">
Relevant log files when debugging a TripleO Gerrit job.
</p>
</li>
</ul>
<blockquote>
<p>If you think there are more
useful commands to add to the list
just add a <a href="https://github.com/pubstack/pubstack.github.io/issues/22">comment</a>!</p>
</blockquote>
<p><strong>Happy TripleOing!</strong></p>
Enabling nested KVM support for an instack-virt-setup deployment.2016-11-21T12:30:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/11/21/enabling-nested-kvm-on-tripleo-host<p>The following bash snippet will enable
nested KVM support in the host when deploying TripleO
using instack-virt-setup.</p>
<p>This works on both AMD and Intel architectures.</p>
<pre><code class="language-bash">#!/bin/bash
echo "Checking if nested KVM is enabled in the host."
#Both Intel and AMD hosts report "x86_64" as the architecture, so the
#CPU vendor string is checked instead to pick the right KVM module.
if grep -q GenuineIntel /proc/cpuinfo; then
  ARCH_BRAND=intel
  KVM_STATUS_FILE=/sys/module/kvm_intel/parameters/nested
  ENABLE_NESTED_KVM=Y
else
  ARCH_BRAND=amd
  KVM_STATUS_FILE=/sys/module/kvm_amd/parameters/nested
  ENABLE_NESTED_KVM=1
fi
if [[ -f $KVM_STATUS_FILE ]]; then
  KVM_CURRENT_STATUS=$(head -n 1 $KVM_STATUS_FILE)
  #String comparison with != (-ne is for integers and misbehaves on "Y"/"N")
  if [[ "${KVM_CURRENT_STATUS^^}" != "${ENABLE_NESTED_KVM^^}" ]]; then
    echo "This host does not have nested KVM enabled, enabling."
    sudo rmmod kvm-$ARCH_BRAND
    sudo sh -c "echo 'options kvm-$ARCH_BRAND nested=$ENABLE_NESTED_KVM' >> /etc/modprobe.d/dist.conf"
    sudo modprobe kvm-$ARCH_BRAND
  else
    echo "Nested KVM support is already enabled."
  fi
else
  echo "$KVM_STATUS_FILE does not exist."
fi
</code></pre>
<p>By default, nested virtualization with KVM is disabled on the
host, so in order to run the overcloud-pingtest correctly we have two
options: either run the previous snippet on the host,
or, when deploying the Compute node in a virtual machine,
add <code>--libvirt-type qemu</code> to the deployment command.
Otherwise, launching instances on the deployed overcloud will fail.</p>
<p>Here is an example of the deployment command, pinning
libvirt to qemu.</p>
<pre><code class="language-bash">cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
</code></pre>
<p>Have a happy TripleO deployment!</p>
Ocata OpenStack summit 2016 - Barcelona2016-11-21T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/11/21/openstack-summit-2016-bcn<p>A few weeks ago I had the opportunity to attend the Barcelona
OpenStack summit ‘Ocata design session’, and this post collects
some overall information about it. To do so,
I’m going through my paper notes to highlight the aspects that IMHO are relevant.</p>
<p><img src="/static/openstack-summit-2016-bcn.jpeg" alt="" /></p>
<hr />
<blockquote>
<p>Sessions list by date.</p>
</blockquote>
<h3 id="tuesday---oct-25th">Tuesday - Oct. 25th</h3>
<ul>
<li>RDO Booth: Carlos Camacho TripleO composable roles demo (12:15pm-12:55pm)</li>
<li>What the Heck is OoO: Owls All the Way Down (5:55pm – 6:35)</li>
</ul>
<h3 id="wednesday---oct-26th">Wednesday - Oct. 26th</h3>
<ul>
<li>Anomaly Detection in Contrail Networking (1:15pm-1:29pm)</li>
<li>Freezer: Plugin Architecture and Deduplication (3:05pm-3:45pm)</li>
<li>TripleO: Containers - Current Status and Roadmap (3:55pm-4:35pm)</li>
<li>TripleO: Work Session - Growing the team (5:05pm-5:45pm)</li>
<li>TripleO: Work Session - CI - current status and roadmap (5:55pm-6:35pm)</li>
</ul>
<h3 id="thursday---oct-27th">Thursday - Oct. 27th</h3>
<ul>
<li>Zuul v3: OpenStack and Ansible Native CI/CD (11:00am-11:40am)</li>
<li>The Latest in the Container World and the Role of Container in OpenStack (11:50am-12:30pm)</li>
<li>TripleO: Upgrades - current status and roadmap (1:50pm-2:30pm)</li>
<li>Mistral: Mistral and StackStorm (3:30pm-4:10pm)</li>
<li>Nokia: TOSCA & Mistral: Orchestrating End-to-End Telco Grade NFV (5:30pm-6:10pm)</li>
</ul>
<h3 id="friday---oct-28th">Friday - Oct. 28th</h3>
<ul>
<li>TripleO: Work Session - Composable Undercloud deployment with Heat (9:00am-9:20am)</li>
<li>TripleO: Work Session - GUI, CLI, Validations current status, roadmap, requirements (9:20am-9:40am)</li>
<li>TripleO: Work Session - Multiple topics - Blueprints, specs, tools and Ocata summary. (9:50am-10:30am)</li>
</ul>
<hr />
<p>Beyond the analysis of “what I did there in a week”, I want
to state a few facts that are relevant to me.</p>
<blockquote>
<p>Why is it important to attend a design session (my case: upstream TripleO developer)?</p>
</blockquote>
<p>When working remotely on OpenStack projects, for example
on a project as complex as TripleO, it is really hard to know what other
people are doing. Design sessions force engineers to notice
their peers’ work on different OpenStack projects, or even on the same project ;)</p>
<p>This will give you ideas for future features, new services to integrate,
and issues that you might face in the future, among many other things. In TripleO’s
specific case, if you are interested in working on a particular service, you can
join that service’s sessions to learn more about it.</p>
<blockquote>
<p>Where is the value for companies when sending engineers to design sessions?</p>
</blockquote>
<p>There might be several answers to this question, but I believe the overall
answer is that sending engineers to the design sessions allows them
to be aligned with company goals, especially when several companies are
involved in the same project. It also allows team members to get to know each other;
this may be a soft benefit, but for me it is as important as being aligned on
future features or architectural agreements.</p>
<blockquote>
<p>Is it really mandatory to send people to design sessions?</p>
</blockquote>
<p>I think this is not mandatory at all, but a relevant factor
is whether all the knowledge and value generated in those sessions can
be delivered to and processed by the rest of the team members.</p>
<blockquote>
<p>Do attendees gain value from these design sessions?</p>
</blockquote>
<p>Of course they will gain a lot of value:</p>
<ul>
<li>You might have a wrong impression of people from IRC; meeting them in person can change this dramatically, ‘or not’.</li>
<li>Get to know and engage with your team members and other peers.</li>
<li>Better alignment with your project goals.</li>
<li>Discuss blueprints and have a better understanding about the features life-cycle and roadmap.</li>
<li>Know what other people are doing.</li>
<li>Improve your overall knowledge of other projects (this time is meant for doing that, “not in your free time if you like it, like me”).</li>
</ul>
<blockquote>
<p>If we are in a design session, are we in “working mode” or just in “learning mode”?</p>
</blockquote>
<p>Hardest question by far; I had the nagging feeling that I was not working enough.
There were a lot of distractions if you wanted to make time for coding or
reviewing submissions. But that’s the thing: I believe it is a good time to align, and after
the summit you will always have time for coding :)</p>
<blockquote>
<p>What about some business alignment?</p>
</blockquote>
<p>I believe this is also important for understanding how OpenStack
evolves over time, the release announcements, and how the summits themselves are evolving.</p>
Querying haproxy data using socat from CLI2016-11-04T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/11/04/querying-haproxy-data-using-socat-from-cli<p>Currently, most users have no way to check the haproxy status in a TripleO virtual deployment (via web browser) unless they have previously created tunnels for that purpose.</p>
<p>So let’s check some haproxy data from our controller.</p>
<p>Check the controller IP:</p>
<pre><code class="language-bash">nova list
</code></pre>
<p>Connect to the controller:</p>
<pre><code>ssh heat-admin@192.0.2.22
</code></pre>
<p>Now, we need to have installed socat:</p>
<pre><code>sudo yum install -y socat
</code></pre>
<p>By default, haproxy is already configured to dump stats data to <code>/var/run/haproxy.sock</code>;
now let’s query haproxy to get some data from it:</p>
<ul>
<li>Show details like haproxy version, PID, current connections, session rates, tasks, among others.</li>
</ul>
<pre><code>echo "show info" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
<ul>
<li>Dump the stats for all frontends and backends as CSV.</li>
</ul>
<pre><code>echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
<ul>
<li>Display information about errors if there are any.</li>
</ul>
<pre><code>echo "show errors" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
<ul>
<li>Display open sessions.</li>
</ul>
<pre><code>echo "show sess" | socat unix-connect:/var/run/haproxy.sock stdio
</code></pre>
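<p>Since <code>show stat</code> emits plain CSV, the output can be trimmed down with standard tools. Below is a minimal sketch using sample data in place of the socket output (in the real CSV the status column is field 18, not 4); on a controller you would pipe the <code>socat</code> command above into the <code>awk</code> filter instead:</p>
<pre><code class="language-bash">#Trimmed sample of the "show stat" CSV (sample data, not real output);
#on a controller this would come from:
#  echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio
stat_csv='# pxname,svname,scur,status
horizon,FRONTEND,0,OPEN
horizon,controller-0,0,UP
keystone_public,FRONTEND,2,OPEN'

#Keep only the proxy name, server name and status columns
printf '%s\n' "$stat_csv" | awk -F',' 'NR>1 {print $1, $2, $4}'
</code></pre>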
Deployment tips for puppet-tripleo changes2016-09-29T09:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/09/29/tripleo-debugging-tips<p>This post will describe different ways of debugging puppet-tripleo changes.</p>
<h2 id="deploying-puppet-tripleo-using-gerrit-patches-or-source-code-repositories">Deploying puppet-tripleo using gerrit patches or source code repositories</h2>
<p>In some cases, dependencies should be merged first in order to test newer
patches when adding new features to THT. With the following procedure, the user
will be able to create the overcloud images using WorkInProgress patches from
gerrit code review without having them merged (for CI testing purposes).</p>
<p>If you are using third-party repos included in the overcloud image, e.g. the
puppet-tripleo repository, your changes will not be available by default in the
overcloud until you write them into the overcloud image (by default:
overcloud-full.qcow2).</p>
<p>In order to make <del>quick</del> changes to the overcloud image for testing purposes, you
can:</p>
<p>Export the paths to your submission by following an
<a href="http://tripleo.org/developer/in_progress_review.html">In-Progress review</a>:</p>
<pre><code class="language-bash"> export DIB_INSTALLTYPE_puppet_tripleo=source
export DIB_REPOLOCATION_puppet_tripleo=https://review.openstack.org/openstack/puppet-tripleo
export DIB_REPOREF_puppet_tripleo=refs/changes/25/310725/14
</code></pre>
<p>In order to avoid noise on IRC, it is possible to clone puppet-tripleo and
apply the changes from your github account. In some cases this is particularly
useful as there is no need to update the patchset number.</p>
<pre><code class="language-bash"> export DIB_INSTALLTYPE_puppet_tripleo=source
export DIB_REPOLOCATION_puppet_tripleo=https://github.com/<usergoeshere>/puppet-tripleo
</code></pre>
<p>Remove previously created images from glance and from the user home folder by:</p>
<pre><code class="language-bash"> rm -rf /home/stack/overcloud-full.*
glance image-delete overcloud-full
glance image-delete overcloud-full-initrd
glance image-delete overcloud-full-vmlinuz
</code></pre>
<p>After this step the images can be recreated by executing:</p>
<pre><code class="language-bash"> ./tripleo-ci/scripts/tripleo.sh --overcloud-images
</code></pre>
<h2 id="debugging-puppet-tripleo-from-overcloud-images">Debugging puppet-tripleo from overcloud images</h2>
<p>For debugging purposes, it is possible to mount the overcloud .qcow2 file:</p>
<pre><code class="language-bash"> #Install the libguest tool:
sudo yum install -y libguestfs-tools
#Create a temp folder to mount the overcloud-full image:
mkdir /tmp/overcloud-full
#Mount the image:
guestmount -a overcloud-full.qcow2 -i --rw /tmp/overcloud-full
#Check and validate all the changes to your overcloud image, go to /tmp/overcloud-full:
# For example, in this step you can go to /opt/puppet-modules/tripleo,
#Umount the image
sudo umount /tmp/overcloud-full
</code></pre>
<p>From the mounted image file it is also possible to run, for testing purposes,
the puppet manifests by using <code>puppet apply</code> and including your manifests:</p>
<pre><code class="language-bash"> sudo puppet apply -v --debug --modulepath=/tmp/overcloud-full/opt/stack/puppet-modules -e "include ::tripleo::services::time::ntp"
</code></pre>
Debugging submissions errors in TripleO CI2016-08-25T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/08/25/debugging-submissions-errors-in-tripleo-ci<p>Landing upstream submissions might be hard if you are
not passing all the CI jobs that try to check that your
code actually works.</p>
<p>Let’s assume that CI is working properly, without any kind of
infra issue and without any error introduced by mistake by
other submissions. In that case, we might end up with something
like:</p>
<p><img src="/static/gerrit_failed_jobs.png" alt="" /></p>
<p>The first thing we should do is double-check the
status of all the other jobs on the TripleO
CI status page, which is available at the following site:</p>
<p><a href="http://tripleo.org/cistatus.html">http://tripleo.org/cistatus.html</a></p>
<p>Also, we can get the jobs status by checking the Zuul dashboard.</p>
<p><a href="http://status.openstack.org/zuul/">http://status.openstack.org/zuul/</a></p>
<p>Or checking the TripleO test cloud nodepool.</p>
<p><a href="http://grafana.openstack.org/dashboard/db/nodepool-tripleo-test-cloud">http://grafana.openstack.org/dashboard/db/nodepool-tripleo-test-cloud</a></p>
<p>After checking that there are jobs passing CI, let’s check why our job
is not passing.</p>
<p>For each job the folder structure should be similar to:</p>
<pre><code>[TXT] console.html
[DIR] logs/
[DIR] overcloud-cephstorage-0/
[DIR] overcloud-controller-0/
[DIR] overcloud-novacompute-0/
[ ] postci.txt.gz
[DIR] undercloud/
</code></pre>
<p>It’s possible to check the deployment status in the <code>console.html</code> file;
there you will see the result of all the deployment steps executed in
order to pass the CI job.</p>
<p>In case of, e.g., a failed deployment, you can check <code>postci.txt.gz</code>
to get the actual standard error from the deployment.</p>
<p>Also, the folders <code>overcloud-cephstorage-0</code>, <code>overcloud-controller-0</code> and
<code>overcloud-novacompute-0</code> contain the content of <code>/var</code>, which
holds all the service logs.</p>
<p>Another useful tip is to fetch the whole job logs folder with wget and
grep it for the string <code>Error</code>.</p>
<pre><code class="language-bash">#Get the CI job folder i.e. using the following URL.
wget -e robots=off -r --no-parent http://logs.openstack.org/00/000000/0/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/xxxxxx/
#Parse:
grep -iR "Error: " *
</code></pre>
<p>You will probably see there something pointing out an error, and hopefully
will give you clues about the next steps to fix them and land your submissions
as soon as possible.</p>
BAND-AID for OOM issues with TripleO manual deployments2016-08-23T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/08/23/oom-swap-fix-in-tripleo<p>This post will explain how to fix OOM issues when using TripleO.</p>
<p><img src="/static/bandaid.jpg" alt="" /></p>
<p>If you run <code>free -m</code> on your Undercloud or Overcloud nodes and
get some output like:</p>
<pre><code>[asdf@fdsa]$ free -m
total used free shared buff/cache available
Mem: 7668 5555 219 1065 1893 663
</code></pre>
<p>If, as in the example, there is no line reporting
the swap memory size or usage, you are probably not using swap
in your TripleO deployments. To enable it, just
follow these two steps.</p>
<p>First, in the Undercloud: when deploying stacks you might find
that heat-engine (4 workers) takes a lot of RAM, and for such
usage peaks it can be useful to have a
swap file. In order to have this swap file created and used by the OS,
execute the following instructions in the Undercloud:</p>
<pre><code class="language-bash">#Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on start
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
</code></pre>
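<p>As a sanity check on the <code>dd</code> parameters above, the swap file size is bs × count bytes:</p>
<pre><code class="language-bash">#bs=1024 and count=4194304 allocate bs * count bytes:
bytes=$((1024 * 4194304))
echo "$bytes bytes"  #4294967296 bytes, i.e. 4 GiB
echo "$((bytes / 1024 / 1024 / 1024)) GiB"
</code></pre>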
<p>Also, when deploying the Overcloud nodes, the controllers might face
some RAM usage peaks; in that case, create a swap file on each
Overcloud node by using the already existing “extraconfig swap”
template.</p>
<p>To achieve this second part, we just need to use the environment
file that loads the swap template into the resource
<a href="https://review.openstack.org/#/c/418273/">registry</a>
when deploying the overcloud.</p>
<p>Now, deploy your Overcloud as usual i.e.:</p>
<pre><code class="language-bash">cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/enable-swap.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
</code></pre>
<p>Bye bye OOM’s!!!!</p>
TripleO deep dive session #6 (Overcloud - Physical network)2016-08-15T12:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/08/15/tripleo-deep-dive-session-6<p>This is the sixth video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>In this session Dan Prince will dig into the physical overcloud networks.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=zYNq2uT9pfM">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/zYNq2uT9pfM" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #5 (Undercloud - Under the hood)2016-08-05T20:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/08/05/tripleo-deep-dive-session-5<p>This is the fifth video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>In this session James Slagle and Steven Hardy will dig into
some underlying aspects related to the TripleO Undercloud.</p>
<p>This video session aims to cover the following sections:</p>
<ul>
<li>What is under the hood of a TripleO undercloud deployment.</li>
<li>Description of the undercloud components.</li>
<li>Show the undercloud components interaction.</li>
<li>Undercloud installing process.</li>
<li>Undercloud customization.</li>
<li>How to apply and test submissions in instack-undercloud.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=h32z6Nq8Byg">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/h32z6Nq8Byg" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #4 (Puppet modules)2016-08-01T10:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/08/01/tripleo-deep-dive-session-4<p>This is the fourth video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>This session will cover a series of basic Puppet topics related to
TripleO deployments.</p>
<p>This video session aims to cover the following sections:</p>
<ul>
<li>Introduction about Puppet OpenStack modules.</li>
<li>Services deployment using Puppet profiles.</li>
<li>Deployment composability with Heat.</li>
<li>Bring your own service to TripleO.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=-b4cdfzvFDY">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/-b4cdfzvFDY" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
Testing instack-undercloud submissions locally2016-07-26T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/26/testing-tripleo-undercloud-gerrit-submission<p>This post is to describe how to run/test gerrit submissions related to instack-undercloud locally.</p>
<p>For this example I’m going to use this submission: https://review.openstack.org/#/c/347389/</p>
<p>The following steps allow testing instack-undercloud submissions
in a working environment.</p>
<pre><code class="language-bash"> ./tripleo-ci/scripts/tripleo.sh --delorean-setup
./tripleo-ci/scripts/tripleo.sh --delorean-build openstack/instack-undercloud
cd tripleo/instack-undercloud/
#The submission to be tested
git review -d 347389
cd
./tripleo-ci/scripts/tripleo.sh --delorean-build openstack/instack-undercloud
rpm -qa | grep instack-undercloud
sudo rpm -e --nodeps <old_installed_instack-undercloud>
find tripleo/ -name "*rpm"
sudo rpm -iv --replacepkgs --force <located package>
#Here we need to check that the changes were actually applied.
#What I usually do is search for the updated files using locate
#and manually check that the changes are OK.
sudo rm -rf /root/.cache/image-create/source-repositories/*
sudo rm -rf /opt/stack/puppet-modules
</code></pre>
<p>Now, in case that a puppet-tripleo change is needed, you can add the env. vars before
re-installing the undercloud.</p>
<pre><code class="language-bash"> export DIB_INSTALLTYPE_puppet_tripleo=source
export DIB_REPOLOCATION_puppet_tripleo=https://review.openstack.org/openstack/puppet-tripleo
export DIB_REPOREF_puppet_tripleo=refs/changes/XX/XXXXX/X
</code></pre>
<p>Now, we just need to run the installer.</p>
<pre><code class="language-bash"> ./tripleo-ci/scripts/tripleo.sh --undercloud
</code></pre>
<p>Once this process completes, the output should be something similar to:</p>
<pre><code class="language-text">
#################
tripleo.sh -- Undercloud install - DONE.
#################
</code></pre>
TripleO deep dive session #3 (Overcloud deployment debugging)2016-07-22T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/22/tripleo-deep-dive-session-3<p>This is the third video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>This session is related to how to troubleshoot a
failed THT deployment.</p>
<p>This video session aims to cover the following topics:</p>
<ul>
<li>Debug a TripleO failed overcloud deployment.</li>
<li>Debugging in real time the deployed resources.</li>
<li>Basic Openstack commands to see the deployment status.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=fspnjD-1DNI">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/fspnjD-1DNI" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #2 (TripleO Heat Templates)2016-07-18T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/18/tripleo-deep-dive-session-2<p>This is the second video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>This session is related to a THT overview
for all users who want to dig into the
project.</p>
<p>This video session aims to cover the following topics:</p>
<ul>
<li>A THT basic introduction overview.</li>
<li>A Template model used.</li>
<li>A description of the new composable services approach.</li>
<li>A code overview over the related code repositories.</li>
<li>A cloud deployment demo session.</li>
<li>A demo session with a deployment in live referring to debugging hints.</li>
</ul>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=gX5AKSqRCiU">session</a>
content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/gX5AKSqRCiU" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
TripleO deep dive session #1 (Quickstart deployment)2016-07-11T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/11/tripleo-deep-dive-session-1<p>This is the first video from a series of “Deep Dive” sessions
related to <a href="http://www.tripleo.org/">TripleO</a> deployments.</p>
<p>The first session is related to the TripleO deployment using
Quickstart.</p>
<p>Quickstart comes from <a href="http://www.rdoproject.org/">RDO</a>; it reduces the complexity of getting
a TripleO environment up quickly, mostly for users without a strong
and deep knowledge of TripleO configuration, and it uses Ansible roles
to automate all the different configuration tasks.</p>
<p>So please, check the full <a href="https://www.youtube.com/watch?v=E1d_RmysnA8">session</a> content on the <a href="https://www.youtube.com/channel/UCNGDxZGwUELpgaBoLvABsTA/">TripleO YouTube channel</a>.</p>
<div class="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/E1d_RmysnA8" frameborder="0" allowfullscreen=""></iframe>
</div>
<p>Last but not least, James Slagle (slagle) has posted some comments about
how to apply new changes to the puppet modules when deploying the overcloud,
as re-creating them is a time-consuming and cumbersome process.</p>
<p>Using the upload-puppet-modules script we can update the puppet
modules before executing the overcloud deployment.</p>
<pre><code class="language-bash"># From the undercloud
mkdir ~/puppet-modules
cd ~/puppet-modules
git clone https://git.openstack.org/openstack/puppet-tripleo tripleo
# Edit as needed under the tripleo folder
cd
git clone https://git.openstack.org/openstack/tripleo-common
# Use an absolute path so the script is found from any directory
export PATH="$PATH:$HOME/tripleo-common/scripts"
upload-puppet-modules --directory ~/puppet-modules/
</code></pre>
<p>Please check the <a href="http://www.pubstack.com/blog/2017/06/15/tripleo-deep-dive-session-index.html">sessions index</a> to have access to all available content.</p>
<pre><code class="language-text">---------------------------------------------------------------------------------------
| , . , |
| )-_'''_-( |
| ./ o\ /o \. |
| . \__/ \__/ . |
| ... V ... |
| ... - - - ... |
| . - - . |
| `-.....-´ |
| _______ _ _ ____ |
| |__ __| (_) | | / __ \ |
| | |_ __ _ _ __ | | ___| | | | |
| | | '__| | '_ \| |/ _ \ | | | |
| | | | | | |_) | | __/ |__| | |
| _____ |_|_| |_| .__/|_|\___|\____/ |
| | __ \ | | | __ \(_) |
| | | | | ___ ___|_|__ | | | |___ _____ |
| | | | |/ _ \/ _ \ '_ \ | | | | \ \ / / _ \ |
| | |__| | __/ __/ |_) | | |__| | |\ V / __/ |
| |_____/ \___|\___| .__/_ |_____/|_| \_/ \___| |
| | | (_) |
| ___ ___ __|_|__ _ ___ _ __ ___ |
| / __|/ _ \/ __/ __| |/ _ \| '_ \/ __| |
| \__ \ __/\__ \__ \ | (_) | | | \__ \ |
| |___/\___||___/___/_|\___/|_| |_|___/ |
| |
---------------------------------------------------------------------------------------
</code></pre>
Openstack & TripleO deployment using Inlunch - DEPRECATED2016-07-07T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/07/tripleo-deployment-with-inluch<p>Today I’m going to speak about the first
Openstack installer I used to deploy <a href="http://www.tripleo.org">TripleO</a>.
<a href="https://github.com/jistr/inlunch">Inlunch</a>,
as its name suggests, aims to
“Get an Instack environment prepared for you while
you head out for lunch.”</p>
<p>The steps I usually run are:</p>
<ul>
<li>Connect to your remote server (your physical server) as
root, generate the id_rsa.pub file, and append it to
the authorized_keys file.</li>
</ul>
<pre><code class="language-bash"># Press Enter through the prompts to accept the defaults
ssh-keygen -t rsa
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
</code></pre>
<ul>
<li>Install some dependencies and clone <a href="https://github.com/jistr/inlunch">Inlunch</a>.</li>
</ul>
<pre><code class="language-bash">rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-7.noarch.rpm
sudo yum -y install git ansible nano
git clone https://github.com/jistr/inlunch
</code></pre>
<ul>
<li>Go to the inlunch folder and edit the answers
file to fit your needs. By default the answers file
creates 6 nodes with 5GB RAM each;
I usually change this to 3 nodes with 8GB RAM.</li>
</ul>
<pre><code class="language-bash">cd inlunch
vi answers.yml.example
</code></pre>
<ul>
<li>Last but not least, the final <a href="https://github.com/jistr/inlunch">Inlunch</a>
step is to deploy
our undercloud. As simple as it sounds. As you can
see, <a href="https://github.com/jistr/inlunch">Inlunch</a> uses
<a href="http://www.ansible.com/">Ansible</a> over SSH to automate
all the steps.
In this case we added the root public key to the same server,
so the installation is pointed to localhost.</li>
</ul>
<pre><code class="language-bash">INLUNCH_ANSWERS=answers.yml.example INLUNCH_FQDN=localhost ./instack-virt.sh
</code></pre>
<ul>
<li>Once this last step has finished you can
log in to the undercloud node by SSHing to the physical
server on port 2200.</li>
</ul>
<pre><code class="language-bash">ssh -p 2200 root@<your_server_fqdn_goes_here>
</code></pre>
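<p>If you connect to this environment often, the hop can be captured in a <code>~/.ssh/config</code> entry. This is only a convenience sketch; the <code>inlunch-undercloud</code> alias is an illustrative name, and the FQDN placeholder must be replaced with your server:</p>
<pre><code class="language-bash"># ~/.ssh/config (sketch)
Host inlunch-undercloud
    HostName your_server_fqdn_goes_here
    Port 2200
    User root
</code></pre>
<p>After that, <code>ssh inlunch-undercloud</code> is enough.</p>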
<p>This is it :) your undercloud is up and running.</p>
<p>Now, the following steps deploy
the master branch of tripleo-heat-templates to
finish the overcloud deployment.</p>
<ul>
<li>Log in as the stack user and source the stackrc file.</li>
</ul>
<pre><code class="language-bash">su - stack
source stackrc
</code></pre>
<ul>
<li>Let’s clone all needed repositories.</li>
</ul>
<pre><code class="language-bash">git clone https://github.com/openstack/puppet-tripleo
git clone https://github.com/openstack/tripleo-docs
git clone https://github.com/openstack/tripleo-heat-templates
</code></pre>
<ul>
<li>And to finish let’s deploy the <a href="http://www.tripleo.org">TripleO</a> pacemaker environment.</li>
</ul>
<pre><code class="language-bash"> openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server clock.redhat.com \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
</code></pre>
<p>Now you should have successfully deployed your undercloud/overcloud environment using <a href="https://github.com/jistr/inlunch">Inlunch</a>.</p>
<p>Thanks <a href="https://github.com/jistr/">Jiri</a> for this amazing installer!!</p>
TripleO manual deployment - DEPRECATED2016-07-04T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/04/manually-installing-tripleo-recipe<p>This is a brief recipe about how to
manually install TripleO in a remote
32GB RAM box.</p>
<p>From the hypervisor run:</p>
<pre><code class="language-bash"> #In this dev. env. /var is only 50GB, so I will create
 #a symlink to another location with more capacity.
 #It will easily take more than 50GB to deploy a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt
#Add default stack user
sudo useradd stack
echo "stack:stack" | sudo chpasswd
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
su - stack
sudo yum -y install epel-release
sudo yum -y install yum-plugin-priorities
export TRIPLEO_ROOT=/home/stack
export TRIPLEO_RELEASE=rdo-trunk-master-tripleo
#export TRIPLEO_RELEASE=rdo-trunk-newton-tested
export TRIPLEO_RELEASE_DEPS=centos7
#export TRIPLEO_RELEASE_DEPS=centos7-newton
#Repository configured pointing to above release!
sudo curl -o /etc/yum.repos.d/delorean.repo https://buildlogs.centos.org/centos/7/cloud/x86_64/$TRIPLEO_RELEASE/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/$TRIPLEO_RELEASE_DEPS/delorean-deps.repo
#Configure the undercloud deployment
export NODE_DIST=centos7
export NODE_CPU=4
export NODE_MEM=9000
export NODE_COUNT=6
export UNDERCLOUD_NODE_CPU=4
export UNDERCLOUD_NODE_MEM=9000
export FS_TYPE=ext4
sudo yum install -y instack-undercloud
instack-virt-setup
</code></pre>
<p>In the hypervisor run the following command to log in to
the undercloud:</p>
<pre><code class="language-bash"> ssh root@`sudo virsh domifaddr instack | grep $(tripleo get-vm-mac instack) | awk '{print $4}' | sed 's/\/.*$//'`
</code></pre>
<p>From the undercloud we will install all the
packages:</p>
<pre><code class="language-bash"> #Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on boot (sudo must apply to the redirection, hence tee)
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
#Login as the stack user
su - stack
export TRIPLEO_ROOT=/home/stack
sudo yum -y install yum-plugin-priorities
export TRIPLEO_RELEASE=rdo-trunk-master-tripleo
#export TRIPLEO_RELEASE=rdo-trunk-newton-tested
export TRIPLEO_RELEASE_BRANCH=master
#export TRIPLEO_RELEASE_BRANCH=stable/newton
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="https://buildlogs.centos.org/centos/7/cloud/x86_64/$TRIPLEO_RELEASE/"
export DELOREAN_REPO_FILE="delorean.repo"
export FS_TYPE=ext4
git clone -b $TRIPLEO_RELEASE_BRANCH https://github.com/openstack/tripleo-heat-templates
git clone https://github.com/openstack-infra/tripleo-ci.git
./tripleo-ci/scripts/tripleo.sh --all
# The last command will execute:
# repo_setup --repo-setup
# undercloud --undercloud
# overcloud_images --overcloud-images
# register_nodes --register-nodes
# introspect_nodes --introspect-nodes
# overcloud_deploy --overcloud-deploy
</code></pre>
<p>Once the undercloud is fully installed, deploy an overcloud.
(The last command should already have created an overcloud; this is
only needed if you want to deploy another one.)</p>
<pre><code class="language-bash">cd
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml
#Also can be added:
#--control-scale 3 \
#--compute-scale 3 \
#--ceph-storage-scale 1 -e /home/stack/tripleo-heat-templates/environments/storage-environment.yaml
</code></pre>
<p>This will hopefully deploy the TripleO overcloud; if not,
refer to the <a href="http://tripleo.org/troubleshooting/troubleshooting.html">troubleshooting</a> section on the official
site.</p>
<pre><code class="language-bash">#Configure a DNS for the OC subnet, do this before deploying the Overcloud
neutron subnet-update `neutron subnet-list -f value | awk '{print $1}'` --dns-nameserver 192.168.122.1
</code></pre>
<div style="font-size:10px">
<blockquote>
<p><strong>Updated 2017/02/23:</strong> instack-virt-setup is deprecated :( moving to tripleo-quickstart.</p>
<p><strong>Updated 2016/11/25:</strong> instack-virt-setup env. vars. are defaulted to sane defaults, so they are optional now.</p>
</blockquote>
</div>
Connecting from your local machine to the TripleO overcloud horizon dashboard2016-07-02T00:00:00+00:00Carlos Camachohttps://www.pubstack.com/blog/2016/07/02/ssh-multi-hop-tripleo<p>This will be my first blog post about TripleO deployments.</p>
<p>The goal of this post is to show how to chain multiple ssh
tunnels to browse into the horizon dashboard, deployed
in a TripleO environment.</p>
<p>In this case, we have deployed <a href="http://www.tripleo.org/">TripleO</a>
on a remote server, “labserver”, which hosts both the undercloud
and the overcloud.</p>
<p>The horizon dashboard listens on port 80 of the overcloud controller.
From the user terminal we want to access the horizon dashboard, which is currently
unreachable because the deployed private IPs cannot be reached from the
user’s terminal.</p>
<p>Below is a graphical representation of the described scenario.
<img src="/static/multi-hop.png" alt="" /></p>
<h2 id="steps">STEPS</h2>
<ul>
<li>Connect the local terminal to labserver (create the first tunnel)</li>
</ul>
<pre><code class="language-bash">#Forward local port 38080 to port 38080 on the hypervisor
#labserver must be a reachable host
ssh -L 38080:localhost:38080 root@labserver
</code></pre>
<ul>
<li>Connect to the undercloud from the labserver (create the second tunnel)</li>
</ul>
<pre><code class="language-bash">#Log-in as the stack user and get the undercloud IP
su - stack
undercloudIp=`sudo virsh domifaddr instack | grep $(tripleo get-vm-mac instack) | awk '{print $4}' | sed 's/\/.*$//'`
#Forward port 38080 on the hypervisor to port 38080 on the undercloud
ssh -L 38080:localhost:38080 root@$undercloudIp
</code></pre>
<ul>
<li>Get the admin password for the Horizon dashboard</li>
</ul>
<pre><code class="language-bash">su - stack
source stackrc
cat overcloudrc |grep OS_PASSWORD | awk -F '=' '{print $2}'
</code></pre>
<ul>
<li>Connect to the overcloud controller from the undercloud (create the third and last tunnel)</li>
</ul>
<pre><code class="language-bash">#Get the controller IP
controllerIp=`nova list | grep controller | awk -F '|' '{print $7}' | awk -F '=' '{print $2}'`
#Forward port 38080 on the undercloud to port 80 on the controller
ssh -L 38080:"$controllerIp":80 heat-admin@"$controllerIp"
echo "From your browser open: http://localhost:38080/"
</code></pre>
<p>Now, if everything went as expected, you should be able to see
the horizon dashboard by opening http://localhost:38080/dashboard in your favorite browser,<br />
then logging in with the admin user and the password printed before.</p>
<p>Note that, by default, each ssh hop in this case uses a different user.</p>
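<p>As a side note, the three hops above can also be chained declaratively with OpenSSH’s <code>ProxyJump</code> option (available since OpenSSH 7.3). This is only a sketch: the host aliases are illustrative, and the undercloud and controller IP placeholders must be filled in from the commands shown earlier:</p>
<pre><code class="language-bash"># ~/.ssh/config (sketch, requires OpenSSH 7.3+)
Host labserver
    User root
Host undercloud
    HostName undercloud_ip_goes_here
    User root
    ProxyJump labserver
Host controller
    HostName controller_ip_goes_here
    User heat-admin
    ProxyJump undercloud
    # Forward local 38080 to horizon on the controller (port 80)
    LocalForward 38080 localhost:80
</code></pre>
<p>With that in place, a single <code>ssh controller</code> sets up the whole chain, each hop with its own user.</p>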