Planet RDO

March 23, 2020

RDO Blog

Tips, Tricks, and Best Practices for Distributed RDO Teams

While a lot of RDO contributors are remote, there are many more who are not and now find themselves in lockdown or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.

Connectivity

I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.

Communicate with the family to work out a schedule or join the call without video so you can still participate.

Manage Expectations

Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.

Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.

This will be an ongoing conversation that evolves as projects and situations evolve.

Know Thyself

Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.

Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.

Some people NEED to physically be in the office around other people. Some will be totally content to work from home.

Sure, some things aren’t optional, but work with what you can.

Figure out what works for you.

Embrace #PhysicalDistance Not #SocialDistance

Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.

Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.

For that matter, don’t forget to reach out to your friends and family.

Even introverts need to maintain a certain level of connection.

Further Reading

There’s a ton of information about working remotely / distributed productivity and this is, by no means, an exhaustive list, but to get you started:

Now let’s hear from you!

What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.

And, as always, thank you for being a part of the RDO community!

by Rain Leander at March 23, 2020 03:14 PM

March 20, 2020

Lars Kellogg-Stedman

I see you have the machine that goes ping...

We're all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won't go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn't about simple solutions!
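For the curious, the "few lines of shell script" version alluded to might have looked something like this (the interface, BSSID, and mail address below are placeholders, not details from the post):

# Poll for a placeholder BSSID and send a one-off mail when it appears
while sleep 60; do
    if iw dev wlan0 scan 2>/dev/null | grep -qi 'aa:bb:cc:dd:ee:ff'; then
        echo "BSSID is visible" | mail -s "wifi alert" you@example.com
        break
    fi
done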

March 20, 2020 12:00 AM

March 17, 2020

RDO Blog

Community Blog Round Up 17 March 2020

Oddbit writes two incredible articles – one about configuring a passwordless serial console for the Raspberry Pi and another about configuring Open vSwitch with nmcli – while Carlos Camacho publishes Emilien Macchi’s deep dive demo on containerized deployment sans Paunch.

A passwordless serial console for your Raspberry Pi by oddbit

legendre on #raspbian asked:

How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh

In this article, we’ll walk through one way of implementing this configuration.

Read more at https://blog.oddbit.com/post/2020-02-24-a-passwordless-serial-console/

TripleO deep dive session #14 (Containerized deployments without paunch) by Carlos Camacho

This is the 14th release of the TripleO “Deep Dive” sessions. Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.

Read more at https://www.anstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html

Configuring Open vSwitch with nmcli by oddbit

I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.

Read more at https://blog.oddbit.com/post/2020-02-15-configuring-open-vswitch-with/

by Rain Leander at March 17, 2020 03:23 PM

February 24, 2020

Lars Kellogg-Stedman

A passwordless serial console for your Raspberry Pi

legendre on #raspbian asked: How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh In this article, we'll walk through one way of implementing this configuration. Activate the serial port Raspbian automatically starts a getty on the serial port if one is available. You should see an agetty process associated with your serial port when you run ps -ef.

February 24, 2020 12:00 AM

February 18, 2020

Carlos Camacho

TripleO deep dive session #14 (Containerized deployments without paunch)

This is the 14th release of the TripleO “Deep Dive” sessions

Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.

You can access the presentation.

So please, check the full session content on the TripleO YouTube channel.



Please check the sessions index to have access to all available content.

by Carlos Camacho at February 18, 2020 12:00 AM

February 15, 2020

Lars Kellogg-Stedman

Configuring Open vSwitch with nmcli

I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.

February 15, 2020 12:00 AM

February 05, 2020

RDO Blog

Migration Paths for RDO From CentOS 7 to 8

At the last CentOS Dojo, it was asked whether RDO would provide python3 packages for OpenStack Ussuri on CentOS 7 and whether that would be “possible” as a way of helping with the upgrade path from Train to Ussuri. As “possible” is a vague term and the response deserves more explanation than a binary answer, I’ve collected my thoughts on this topic as a way to start a discussion within the RDO community.

Yes, upgrades are hard

We all know that upgrading a production OpenStack cloud is complex and depends strongly on each specific layout, deployment tool (different deployment tools may or may not support OpenStack upgrades), and process. In addition, upgrading from CentOS 7 to 8 requires an OS redeploy, which adds operational complexity to the migration. We are committed to helping RDO community users migrate their clouds to new versions of OpenStack and/or operating systems in several ways:
  • Providing RDO Train packages on CentOS8. This allows users to choose between doing a one-step upgrade from CentOS7/Train -> CentOS8/Ussuri or splitting it into two steps: CentOS7/Train -> CentOS8/Train -> CentOS8/Ussuri.
  • RDO maintains OpenStack packages during the whole upstream maintenance cycle for the Train release, that is, until April 2021, so operators have time to plan and execute their migration paths.
  • The Rolling Upgrades features provided in OpenStack allow agents on compute nodes to keep running Train temporarily after the controllers have been updated to Ussuri, using Upgrade Levels in Nova or the built-in backwards compatibility features in Neutron and other services (see the sketch below).
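As an illustration only (not an RDO-documented procedure, and the exact workflow depends on your deployment tool), pinning the compute RPC version to Train on a compute node could look roughly like this, using crudini to edit nova.conf:

# Sketch: keep compute RPC compatible with Train until every node runs Ussuri
sudo crudini --set /etc/nova/nova.conf upgrade_levels compute train
sudo systemctl restart openstack-nova-compute

Once all nodes have been upgraded, the pin can be removed (or set to "auto") and the services restarted again.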

What “Supporting an OpenStack release on a CentOS version” means in RDO

Before discussing the limitations and challenges of supporting RDO Ussuri on CentOS 7.7 using python 3, I’ll describe what supporting a new RDO release means:

Build

  • Before we can start building OpenStack packages, we need all of the dependencies required to build or run OpenStack services. We use the libraries from the CentOS base repos as much as we can and avoid rebasing or forking CentOS base packages unless it’s strongly justified.
  • OpenStack packages are built using DLRN for the RDO Trunk repos or CBS, using jobs running in the post pipeline in review.rdoproject.org.
  • RDO also consumes packages from other CentOS SIGs, such as Ceph from the Storage SIG, KVM from the Virtualization SIG, or collectd from OpsTools.

Validate

  • We run CI jobs periodically to validate the packages provided in the repos. These jobs are executed using the Zuul instance in the SoftwareFactory project or Jenkins in the CentOS CI infrastructure and deploy different configurations of OpenStack using Packstack, puppet-openstack-integration and TripleO.
  • Also, some upstream projects include CI jobs on CentOS that use the RDO packages to gate every change.

Publish

  • RDO Trunk packages are published in https://trunk.rdoproject.org and validated repositories are moved to promoted links.
  • RDO CloudSIG packages are published in official CentOS mirrors after they are validated by CI jobs.

Challenges to provide python 3 packages for RDO Ussuri in CentOS 7

Build

  • While CentOS 7 includes a fairly wide set of python 2 modules (150+) in addition to the interpreter, the python 3 stack included in CentOS 7.7 is just the python interpreter and ~5 python modules. All the missing ones would need to be bootstrapped for python3.
  • Some python bindings are provided as part of other builds, e.g. python-rbd and python-rados are part of Ceph in the Storage SIG, python-libguestfs is part of libguestfs in the base repo, etc. RDO doesn’t own those packages, so commitment from the owners would be needed, or RDO would need to take ownership of them for this specific release (which means maintaining them until Train EOL).
  • Current specs in Ussuri tie the python version to the CentOS version. We’d need to figure out a way to switch the python version on CentOS 7 via tooling configuration and macros (see the sketch below).
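To illustrate the kind of change this implies (an assumption about the mechanism, not the actual RDO tooling change), the build roots would need EPEL7-style python3 macros along these lines:

# Illustrative only: force a python3.6 build on a CentOS 7 builder
cat >> ~/.rpmmacros <<'EOF'
%__python3 /usr/bin/python3.6
%python3_pkgversion 36
EOF

In practice this would have to live in the DLRN/CBS configuration and the spec files themselves rather than in a local ~/.rpmmacros.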

Validate

  • In order to validate the python3 builds for Ussuri on CentOS 7, the deployment tools (puppet-openstack, packstack, kolla and TripleO) would need upstream fixes to install python3 packages instead of python2 for CentOS 7. Ideally, new CI jobs should be added with this configuration to gate changes in those repositories. This would require support from the upstream communities.

Conclusion

  • Alternatives exist to help operators in the migration path from Train on CentOS 7 to Ussuri on CentOS 8 and avoid a massive full cloud reboot.
  • Doing a fully supported RDO release of Ussuri on CentOS 7 would require a big effort in RDO and other projects that can’t be done with existing resources:
    • It would require a full bootstrap of the python3 dependencies which are currently pulled from CentOS base repositories as python 2.
    • Other SIGs would need to provide python3 packages or, alternatively, RDO would need to maintain them for this specific release.
    • In order to validate the release, upstream deployment projects would need to support this new python3 Train release.
  • There may be room for intermediate solutions limited to a reduced set of packages that would help in the transition period. We’d need to hear details from the interested community members about what is actually needed and what the desired migration workflow is. We will be happy to onboard new community members with an interest in contributing to this effort.

We are open to listening and discussing what other options may help users; come to us and let us know how we can help.

by amoralej at February 05, 2020 02:04 PM

January 23, 2020

Lars Kellogg-Stedman

How long is a cold spell in Boston?

We've had some wacky weather recently. In the space of a week, the temperature went from a high of about 75°F to a low around 15°F. This got me to thinking about what constitutes “normal” weather here in the Boston area, and in particular, how common it is to have a string of consecutive days in which the high temperature stays below freezing. While this was an interesting question in itself, it also seemed like a great opportunity to learn a little about Pandas, the Python data analysis framework.

January 23, 2020 12:00 AM

January 22, 2020

RDO Blog

Community Blog Round Up 20 January 2020

We’re super chuffed to see another THREE posts from our illustrious community – Adam Young talks about api port failure and speed bumps while Lars explores literate programming.

Shift on Stack: api_port failure by Adam Young

I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?

Read more at https://adam.younglogic.com/2020/01/shift-on-stack-api_port-failure/

Self Service Speedbumps by Adam Young

The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are 16 GB RAM, 4 Virtual CPUs, and 25 GB Disk Space. This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item specifically is an artificial limitation as you can always create an additional disk and mount it, but the installer does not know to do that.

Read more at https://adam.younglogic.com/2020/01/self-service-speedbumps/

Snarl: A tool for literate blogging by Lars Kellogg-Stedman

Literate programming is a programming paradigm introduced by Donald Knuth in which a program is combined with its documentation to form a single document. Tools are then used to extract the documentation for viewing or typesetting or to extract the program code so it can be compiled and/or run. While I have never been very enthusiastic about literate programming as a development methodology, I was recently inspired to explore these ideas as they relate to the sort of technical writing I do for this blog.

Read more at https://blog.oddbit.com/post/2020-01-15-snarl-a-tool-for-literate-blog/

by Rain Leander at January 22, 2020 09:09 PM

January 19, 2020

Adam Young

Shift on Stack: api_port failure

I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?

Here is the reported error on the console:

The IP address of 10.0.0.5 is attached to the following port:

$ openstack port list | grep "0.0.5"
| da4e74b5-7ab0-4961-a09f-8d3492c441d4 | demo-2tlt4-api-port       | fa:16:3e:b6:ed:f8 | ip_address='10.0.0.5', subnet_id='50a5dc8e-bc79-421b-aa53-31ddcb5cf694'      | DOWN   |

That final “DOWN” is the port state. It is also showing as detached, and it is on the internal network.
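One way to confirm the attachment state and network from the CLI (a sketch; these are standard columns of the openstack client output):

# device_id is empty for a detached port; network_id shows which network it is on
openstack port show demo-2tlt4-api-port -c status -c device_id -c network_id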

Looking at the installer code, the one place I can find a reference to the api_port is in the template data/data/openstack/topology/private-network.tf used to build the value openstack_networking_port_v2. This value is used quite heavily in the rest of the installer’s Go code.

Looking in the terraform data built by the installer, I can find references to both the api_port and openstack_networking_port_v2. Specifically, there are several objects of type openstack_networking_port_v2 with the names:

$ cat moc/terraform.tfstate  | jq -jr '.resources[] | select( .type == "openstack_networking_port_v2") | .name, ", ", .module, "\n" '
api_port, module.topology
bootstrap_port, module.bootstrap
ingress_port, module.topology
masters, module.topology

On a baremetal install, we need an explicit A record for api-int.<cluster_name>.<base_domain>. That requirement does not exist for OpenStack, however, and I did not have one the last time I installed.

api-int is the internal access to the API server. Since the controllers are hanging trying to talk to it, I assume that we are still at the stage where we are building the control plane, and that it should be pointing at the bootstrap server. However, since the port above is detached, traffic cannot get there. There are a few hypotheses in my head right now:

  1. The port should be attached to the bootstrap device
  2. The port should be attached to a load balancer
  3. The port should be attached to something that is acting like a load balancer.

I’m leaning toward 3 right now.

The install-config.yaml has the line:
octaviaSupport: "1"

But I don’t think any Octavia resources are being used.

$ openstack loadbalancer pool list

$ openstack loadbalancer list

$ openstack loadbalancer flavor list
Not Found (HTTP 404) (Request-ID: req-fcf2709a-c792-42f7-b711-826e8bfa1b11)

by Adam Young at January 19, 2020 12:55 AM

January 15, 2020

Adam Young

Self Service Speedbumps

The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are:

  • 16 GB RAM
  • 4 Virtual CPUs
  • 25 GB Disk Space

This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item specifically is an artificial limitation as you can always create an additional disk and mount it, but the installer does not know to do that.

In my case, there is a flavor that almost matches; it has 10 GB of Disk space instead of the required 25. But I cannot use it.

Instead, I have to use a larger flavor that has double the VCPUs, and thus eats up more of my VCPU quota…to the point that I cannot afford more than 4 virtual machines of this size, and thus cannot create more than one compute node; OpenShift needs 3 nodes for the control plane.

I do not have permissions to create a flavor on this cloud. Thus, my only option is to open a ticket, which has to be reviewed and acted upon by an administrator. Not a huge deal.
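For reference, the ticket effectively asks an administrator to run a one-liner along these lines (the flavor name here is made up):

openstack flavor create --vcpus 4 --ram 16384 --disk 25 ocp-installer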

This is how self service breaks down: a non-security decision (linking disk size with the other characteristics of a flavor) plus access control rules that prevent end users from customizing it. So the end user waits for a human to respond.

In my case, that means that I have to provide an alternative place to host my demonstration, just in case things don’t happen in time. Which costs my organization money.

This is not a ding on my cloud provider. They have the same OpenStack API as anyone else deploying OpenStack.

This is not a ding on Keystone; create flavor is not a project scoped operation, so I can’t even blame my favorite bug.

This is not a ding on the Nova API. It is reasonable to reserve the ability to create flavors for system administrators and, if instances have storage attached, to provide it in reasonably sized chunks.

My problem just falls at the junction of several different zones of responsibility. It is the overlap that causes the pain in this case. This is not unusual.

Would it be possible to have a more granular API, like “create customer flavor” that built a flavor out of pre-canned parts and sizes? Probably. That would solve my problem. I don’t know if this is a general problem, though.

This does seem like it is something that could be addressed by a GitOps type approach. In order to perform an operation like this, I should be able to issue a command that gets checked in to git, confirmed, and posted for code review. An administrator could then confirm or provide an alternative approach. Today, this happens in the ticketing system. It is human-resource-intensive. If no one says “yes,” the default is no…and the thing just sits there.

What would be a better long term solution? I don’t know. I’m going to let this idea set for a while.

What do you think?

by Adam Young at January 15, 2020 05:18 PM

Lars Kellogg-Stedman

Snarl: A tool for literate blogging

Literate programming is a programming paradigm introduced by Donald Knuth in which a program is combined with its documentation to form a single document. Tools are then used to extract the documentation for viewing or typesetting or to extract the program code so it can be compiled and/or run. While I have never been very enthusiastic about literate programming as a development methodology, I was recently inspired to explore these ideas as they relate to the sort of technical writing I do for this blog.

January 15, 2020 12:00 AM

January 06, 2020

RDO Blog

Community Blog Round Up 06 January 2020

Welcome to the new DECADE! It was super awesome to run the blog script and see not one, not two, but THREE new articles by the amazing Adam Young who tinkered with Keystone, TripleO, and containers over the break. And while Lars only wrote one article, it’s the ultimate guide to the Open Virtual Network within OpenStack. Sit back, relax, and inhale four great articles from the RDO Community.

Running the TripleO Keystone Container in OpenShift by Adam Young

Now that I can run the TripleO version of Keystone via podman, I want to try running it in OpenShift.

Read more at https://adam.younglogic.com/2019/12/running-the-tripleo-keystone-container-in-openshift/

Official TripleO Keystone Images by Adam Young

My recent forays into running containerized Keystone images have been based on a Centos base image with RPMs installed on top of it. But TripleO does not run this way; it runs via containers. Some notes as I look into them.

Read more at https://adam.younglogic.com/2019/12/official-tripleo-keystone-images/

OVN and DHCP: A minimal example by Lars Kellogg-Stedman

A long time ago, I wrote an article all about OpenStack Neutron (which at that time was called Quantum). That served as an excellent reference for a number of years, but if you’ve deployed a recent version of OpenStack you may have noticed that the network architecture looks completely different. The network namespaces previously used to implement routers and dhcp servers are gone (along with iptables rules and other features), and have been replaced by OVN (“Open Virtual Network”).

Read more at https://blog.oddbit.com/post/2019-12-19-ovn-and-dhcp/

keystone-db-init in OpenShift by Adam Young

Before I can run Keystone in a container, I need to initialize the database. This is as true for running in Kubernetes as it was using podman. Here’s how I got keystone-db-init to work.

Read more at https://adam.younglogic.com/2019/12/keystone-db-init-in-openshift/

by Rain Leander at January 06, 2020 12:52 PM

December 21, 2019

Adam Young

Running the TripleO Keystone Container in OpenShift

Now that I can run the TripleO version of Keystone via podman, I want to try running it in OpenShift.

Here is my first hack at a deployment yaml. Note that it looks really similar to the keystone-db-init I got to run the other day.

If I run it with:

oc create -f keystone-pod.yaml

I get a CrashLoopBackoff error, with the following from the logs:

$ oc logs pod/keystone-api 
+ sudo -E kolla_set_configs
sudo: unable to send audit message: Operation not permitted
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
ERROR:__main__:Unexpected error:
Traceback (most recent call last):
  File "/usr/local/bin/kolla_set_configs", line 412, in main
    config = load_config()
  File "/usr/local/bin/kolla_set_configs", line 294, in load_config
    config = load_from_file()
  File "/usr/local/bin/kolla_set_configs", line 282, in load_from_file
    with open(config_file) as f:
IOError: [Errno 2] No such file or directory: '/var/lib/kolla/config_files/config.json'

I modified the config.json to remove steps that were messing me up. I think I can now remove even that last config file, but I left it for now.

{
   "command": "/usr/sbin/httpd",
   "config_files": [
        {  
              "source": "/var/lib/kolla/config_files/src/*",
              "dest": "/",
              "merge": true,
              "preserve_properties": true
        }
    ],
    "permissions": [
	    {
            "path": "/var/log/kolla/keystone",
            "owner": "keystone:keystone",
            "recurse": true
        }
    ]
}

I need to add the additional files to a config map and mount those inside the container. For example, I can create a config map with the config.json file, a secret for the Fernet key, and a config map for the apache files.

oc create configmap keystone-files --from-file=config.json=./config.json
kubectl create secret generic keystone-fernet-key --from-file=../kolla/src/etc/keystone/fernet-keys/0
oc create configmap keystone-httpd-files --from-file=wsgi-keystone.conf=../kolla/src/etc/httpd/conf.d/wsgi-keystone.conf

Here is my final pod definition

apiVersion: v1
kind: Pod
metadata:
  name: keystone-api
  labels:
    app: myapp
spec:
  containers:
  - image: docker.io/tripleomaster/centos-binary-keystone:current-tripleo 
    imagePullPolicy: Always
    name: keystone
    env:
    - name: KOLLA_CONFIG_FILE
      value: "/var/lib/kolla/config_files/src/config.json"
    - name: KOLLA_CONFIG_STRATEGY
      value: "COPY_ONCE"
    volumeMounts:
    - name: keystone-conf
      mountPath: "/etc/keystone/"
    - name: httpd-config
      mountPath: "/etc/httpd/conf.d"
    - name: config-json
      mountPath: "/var/lib/kolla/config_files/src"

    - name: keystone-fernet-key
      mountPath: "/etc/keystone/fernet-keys/0"
  volumes:
  - name: keystone-conf
    secret:
      secretName: keystone-conf
      items:
      - key: keystone.conf
        path: keystone.conf
        mode: 511	
  - name: keystone-fernet-key
    secret:
      secretName: keystone-fernet-key
      items:
      - key: "0"
        path: "0"
        mode: 511	
  - name: config-json
    configMap:
       name: keystone-files
  - name: httpd-config
    configMap:
       name: keystone-httpd-files

And show that it works for basic stuff:

$ oc rsh keystone-api
sh-4.2# curl 10.131.1.98:5000
{"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10.131.1.98:5000/v3/", "rel": "self"}]}]}}curl (HTTP://10.131.1.98:5000/): response: 300, time: 3.314, size: 266

Next steps: expose a route, make sure we can get a token.
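A rough sketch of those next steps (resource names are assumed, and the admin credentials would come from the Keystone bootstrap, not from this example):

# Create a service for the pod, then a route for the service
oc expose pod keystone-api --port=5000 --name=keystone
oc expose service keystone
# Request a token through the route using the Keystone v3 password flow
curl -si -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "admin", "domain": {"id": "default"}, "password": "CHANGEME"}}}}' \
  http://$(oc get route keystone -o jsonpath='{.spec.host}')/v3/auth/tokens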

by Adam Young at December 21, 2019 12:31 AM

December 19, 2019

Adam Young

Official TripleO Keystone Images

My recent forays into running containerized Keystone images have been based on a Centos base image with RPMs installed on top of it. But TripleO does not run this way; it runs via containers. Some notes as I look into them.

The official containers for TripleO are currently hosted on docker.com. The Keystone page is here:

Don’t expect the docker pull command posted on that page to work. I tried a comparable one with podman and got:

$ podman pull tripleomaster/centos-binary-keystone
Trying to pull docker.io/tripleomaster/centos-binary-keystone...
  manifest unknown: manifest unknown
Trying to pull registry.fedoraproject.org/tripleomaster/centos-binary-keystone...

And a few more lines of error output. Thanks to Emilien M, I was able to get the right command:

$ podman pull tripleomaster/centos-binary-keystone:current-tripleo
Trying to pull docker.io/tripleomaster/centos-binary-keystone:current-tripleo...
Getting image source signatures
...
Copying config 9e85172eba done
Writing manifest to image destination
Storing signatures
9e85172eba10a2648ae7235076ada77b095ed3da05484916381410135cc8884c

Since I did this as a normal account, and not as root, the image does not get stored under /var, but instead goes somewhere under $HOME/.local. If I type

$ podman images
REPOSITORY                                       TAG               IMAGE ID       CREATED        SIZE
docker.io/tripleomaster/centos-binary-keystone   current-tripleo   9e85172eba10   2 days ago     904 MB

I can see the short form of the hash starting with 9e85. I can use that to match the subdirectory under /home/ayoung/.local/share/containers/storage/overlay-images:

ls /home/ayoung/.local/share/containers/storage/overlay-images/9e85172eba10a2648ae7235076ada77b095ed3da05484916381410135cc8884c/

If I cat that file, I can see all of the layers that make up the image itself.
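A quicker way to list the same layers without digging through the storage directory by hand is a sketch like this, using podman's Go-template formatting:

podman image inspect docker.io/tripleomaster/centos-binary-keystone:current-tripleo \
  --format '{{range .RootFS.Layers}}{{println .}}{{end}}'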

Trying a naive podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo, I get an error that shows just how kolla-centric this image is:

$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo
+ sudo -E kolla_set_configs
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
ERROR:__main__:Unexpected error:
Traceback (most recent call last):
  File "/usr/local/bin/kolla_set_configs", line 412, in main
    config = load_config()
  File "/usr/local/bin/kolla_set_configs", line 294, in load_config
    config = load_from_file()
  File "/usr/local/bin/kolla_set_configs", line 282, in load_from_file
    with open(config_file) as f:
IOError: [Errno 2] No such file or directory: '/var/lib/kolla/config_files/config.json'

So I read the docs. Trying to fake it with:

$ podman run -e KOLLA_CONFIG='{}'   docker.io/tripleomaster/centos-binary-keystone:current-tripleo
+ sudo -E kolla_set_configs
INFO:__main__:Validating config file
ERROR:__main__:InvalidConfig: Config is missing required "command" key

When running with TripleO, the config files are generated from Heat templates. The values for the config.json come from here.
This gets me slightly closer:

podman run  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -e KOLLA_CONFIG='{"command": "/usr/sbin/httpd"}'   docker.io/tripleomaster/centos-binary-keystone:current-tripleo

But I still get an error of “no listening sockets available, shutting down” even if I try this as root. Below is the whole thing I tried to run.

$ podman run   -v $PWD/fernet-keys:/var/lib/kolla/config_files/src/etc/keystone/fernet-keys   -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -e KOLLA_CONFIG='{ "command": "/usr/sbin/httpd", "config_files": [ { "source": "/var/lib/kolla/config_files/src/etc/keystone/fernet-keys", "dest": "/etc/keystone/fernet-keys", "owner":"keystone", "merge": false, "perm": "0600" } ], "permissions": [ { "path": "/var/log/kolla/keystone", "owner": "keystone:keystone", "recurse": true } ] }'  docker.io/tripleomaster/centos-binary-keystone:current-tripleo

Let’s go back to simple things. What is inside the container? We can peek using:

$ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls

Basically, we can perform any command that will not last longer than the failed kolla initialization. No Bash prompts, but shorter single line bash commands work. We can see that mysql is uninitialized:

 podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/keystone/keystone.conf | grep "connection ="
#connection = 

What about those config files that the initialization wants to copy:

podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls /var/lib/kolla/config_files/src/etc/httpd/conf.d
ls: cannot access /var/lib/kolla/config_files/src/etc/httpd/conf.d: No such file or directory

So all that comes from external to the container, and is mounted at run time.

$ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/passwd  | grep keystone
keystone:x:42425:42425::/var/lib/keystone:/usr/sbin/nologin

This user owns the config and the log files.

$ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls -la /var/log/keystone
total 8
drwxr-x---. 2 keystone keystone 4096 Dec 17 08:28 .
drwxr-xr-x. 6 root     root     4096 Dec 17 08:28 ..
-rw-rw----. 1 root     keystone    0 Dec 17 08:28 keystone.log
$ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls -la /etc/keystone
total 128
drwxr-x---. 2 root     keystone   4096 Dec 17 08:28 .
drwxr-xr-x. 2 root     root       4096 Dec 19 16:30 ..
-rw-r-----. 1 root     keystone   2303 Nov 12 02:15 default_catalog.templates
-rw-r-----. 1 root     keystone 104220 Dec 14 01:09 keystone.conf
-rw-r-----. 1 root     keystone   1046 Nov 12 02:15 logging.conf
-rw-r-----. 1 root     keystone      3 Dec 14 01:09 policy.json
-rw-r-----. 1 keystone keystone    665 Nov 12 02:15 sso_callback_template.html
$ podman run  docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/keystone/policy.json
{}

Yes, policy.json is empty.

Let’s go back to the config file. I would rather not have to pass in all the config info as an environment variable each time. If I run as root, I can use the podman bind-mount option to relabel it:

 podman run -e KOLLA_CONFIG_FILE=/config.json  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -v $PWD/config.json:/config.json:z   docker.io/tripleomaster/centos-binary-keystone:current-tripleo  

This eventually fails with the error message “no listening sockets available, shutting down”, which seems to be due to the lack of httpd.conf entries for keystone:

# podman run -e KOLLA_CONFIG_FILE=/config.json  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -v $PWD/config.json:/config.json:z   docker.io/tripleomaster/centos-binary-keystone:current-tripleo  ls /etc/httpd/conf.d
auth_mellon.conf
auth_openidc.conf
autoindex.conf
README
ssl.conf
userdir.conf
welcome.conf

The clue seems to be in the Heat Templates. There are a bunch of files that are expected to be in /var/lib/kolla/config_files/src inside the container. Here’s my version of the WSGI config file:

Listen 5000
Listen 35357

ServerSignature Off
ServerTokens Prod
TraceEnable off

ErrorLog "/var/log/kolla/keystone/apache-error.log"
<IfModule log_config_module>
    CustomLog "/var/log/kolla/keystone/apache-access.log" common
</IfModule>

LogLevel info

<Directory "/usr/bin">
    <FilesMatch "^keystone-wsgi-(public|admin)$">
        AllowOverride None
        Options None
        Require all granted
    </FilesMatch>
</Directory>

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} python-path=/usr/lib/python2.7/site-packages
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog "/var/log/kolla/keystone/keystone-apache-public-error.log"
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
    CustomLog "/var/log/kolla/keystone/keystone-apache-public-access.log" logformat
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} python-path=/usr/lib/python2.7/site-packages
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog "/var/log/kolla/keystone/keystone-apache-admin-error.log"
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
    CustomLog "/var/log/kolla/keystone/keystone-apache-admin-access.log" logformat
</VirtualHost>

So with a directory structure like this:

[root@ayoungP40 kolla]# find src/ -print
src/
src/etc
src/etc/keystone
src/etc/keystone/fernet-keys
src/etc/keystone/fernet-keys/1
src/etc/keystone/fernet-keys/0
src/etc/httpd
src/etc/httpd/conf.d
src/etc/httpd/conf.d/wsgi-keystone.conf

And a Kolla config.json file like this:

{
   "command": "/usr/sbin/httpd",
   "config_files": [
        {
              "source": "/var/lib/kolla/config_files/src/etc/keystone/fernet-keys",
              "dest": "/etc/keystone/fernet-keys",
              "merge": false,
              "preserve_properties": true
        },{
              "source": "/var/lib/kolla/config_files/src/etc/httpd/conf.d",
              "dest": "/etc/httpd/conf.d",
              "merge": false,
              "preserve_properties": true
        },{  
              "source": "/var/lib/kolla/config_files/src/*",
              "dest": "/",
              "merge": true,
              "preserve_properties": true
        }
    ],
    "permissions": [
	    {
            "path": "/var/log/kolla/keystone",
            "owner": "keystone:keystone",
            "recurse": true
        }
    ]
}

I can run Keystone like this:

podman run -e KOLLA_CONFIG_FILE=/config.json  -e KOLLA_CONFIG_STRATEGY=COPY_ONCE   -v $PWD/config.json:/config.json:z   -v $PWD/src:/var/lib/kolla/config_files/src:z  docker.io/tripleomaster/centos-binary-keystone:current-tripleo
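If the run command above is also given a published port (say, adding -p 5000:5000), a quick smoke test from the host might look like this:

curl -s http://127.0.0.1:5000/v3 | python -m json.tool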

by Adam Young at December 19, 2019 09:00 PM

Lars Kellogg-Stedman

OVN and DHCP: A minimal example

A long time ago, I wrote an article all about OpenStack Neutron (which at that time was called Quantum). That served as an excellent reference for a number of years, but if you've deployed a recent version of OpenStack you may have noticed that the network architecture looks completely different. The network namespaces previously used to implement routers and dhcp servers are gone (along with iptables rules and other features), and have been replaced by OVN (“Open Virtual Network”).

December 19, 2019 12:00 AM

December 18, 2019

Adam Young

keystone-db-init in OpenShift

Before I can run Keystone in a container, I need to initialize the database. This is as true for running in Kubernetes as it was using podman. Here’s how I got keystone-db-init to work.

The general steps were:

  • use oc new-app to generate the build-config and build
  • delete the deployment config generated by new-app
  • upload a secret containing keystone.conf
  • deploy a pod that uses the image built above and the secret version of keystone.conf to run keystone-manage db_init

To delete the deployment config generated by new-app:

oc delete deploymentconfig.apps.openshift.io/keystone-db-in

To upload the secret:

kubectl create secret generic keystone-conf --from-file=../keystone-db-init/keystone.conf

Here is the yaml definition for the pod

apiVersion: v1
kind: Pod
metadata:
  name: keystone-db-init-pod
  labels:
    app: myapp
spec:
  containers:
  - image: image-registry.openshift-image-registry.svc:5000/keystone/keystone-db-init
    imagePullPolicy: Always
    name: keystone-db-init
    volumeMounts:
    - name: keystone-conf
      mountPath: "/etc/keystone/"
  volumes:
  - name: keystone-conf
    secret:
      secretName: keystone-conf
      items:
      - key: keystone.conf
        path: keystone.conf
        mode: 511       
    command: ['sh', '-c', 'cat /etc/keystone/keystone.conf']

While this is running as the keystone unix account, I am not certain how that happened. I did use the patch command I talked about earlier on the deployment config, but you can see I am not using that in this pod. That is something I need to straighten out.

To test that the database was initialized:

$ oc get pods -l app=mariadb-keystone
NAME                       READY   STATUS    RESTARTS   AGE
mariadb-keystone-1-rxgvs   1/1     Running   0          9d
$ oc rsh mariadb-keystone-1-rxgvs
sh-4.2$ mysql -h mariadb-keystone -u keystone -pkeystone keystone
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 908
Server version: 10.2.22-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [keystone]> show tables;
+------------------------------------+
| Tables_in_keystone                 |
+------------------------------------+
| access_rule                        |
| access_token                       |
....
+------------------------------------+
46 rows in set (0.00 sec)

I’ve fooled myself in the past thinking that things have worked when they have not. To make sure I am not doing that now, I dropped the keystone database and recreated it from inside the mysql monitor program. I then re-ran the pod, and was able to see all of the tables.
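Done non-interactively with -e instead of from inside the mysql monitor, that reset would be roughly the following (credentials as used earlier in the post; an account with more privileges may be required to create the database):

mysql -h mariadb-keystone -u keystone -pkeystone -e "DROP DATABASE keystone; CREATE DATABASE keystone;"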

by Adam Young at December 18, 2019 08:48 PM

December 16, 2019

RDO Blog

Community Blog Round Up 16 December 2019

We’re super chuffed that there’s already another article to read in our weekly blog round up – as we said before, if you write it, we’ll help others see it! But if you don’t write it, well, there’s nothing to set sail. Let’s hear about your latest adventures on the Ussuri river and if you’re NOT in our database, you CAN be by creating a pull request to https://github.com/redhat-openstack/website/blob/master/planet.ini.

Reading keystone.conf in a container by Adam Young

Step 3 of the 12 Factor app is to store config in the environment. For Keystone, the set of configuration options is controlled by the keystone.conf file. In an earlier attempt at containerizing the scripts used to configure Keystone, I had passed an environment variable in to the script that would then be written to the configuration file. I realize now that I want the whole keystone.conf external to the application. This allows me to set any of the configuration options without changing the code in the container. More importantly, it allows me to make the configuration information immutable inside the container, so that the applications cannot be hacked to change their own configuration options.

Read more at https://adam.younglogic.com/2019/12/reading-keystone-conf-in-a-container/

by Rain Leander at December 16, 2019 11:45 AM

December 12, 2019

Adam Young

Reading keystone.conf in a container

Step 3 of the 12 Factor app is to store config in the environment. For Keystone, the set of configuration options is controlled by the keystone.conf file. In an earlier attempt at containerizing the scripts used to configure Keystone, I had passed an environment variable in to the script that would then be written to the configuration file. I realize now that I want the whole keystone.conf external to the application. This allows me to set any of the configuration options without changing the code in the container. More importantly, it allows me to make the configuration information immutable inside the container, so that the applications cannot be hacked to change their own configuration options.

I was running the pod and mounting the local copy I had of the keystone.conf file using this command line:

podman run --mount type=bind,source=/home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf,destination=/etc/keystone/keystone.conf:Z --add-host keystone-mariadb:10.89.0.47   --network maria-bridge  -it localhost/keystone-db-init 

It was returning with no output. To diagnose, I added on /bin/bash to the end of the command so I could poke around inside the running container before it exited.

podman run --mount /home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf:/etc/keystone/keystone.conf    --add-host keystone-mariadb:10.89.0.47   --network maria-bridge  -it localhost/keystone-db-init /bin/bash

Once inside, I was able to look at the keystone log file. A stack trace made me realize that I was not able to actually read the file /etc/keystone/keystone.conf. Using ls, it would show up like this:

-?????????? ? ?        ?             ?            ? keystone.conf:

It took a lot of trial and error to rectify it, including:

  • adding a parallel entry to my host’s /etc/passwd and /etc/group files for the keystone user and group
  • Ensuring that the file was owned by keystone outside the container
  • switching to the -v option to create the bind mount, as that allowed me to use the :Z option as well.
  • adding the -u keystone option to the command line

The end command looked like this:

podman run -v /home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf:/etc/keystone/keystone.conf:Z  -u keystone         --add-host keystone-mariadb:10.89.0.47   --network maria-bridge  -it localhost/keystone-db-init 

Once I had it correct, I could use the /bin/bash executable to again poke around inside the container. From the inside, I could run:

$ keystone-manage db_version
109
$ mysql -h keystone-mariadb -ukeystone -pkeystone keystone  -e "show databases;"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
+--------------------+

Next up is to try this with OpenShift.

by Adam Young at December 12, 2019 12:09 AM

December 09, 2019

RDO Blog

Community Blog Round Up 09 December 2019

As we sail down the Ussuri river, Ben and Colleen report on their experiences at the Shanghai Open Infrastructure Summit while Adam dives into Buildah.

Let’s Buildah Keystoneconfig by Adam Young

Buildah is a valuable tool in the container ecosystem. As an effort to get more familiar with it, and to finally get my hand-rolled version of Keystone to deploy on Kubernetes, I decided to work through building a couple of Keystone based containers with Buildah.

Read more at https://adam.younglogic.com/2019/12/buildah-keystoneconfig/

Oslo in Shanghai by Ben Nemec

Despite my trepidation about the trip (some of it well-founded!), I made it to Shanghai and back for the Open Infrastructure Summit and Project Teams Gathering. I even managed to get some work done while I was there. 🙂

Read more at http://blog.nemebean.com/content/oslo-shanghai

Shanghai Open Infrastructure Forum and PTG by Colleen Murphy

The Open Infrastructure Summit, Forum, and Project Teams Gathering was held last week in the beautiful city of Shanghai. The event was held in the spirit of cross-cultural collaboration and attendees arrived with the intention of bridging the gap with a usually faraway but significant part of the OpenStack community.

Read more at http://www.gazlene.net/shanghai-forum-ptg.html

by Rain Leander at December 09, 2019 12:24 PM

December 03, 2019

Adam Young

Let’s Buildah Keystoneconfig

Buildah is a valuable tool in the container ecosystem. As an effort to get more familiar with it, and to finally get my hand-rolled version of Keystone to deploy on Kubernetes, I decided to work through building a couple of Keystone based containers with Buildah.

First, I went with the simple approach of modifying my old Dockerfiles to a later release of OpenStack and kicking off the install using buildah. I went with Stein.

Why not Train? Because eventually I want to test zero-downtime upgrades. More on that later.

The buildah command was just:

 buildah bud -t keystone 

However, to make that work, I had to adjust the Dockerfile. Here is the diff:

diff --git a/keystoneconfig/Dockerfile b/keystoneconfig/Dockerfile
index 149e62f..cd5aa5c 100644
--- a/keystoneconfig/Dockerfile
+++ b/keystoneconfig/Dockerfile
@@ -1,11 +1,11 @@
-FROM index.docker.io/centos:7
+FROM docker.io/centos:7
 MAINTAINER Adam Young 
  
-RUN yum install -y centos-release-openstack-rocky &&\
+RUN yum install -y centos-release-openstack-stein &&\
     yum update -y &&\
     yum -y install openstack-keystone mariadb openstack-utils  &&\
     yum -y clean all
  
 COPY ./keystone-configure.sql /
 COPY ./configure_keystone.sh /
-CMD /configure_keystone.sh
\ No newline at end of file
+CMD /configure_keystone.sh

The biggest difference is that I had to specify the name of the base image without the “index.” prefix. Buildah is strictah (heh) in what it accepts.

I also updated the package to stein. When I was done, I had the following:

$ buildah images
REPOSITORY                 TAG      IMAGE ID       CREATED          SIZE
localhost/keystone         latest   e52d224fa8fe   13 minutes ago   509 MB
docker.io/library/centos   7        5e35e350aded   3 weeks ago      211 MB

What if I wanted to do these same things via manual steps? Following the advice from the community, I can translate from Dockerfile-ese to buildah. First, I can fetch the original image using the buildah from command:

container=$(buildah from docker.io/centos:7)
$ echo $container 
centos-working-container

Now add things to the container. We don’t build a new layer with each command, so the && approach is not required. So for the yum installs:

buildah run $container yum install -y centos-release-openstack-stein
buildah run $container yum update -y
buildah run $container  yum -y install openstack-keystone mariadb openstack-utils
buildah run $container  yum -y clean all

To get the files into the container, use the copy commands:

buildah copy $container  ./keystone-configure.sql / 
buildah copy $container ./configure_keystone.sh / 

The final steps: tell the container what command to run and commit it to an image.

buildah config --cmd /configure_keystone.sh $container
buildah commit $container keystone

What do we end up with?

$ buildah images
REPOSITORY                 TAG      IMAGE ID       CREATED              SIZE
localhost/keystone         latest   09981bc1e95a   About a minute ago   509 MB
docker.io/library/centos   7        5e35e350aded   3 weeks ago          211 MB

Since I have an old, hard-coded IP address for the MySQL server, it is going to fail. But let’s see:

buildah run centos-working-container /configure_keystone.sh
2019-12-03T16:34:16.000691965Z: cannot configure rootless cgroup using the cgroupfs manager
Database

And there it hangs. We’ll work on that in a bit.

I committed the container before setting the author field. That should be a line like:
buildah config --author "ayoung@redhat.com"
to map line-to-line with the Dockerfile.

by Adam Young at December 03, 2019 04:43 PM

November 11, 2019

RDO Blog

Community Blog Round Up 11 November 2019

As we dive into the Ussuri development cycle, I’m sad to report that there’s not a lot of writing happening upstream.

If you’re one of those people waiting for a call to action, THIS IS IT! We want to hear about your story, your problem, your accomplishment, your analogy, your fight, your win, your loss – all of it.

And, in the meantime, Adam Young says it’s not that cloud is difficult, it’s networking! Fierce words, Adam. And a super fierce article to boot.

Deleting Trunks in OpenStack before Deleting Ports by Adam Young

Cloud is easy. It is networking that is hard.

Read more at https://adam.younglogic.com/2019/11/deleting-trunks-before-ports/

by Rain Leander at November 11, 2019 01:46 PM

November 07, 2019

Adam Young

Deleting Trunks in OpenStack before Deleting Ports

Cloud is easy. It is networking that is hard.

Red Hat supports installing OpenShift on OpenStack. As a Cloud SA, I need to be able to demonstrate this, and make it work for customers. As I was playing around with it, I found I could not tear down clusters due to a dependency issue with ports.


When building and tearing down network structures with Ansible, I had learned the hard way that there were dependencies. Routers came down before subnets, and so on. But the latest round had me scratching my head. I could not get ports to delete, and the error message was no help.

I was able to figure out that the ports linked to security groups. In fact, I could unset almost all of the dependencies using the port set command line. For example:

openstack port set openshift-q5nqj-master-port-1  --no-security-group --no-allowed-address --no-tag --no-fixed-ip

However, I still could not delete the ports. I did notice that there was a trunk_+details section at the bottom of the port show output:

trunk_details         | {'trunk_id': 'dd1609af-4a90-4a9e-9ea4-5f89c63fb9ce', 'sub_ports': []} 

But there is no way to “unset” that. It turns out I had it backwards: you need to delete the trunk first. A message from Kristi Nikolla:

the port is set as the parent for a “trunk” so you need to delete the trunk firs

Kristi in IRC

curl -H "x-auth-token: $TOKEN" https://kaizen.massopen.cloud:13696/v2.0/trunks/

It turns out that you can do this with the CLI…at least I could.

$ openstack network trunk show 01a19e41-49c6-467c-a726-404ffedccfbb
+-----------------+----------------------------------------+
| Field           | Value                                  |
+-----------------+----------------------------------------+
| admin_state_up  | UP                                     |
| created_at      | 2019-11-04T02:58:08Z                   |
| description     |                                        |
| id              | 01a19e41-49c6-467c-a726-404ffedccfbb   |
| name            | openshift-zq7wj-master-trunk-1         |
| port_id         | 6f4d1ecc-934b-4d29-9fdd-077ffd48b7d8   |
| project_id      | b9f1401936314975974153d78b78b933       |
| revision_number | 3                                      |
| status          | DOWN                                   |
| sub_ports       |                                        |
| tags            | ['openshiftClusterID=openshift-zq7wj'] |
| tenant_id       | b9f1401936314975974153d78b78b933       |
| updated_at      | 2019-11-04T03:09:49Z                   |
+-----------------+----------------------------------------+

Here is the script I used to delete them. Notice that the status was DOWN for all of the ports I wanted gone.

for PORT in $( openstack port list | awk '/DOWN/ {print $2}' ); do TRUNK_ID=$( openstack port show $PORT -f json | jq  -r '.trunk_details | .trunk_id ') ; echo port  $PORT has trunk $TRUNK_ID;  openstack network trunk delete $TRUNK_ID ; done

Kristi had used the curl command because he did not have the network trunk option in his CLI. Turns out he needed to install python-neutronclient first.
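The trunk subcommands come from the neutron plugin for the openstack client, so the fix is just to install that package (pip shown here; a distro package exists as well):

pip install python-neutronclient
openstack network trunk list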

by Adam Young at November 07, 2019 07:27 PM

October 31, 2019

RDO Blog

RDO Train Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Train for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Train is the 20th release from the OpenStack project, which is the work of more than 1115 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-train/. While we normally also have the release available via http://mirror.centos.org/altarch/7/cloud/ppc64le/ and http://mirror.centos.org/altarch/7/cloud/aarch64/, there have been issues with the mirror network which are currently being addressed via https://bugs.centos.org/view.php?id=16590.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

PLEASE NOTE: At this time, RDO Train provides packages for CentOS7 only. We plan to move RDO to use CentOS8 as soon as possible during Ussuri development cycle so Train will be the last release working on CentOS7.

Interesting things in the Train release include:

  • Openstack Ansible, which provides ansible playbooks and roles for deployment, added murano support and fully migrated to systemd-journald from rsyslog. This project makes it possible to deploy OpenStack from source in a way that is scalable while also being simple to operate, upgrade, and grow.
  • Ironic, the Bare Metal service, aims to produce an OpenStack service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner. Beyond providing basic support for building software RAID and a myriad of other highlights, this project now offers a new tool for building ramdisk images, ironic-python-agent-builder.

Other improvements include:

  • Tobiko is now available within RDO! This project is an OpenStack testing framework focusing on areas mostly complementary to Tempest. While Tempest’s main focus has been testing OpenStack REST APIs, Tobiko’s main focus is to test OpenStack system operations while “simulating” the use of the cloud as a final user would. Tobiko’s test cases populate the cloud with workloads such as instances, allow the CI workflow to perform an operation such as an update or upgrade, and then run test cases to validate that the cloud workloads are still functional.
  • Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/train/highlights.html.

Contributors
During the Train cycle, we saw the following new RDO contributors:

  • Joel Capitao
  • Zoltan Caplovic
  • Sorin Sbarnea
  • Sławek Kapłoński
  • Damien Ciabrini
  • Beagles
  • Soniya Vyas
  • Kevin Carter (cloudnull)
  • fpantano
  • Michał Dulko
  • Stephen Finucane
  • Sofer Athlan-Guyot
  • Gauvain Pocentek
  • John Fulton
  • Pete Zaitcev

Welcome to all of you and Thank You So Much for participating!

But we wouldn’t want to overlook anyone. A super massive Thank You to all 65 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

  • Adam Kimball
  • Alan Bishop
  • Alex Schultz
  • Alfredo Moralejo
  • Arx Cruz
  • Beagles
  • Bernard Cafarelli
  • Bogdan Dobrelya
  • Brian Rosmaita
  • Carlos Goncalves
  • Cédric Jeanneret
  • Chandan Kumar
  • Damien Ciabrini
  • Daniel Alvarez
  • David Moreau Simard
  • Dmitry Tantsur
  • Emilien Macchi
  • Eric Harney
  • fpantano
  • Gael Chamoulaud
  • Gauvain Pocentek
  • Jakub Libosvar
  • James Slagle
  • Javier Peña
  • Joel Capitao
  • John Fulton
  • Jon Schlueter
  • Kashyap Chamarthy
  • Kevin Carter (cloudnull)
  • Lee Yarwood
  • Lon Hohberger
  • Luigi Toscano
  • Luka Peschke
  • marios
  • Martin Kopec
  • Martin Mágr
  • Matthias Runge
  • Michael Turek
  • Michał Dulko
  • Michele Baldessari
  • Natal Ngétal
  • Nicolas Hicher
  • Nir Magnezi
  • Otherwiseguy
  • Gabriele Cerami
  • Pete Zaitcev
  • Quique Llorente
  • Radomiropieralski
  • Rafael Folco
  • Rlandy
  • Sagi Shnaidman
  • shrjoshi
  • Sławek Kapłoński
  • Sofer Athlan-Guyot
  • Soniya Vyas
  • Sorin Sbarnea
  • Stephen Finucane
  • Steve Baker
  • Steve Linabery
  • Tobias Urdin
  • Tony Breeds
  • Tristan de Cacqueray
  • Victoria Martinez de la Cruz
  • Wes Hayutin
  • Yatin Karel
  • Zoltan Caplovic

The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Ussuri, which has an estimated GA the week of 11-15 May 2020. The full schedule is available at https://releases.openstack.org/ussuri/schedule.html.

Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 19-20 December 2019 for Milestone One and 16-17 April 2020 for Milestone Three.

Get Started
There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
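
As a minimal sketch of what that looks like on a fresh CentOS 7 box (package names per the RDO Train repositories; adjust to taste):

sudo yum install -y centos-release-openstack-train
sudo yum update -y
sudo yum install -y openstack-packstack
sudo packstack --allinone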

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.
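
If you want to try the TripleO route on a single virtualization host, a rough sketch along the lines of the upstream tripleo-quickstart project (treat the exact flags as illustrative):

git clone https://opendev.org/openstack/tripleo-quickstart
cd tripleo-quickstart
bash quickstart.sh --install-deps
bash quickstart.sh --release train $VIRTHOST   # $VIRTHOST is the host that will run the virtual deployment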

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.

Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Rain Leander at October 31, 2019 04:18 PM

October 22, 2019

RDO Blog

Cycle Trailing Projects and RDO’s Latest Release Train

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Train for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Train is the 20th release from the OpenStack project, which is the work of more than 1115 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-train/.

BUT!

This is not the official announcement you’re looking for.

We’re doing something a little different this cycle – we’re waiting for some of the “cycle-trailing” projects that we’re particularly keen about, like TripleO and Kolla, to finish their push BEFORE we make the official announcement.

Photo by Denis Chick on Unsplash

Deployment and lifecycle-management tools generally want to follow the release cycle, but because they rely on the other projects being completed, they may not always publish their final release at the same time as those projects. To that effect, they may choose the cycle-trailing release model.

Cycle-trailing projects are given an extra three months after the final release date to request publication of their release. They may otherwise use intermediary releases or development milestones.

While we’re super hopeful that these cycle trailing projects will be uploaded to the CentOS mirror before OpenInfrastructure Summit Shanghai, we’re going to do the official announcement just before the Summit with or without the packages.

We’ve got a lot of people to thank!

Do you like that we’re waiting a bit for our cycle trailing projects or would you prefer the official announcement as soon as the main projects are available? Let us know in the comments and we may adjust the process for future releases!

In the meantime, keep an eye here or on the mailing lists for the official announcement COMING SOON!

by Rain Leander at October 22, 2019 02:34 PM

October 21, 2019

RDO Blog

Community Blog Round Up 21 October 2019

Just in time for Halloween, Andrew Beekhof has a ghost story about the texture of hounds.

But first!

Where have all the blog round ups gone?!?

Well, there’s the rub, right?

We don’t usually post when there’s one or less posts from our community to round up, but this has been the only post for WEEKS now, so here it is.

Thanks, Andrew!

But that brings us to another point.

We want to hear from YOU!

RDO has a database of bloggers who write about OpenStack / RDO / TripleO / Packstack things and while we’re encouraging those people to write, we’re also wondering if we’re missing some people. Do you know of a writer who is not included in our database? Let us know in the comments below.

Photo by Jessica Furtney on Unsplash

Savaged by Softdog, a Cautionary Tale by Andrew Beekhof

Hardware is imperfect, and software contains bugs. When node level failures occur, the work required from the cluster does not decrease – affected workloads need to be restarted, putting additional stress on surviving peers and making it important to recover the lost capacity.

Read more at http://blog.clusterlabs.org/blog/2019/savaged-by-softdog

by Rain Leander at October 21, 2019 09:17 AM

October 11, 2019

Andrew Beekhof

Savaged by Softdog, a Cautionary Tale

Hardware is imperfect, and software contains bugs. When node level failures occur, the work required from the cluster does not decrease - affected workloads need to be restarted, putting additional stress on surviving peers and making it important to recover the lost capacity.

Additionally, some workloads may require at-most-one semantics. Failures affecting these kinds of workloads risk data loss and/or corruption if “lost” nodes remain at least partially functional. For this reason the system needs to know that the node has reached a safe state before initiating recovery of the workload.

The process of putting the node into a safe state is called fencing, and the HA community generally prefers power based methods because they provide the best chance of also recovering capacity without human involvement.

There are two categories of fencing which I will call direct and indirect but could equally be called active and passive.

Direct methods involve action on the part of surviving peers, such as interacting with an IPMI or iLO device, whereas indirect methods rely on the failed node to somehow recognise it is in an unhealthy state and take steps to enter a safe state on its own.

The most common form of indirect fencing is the use of a watchdog. The watchdog’s timer is reset every N seconds unless quorum is lost or part of the software stack fails. If the timer (usually some multiple of N) expires, then the watchdog will panic (not shut down) the machine.

When done right, watchdogs can allow survivors to safely assume that missing nodes have entered a safe state after a defined period of time.
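
To make the mechanism concrete, here is a rough sketch of a watchdog-only SBD configuration on a Pacemaker cluster; the device path, timeouts and property values below are illustrative assumptions, not something prescribed by this post:

# load a watchdog driver (softdog shown here; a hardware watchdog driver is preferable)
modprobe softdog

# /etc/sysconfig/sbd -- diskless, watchdog-only operation
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=5

# tell Pacemaker how long to wait before assuming a lost node has self-fenced
pcs property set stonith-watchdog-timeout=10s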

However, when relying on indirect fencing mechanisms, it is important to recognise that in the absence of out-of-band communication such as disk based heartbeats, surviving peers have absolutely no ability to validate that the lost node ever reaches a safe state; they are making an assumption when they start recovery. There is a risk it didn’t happen as planned, and the cost of getting it wrong is data corruption and/or loss.

Nothing is without risk though. Someone with an overdeveloped sense of paranoia and an infinite budget could buy all of Amazon, plus Microsoft and Google for redundancy, to host a static website - and still be undone by an asteroid. The goal of HA is not to eliminate risk, but reduce it to an acceptable level. What constitutes an acceptable risk varies person-to-person, project-to-project, and company-to-company, however as a community we encourage people to start by eliminating single points of failure (SPoF).

In the absence of direct fencing mechanisms, we like hardware based watchdogs because, as self-contained devices, they can panic the machine without involvement from the host OS. If the watchdog fails, the node is still healthy and data loss can only occur through failure of additional nodes. In the event of a power outage, they also lose power, but the node is already safe. A network failure is no longer a SPoF and would require a software bug (incorrect quorum calculations for example) in order to present a problem.

There is one last class of failures, software bugs, that is the primary concern of HA and kernel experts whenever Softdog is put forward in situations where already purchased cluster machines lack both power management and watchdog hardware.

Softdog malfunctions originating in software can take two forms - resetting a machine when it should not have (false positive), and not resetting a machine when it should have (false negative). False positives will reduce overall availability due to repeated failovers, but the integrity of the system and its data will remain intact.

More concerning is the possibility for a single software bug to both cause a node to become unavailable and prevent softdog from recovering the system. One example is a bug in a device or device driver, such as a tight loop or bad spinlock usage, that causes the system bus to lock up. In such a scenario the watchdog timer would expire, but the softdog would not be able to trigger a reboot. In this state it is not possible to recover the cluster’s capacity without human intervention, and in theory the entire machine is in a state that prevents it from being able to receive or act on client requests - although perhaps not always (unfortunately the interesting parts of the bug are private).

If the customer needs guaranteed reboot, they should install a hardware watchdog.

— Mikulas Patocka (Red Hat kernel engineer)

The greatest danger of softdog is that most of the time it appears to work just fine. For months or years it will reboot your machines in response to network and software outages, only to fail you when just the wrong conditions are met.

Imagine a pointer error, the kind that corrupts the kernel’s internal structures and causes kernel panics. Rarely triggered, but one day you get unlucky and the area of memory that gets scribbled on includes the softdog.

Just like all the other times it causes the machine to misbehave, but the surviving peers detect it, wait a minute or two, and then begin recovery. Application services are started, volumes are mounted, database replicas are promoted to master, VIPs are brought up, and requests start being processed.

However, unlike all the other times, the failed peer is still active because the softdog has been corrupted: the application services remain responsive and nothing has removed VIPs or demoted masters.

At this point, your best case scenario is that database and storage replication is broken. Requests from some clients will go to the failed node, and some will go to its replacement. Both will succeed; volumes and databases will be updated independently of what happened on the other peer. Reads will start to return stale or otherwise inaccurate data, and incorrect decisions will be made based on them. No transactions will be lost; however, the longer the split remains, the further the datasets will drift apart and the more work it will be to reconcile them by hand once the situation is discovered.

Things get worse if replication doesn’t break. Now you have the prospect of uncoordinated parallel access to your datasets. Even if database locking is still somehow working, eventually those changes are persisted to disk and there is nothing to prevent both sides from writing different versions of the same backing file due to non-overlapping database updates.

Depending on the timing and scope of the updates, you could get:

  • only whole file copies from the second writer and lose transactions from the first,
  • whole file copies from a mixture of hosts, leading to a corrupted on-disk representation,
  • files which contain a mixture of bits from both hosts, also leading to a corrupted on-disk representation, or
  • all of the above.

Ironically an admin’s first instinct, to restart the node or database and see if that fixes the situation, might instead wipe out the only remaining consistent copy of their data (assuming the entire database fits in memory). At which point all transactions since the previous backup are lost.

To mitigate this situation, you would either need very frequent backups, or add a SCSI based fencing mechanism to ensure exclusive access to shared storage, and a network based mechanism to prevent requests from reaching the failed peer.

Or you could just use a hardware watchdog (even better, try a network power switch).

by Andrew Beekhof (andrew@beekhof.net) at October 11, 2019 02:55 AM

October 03, 2019

RDO Blog

RDO is ready to ride the wave of CentOS Stream

The announcement and availability of CentOS Stream has the potential to improve RDO’s feedback loop to Red Hat Enterprise Linux (RHEL) development and smooth out transitions between minor and major releases. Let’s take a look at where RDO interacts with the CentOS Project and how this may improve our work and releases.

RDO and the CentOS Project

Because of tight coupling with the operating system, the RDO project joined the CentOS SIGs initiative from the beginning. CentOS SIGs are smaller groups within the CentOS Project community focusing on a specific area or software type. RDO was a founding member of the CentOS Cloud SIG, which focuses on cloud infrastructure software stacks and uses the CentOS Community Build System (CBS) to build final releases.

In addition to the Cloud SIG OpenStack repositories, during release development the RDO Trunk repositories provide packages for new commits in OpenStack projects soon after they are merged upstream. After a commit is merged, a new package is created and a YUM repository is published on the RDO Trunk server, including this new package build and the latest builds for the rest of the packages in the same release. This enables packagers to identify packaging issues almost immediately after they are introduced, shortening the feedback loop to the upstream projects.
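
Trying a Trunk repository boils down to dropping its repo files into yum; a minimal sketch, assuming the usual layout of trunk.rdoproject.org (the exact paths below are illustrative and change per branch and per promoted symlink):

# the 'current' symlink always points at the newest consistent set of trunk builds
sudo curl -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7-train/current/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7-train/delorean-deps.repo
sudo yum install -y openstack-keystone   # now installs the latest trunk build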

How CentOS Stream can help

A stable base operating system, on which continuously changing upstream code is built and tested, is a prerequisite. While CentOS Linux did come close to this ideal, there were still occasional changes in the base OS that were breaking OpenStack CI, especially after a minor CentOS Linux release where it was not possible to catch those changes before they were published.

The availability of rolling-release CentOS Stream, announced alongside CentOS Linux 8,  will help enable our developers to provide earlier feedback to the CentOS and RHEL development cycles before breaking changes are published. When breaking changes are necessary, it will help us adjust for them ahead of time.

A major release like CentOS Linux 8 is even more of a challenge. RDO managed the transition from EL6 to EL7 during the OpenStack Icehouse cycle by building two distributions in parallel, but that was five years ago, with a much smaller package set than we have now.

For the current OpenStack Train release in development, the RDO project started preparing for the Python 3 transition using Fedora 28, which helped get this huge migration effort going; at the same time, it was only a rough approximation of RHEL 8/CentOS Linux 8 and required complete re-testing on RHEL.

Since CentOS Linux 8 is released very close to the OpenStack Train release, the RDO project will initially provide RDO Train only on the EL7 platform and will add CentOS Linux 8 support soon after.

For future releases, the RDO project is looking forward to being able to start testing and developing against CentOS Stream updates as they are developed, to provide feedback, and to help stabilize the base OS platform for everyone!

About The RDO Project

The RDO project provides a freely available, community-supported distribution of OpenStack that runs on Red Hat Enterprise Linux (RHEL) and its derivatives, such as CentOS Linux. RDO also makes the latest OpenStack code available for continuous testing while the release is under development.

In addition to providing a set of software packages, RDO is also a community of users of cloud computing platforms on Red Hat-based operating systems where you can go to get help and compare notes on running OpenStack.

by apevec at October 03, 2019 08:26 PM

Lars Kellogg-Stedman

TM-V71A and Linux, part 1: Programming mode

I recently acquired my Technician amateur radio license, and like many folks my first radio purchase was a Baofeng UV-5R. Due to its low cost, this is a very popular radio, and there is excellent open source software available for programming it in the form of the CHIRP project. After futzing around with the UV-5R for a while, I wanted to get something a little nicer for use at home, so I purchased a Kenwood TM-V71A.

October 03, 2019 12:00 AM

August 13, 2019

RDO Blog

Community Blog Round Up 13 August 2019

Making Host and OpenStack iSCSI devices play nice together by geguileo

OpenStack services assume that they are the sole owners of the iSCSI connections to the iSCSI portal-targets generated by the Cinder driver, and that is fine 98% of the time, but what happens when we also want to have other non-OpenStack iSCSI volumes from that same storage system present on boot? In OpenStack the OS-Brick […]

Read more at https://gorka.eguileor.com/host-iscsi-devices/

Service Assurance on small OpenShift Cluster by mrunge

This article is intended to give an overview on how to test the

Read more at http://www.matthias-runge.de/2019/07/09/Service-Assurance-on-ocp/

Notes on testing a tripleo-common mistral patch by JohnLikesOpenStack

I recently ran into bug 1834094 and wanted to test the proposed fix. These are my notes if I have to do this again.

Read more at http://blog.johnlikesopenstack.com/2019/07/notes-on-testing-tripleo-common-mistral.html

Developer workflow with TripleO by Emilien

In this post we’ll see how one can use TripleO for developing & testing changes into OpenStack Python-based projects (e.g. Keystone).

Read more at https://my1.fr/blog/developer-workflow-with-tripleo/

Avoid rebase hell: squashing without rebasing by OddBit

You’re working on a pull request. You’ve been working on a pull request for a while, and due to lack of sleep or inebriation you’ve been merging changes into your feature branch rather than rebasing. You now have a pull request that looks like this (I’ve marked merge commits with the text [merge]):

Read more at https://blog.oddbit.com/post/2019-06-17-avoid-rebase-hell-squashing-wi/

Git Etiquette: Commit messages and pull requests by OddBit

Always work on a branch (never commit on master) When working with an upstream codebase, always make your changes on a feature branch rather than your local master branch. This will make it easier to keep your local master branch current with respect to upstream, and can help avoid situations in which you accidentally overwrite your local changes or introduce unnecessary merge commits into your history.

Read more at https://blog.oddbit.com/post/2019-06-14-git-etiquette-commit-messages/

Running Keystone with Docker Compose by OddBit

In this article, we will look at what is necessary to run OpenStack’s Keystone service (and the requisite database server) in containers using Docker Compose.

Read more at https://blog.oddbit.com/post/2019-06-07-running-keystone-with-docker-c/

The Kubernetes in a box project by Carlos Camacho

Implementing cloud computing solutions that run in hybrid environments might be the final answer when it comes to finding the best benefit/cost ratio.

Read more at https://www.anstack.com/blog/2019/05/21/kubebox.html

Running Relax-and-Recover to save your OpenStack deployment by Carlos Camacho

ReaR is a pretty impressive disaster recovery solution for Linux. Relax-and-Recover, creates both a bootable rescue image and a backup of the associated files you choose.

Read more at https://www.anstack.com/blog/2019/05/20/relax-and-recover-backups.html

by Rain Leander at August 13, 2019 08:00 AM

July 23, 2019

Gorka Eguileor

Making Host and OpenStack iSCSI devices play nice together

OpenStack services assume that they are the sole owners of the iSCSI connections to the iSCSI portal-targets generated by the Cinder driver, and that is fine 98% of the time, but what happens when we also want to have other non-OpenStack iSCSI volumes from that same storage system present on boot? In OpenStack the OS-Brick […]

by geguileo at July 23, 2019 05:49 PM

July 09, 2019

Matthias Runge

Service Assurance on small OpenShift Cluster

This article is intended to give an overview on how to test the Service Assurance Framework on a small deployment of OpenShift.

I've started with a deployment of RHEL 7.x on a baremetal machine.

After using subscription-manager to subscribe and attach, I made sure to have the necessary repositories …

by mrunge at July 09, 2019 02:50 PM

July 03, 2019

John Likes OpenStack

Notes on testing a tripleo-common mistral patch

I recently ran into bug 1834094 and wanted to test the proposed fix. These are my notes if I have to do this again.

Get a patched container

Because the mistral-executor is running as a container on the undercloud, I needed to build a new container, and TripleO's Container Image Preparation helped me do this without too much trouble.

As described in the Container Image Preparation docs, I already download a local copy of the containers to my undercloud by running the following:


time sudo openstack tripleo container image prepare \
-e ~/train/containers.yaml \
--output-env-file ~/containers-env-file.yaml
where ~/train/containers.yaml has the following:

---
parameter_defaults:
  NeutronMechanismDrivers: ovn
  ContainerImagePrepare:
  - push_destination: 192.168.24.1:8787
    set:
      ceph_image: daemon
      ceph_namespace: docker.io/ceph
      ceph_tag: v4.0.0-stable-4.0-nautilus-centos-7-x86_64
      name_prefix: centos-binary
      namespace: docker.io/tripleomaster
      tag: current-tripleo

I now want to download the same set of containers to my undercloud, but I want the mistral-executor container to have the proposed fix. If I visit the review and click download, I can see the patch is at refs/changes/60/668560/3, and I can pass this information to TripleO's Container Image Preparation so that it builds me a container with that patch applied.

To do this I update my containers.yaml to exclude the mistral-executor container from the usual tags with the excludes list directive and then create a separate section with the includes directive specific to the mistral-executor container.

Within this new section I ask that the tripleo-modify-image ansible role pull that patch and apply it to that source image.


---
parameter_defaults:
  NeutronMechanismDrivers: ovn
  ContainerImagePrepare:
  - push_destination: 192.168.24.1:8787
    set:
      ceph_image: daemon
      ceph_namespace: docker.io/ceph
      ceph_tag: v4.0.0-stable-4.0-nautilus-centos-7-x86_64
      name_prefix: centos-binary
      namespace: docker.io/tripleomaster
      tag: current-tripleo
    excludes: [mistral-executor]
  - push_destination: 192.168.24.1:8787
    set:
      name_prefix: centos-binary
      namespace: docker.io/tripleomaster
      tag: current-tripleo
    modify_role: tripleo-modify-image
    modify_append_tag: "-devel-ps3"
    modify_vars:
      tasks_from: dev_install.yml
      source_image: docker.io/tripleomaster/centos-binary-mistral-executor:current-tripleo
      refspecs:
      -
        project: tripleo-common
        refspec: refs/changes/60/668560/3
    includes: [mistral-executor]

When I then run the `sudo openstack tripleo container image prepare` command I see that it took a few extra steps to create my new container image.


Writing manifest to image destination
Storing signatures
INFO[0005] created - from /var/lib/containers/storage/overlay/10c5e9ec709991e7eb6cbbf99c08d87f9f728c1644d64e3b070bc3c81adcbc03/diff
and /var/lib/containers/storage/overlay-layers/10c5e9ec709991e7eb6cbbf99c08d87f9f728c1644d64e3b070bc3c81adcbc03.tar-split.gz (wrote 150320640 bytes)
Completed modify and upload for image docker.io/tripleomaster/centos-binary-mistral-executor:current-tripleo
Removing local copy of 192.168.24.1:8787/tripleomaster/centos-binary-mistral-executor:current-tripleo
Removing local copy of 192.168.24.1:8787/tripleomaster/centos-binary-mistral-executor:current-tripleo-devel-ps3
Output env file exists, moving it to backup.

If I were deploying the mistral container in the overcloud I could just use 'openstack overcloud deploy ... -e ~/containers-env-file.yaml' and be done, but because I need to replace my mistral-executor container on my undercloud I have to do a few manual steps.

Run the patched container on the undercloud

My undercloud is ready to serve the patched mistral-executor container but it doesn't yet have its own copy of it to run; i.e. I only see the original container:


(undercloud) [stack@undercloud train]$ sudo podman images | grep exec
docker.io/tripleomaster/centos-binary-mistral-executor current-tripleo 1f0ed5edc023 9 days ago 1.78 GB
(undercloud) [stack@undercloud train]$
However, the same undercloud will serve it from the following URL:

(undercloud) [stack@undercloud train]$ grep executor ~/containers-env-file.yaml
ContainerMistralExecutorImage: 192.168.24.1:8787/tripleomaster/centos-binary-mistral-executor:current-tripleo-devel-ps3
(undercloud) [stack@undercloud train]$
So we can pull it down so we can run it on the undercloud:

sudo podman pull 192.168.24.1:8787/tripleomaster/centos-binary-mistral-executor:current-tripleo-devel-ps3
I now want to stop the running mistral-executor container and start my new one in its place. As per Debugging with Paunch, I can use the print-cmd action to extract the command which is used to start the mistral-executor container and save it to a shell script:

sudo paunch debug --file /var/lib/tripleo-config/container-startup-config-step_4.json --container mistral_executor --action print-cmd > start_executor.sh
I'll also add the exact container image name to the shell script

sudo podman images | grep ps3 >> start_executor.sh
Next I'll edit the script to update the container name and make sure the container is named mistral_executor:

vim start_executor.sh
Before I restart the container I'll prove that the current container isn't running the patch (the same command later will prove that it is).

(undercloud) [stack@undercloud train]$ sudo podman exec mistral_executor grep render /usr/lib/python2.7/site-packages/tripleo_common/utils/config.py
# string so it's rendered in a readable format.
template_data = deployment_template.render(
template_data = host_var_server_template.render(
(undercloud) [stack@undercloud train]$
Stop the mistral-executor container with systemd (otherwise it will automatically restart).

sudo systemctl stop tripleo_mistral_executor.service
Remove the container with podman to ensure the name is not in use:

sudo podman rm mistral_executor
Start the new container:

sudo bash start_executor.sh
and now I'll verify that my new container does have the patch:

(undercloud) [stack@undercloud train]$ sudo podman exec mistral_executor grep render /usr/lib/python2.7/site-packages/tripleo_common/utils/config.py
def render_network_config(self, stack, config_dir, server_roles):
# string so it's rendered in a readable format.
template_data = deployment_template.render(
template_data = host_var_server_template.render(
self.render_network_config(stack, config_dir, server_roles)
(undercloud) [stack@undercloud train]$
For a bonus, I also see it fixed the bug.

(undercloud) [stack@undercloud tripleo-heat-templates]$ openstack overcloud config download --config-dir config-download
Starting config-download export...
config-download export successful
Finished config-download export.
Extracting config-download...
The TripleO configuration has been successfully generated into: config-download
(undercloud) [stack@undercloud tripleo-heat-templates]$

by Unknown (noreply@blogger.com) at July 03, 2019 08:04 PM

June 21, 2019

Emilien Macchi

Developer workflow with TripleO

In this post we’ll see how one can use TripleO for developing & testing changes into OpenStack Python-based projects (e.g. Keystone).

 

Even if Devstack remains a popular tool, it is not the only one you can use for your development workflow.

TripleO hasn’t only been built for real-world deployments but also for developers working on OpenStack related projects like Keystone for example.

Let’s say, my Keystone directory where I’m writing code is in /home/emilien/git/openstack/keystone.

Now I want to deploy TripleO with that change and my code in Keystone. For that I will need a server (can be a VM) with at least 8GB of RAM, 4 vCPU and 80GB of disk, 2 NICs and CentOS7 or Fedora28 installed.

Prepare the repositories and install python-tripleoclient:

If you’re deploying on recent Fedora or RHEL8, you’ll need to install python3-tripleoclient.
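
The exact commands live in the original post; as a rough, hedged sketch of what this step typically looks like on CentOS7 using the RDO trunk repositories (the URLs and package names are assumptions, adjust for Fedora/RHEL8 as noted above):

sudo curl -o /etc/yum.repos.d/delorean.repo https://trunk.rdoproject.org/centos7/current/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo https://trunk.rdoproject.org/centos7/delorean-deps.repo
sudo yum install -y python-tripleoclient   # python3-tripleoclient on Fedora/RHEL8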

Now, let’s prepare your environment and deploy TripleO:
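
Again, the post carries the exact invocation; a minimal sketch of a Standalone deployment along the lines of the upstream docs (the IP, the standalone_parameters.yaml file and the environment file path are illustrative assumptions):

export IP=192.168.24.2   # local IP assigned to the node's second NIC
sudo openstack tripleo deploy \
  --templates \
  --standalone \
  --local-ip=$IP/24 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
  -e $HOME/standalone_parameters.yaml \
  --output-dir $HOME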

Note: change the YAML for your own needs if needed. If you need more help on how to configure Standalone, please check out the official manual.

Now let’s say your code needs a change and you need to retest it. Once you modified your code, just run:

Now, if you need to test a review that is already pushed in Gerrit and you want to run a fresh deployment with it, you can do it with:

I hope these tips helped you to understand how you can develop and test any OpenStack Python-based project without pain, and pretty quickly. On my environment, the whole deployment takes less than 20 minutes.

Please give any feedback in comment or via email!

by Emilien at June 21, 2019 05:07 PM

June 17, 2019

Lars Kellogg-Stedman

Avoid rebase hell: squashing without rebasing

You're working on a pull request. You've been working on a pull request for a while, and due to lack of sleep or inebriation you've been merging changes into your feature branch rather than rebasing. You now have a pull request that looks like this (I've marked merge commits with the text [merge]): 7e181479 Adds methods for widget sales 0487162 [merge] Merge remote-tracking branch 'origin/master' into my_feature 76ee81c [merge] Merge branch 'my_feature' of https://github.
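
The full post walks through the fix; as a hedged illustration (not necessarily the author's exact recipe), one common way to squash such a branch into a single commit without an interactive rebase is:

# collapse everything since the branch diverged from master into one commit
git checkout my_feature
git reset --soft "$(git merge-base origin/master HEAD)"   # keep the tree, drop the messy history
git commit -m "Adds methods for widget sales"
git push --force-with-lease origin my_feature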

June 17, 2019 12:00 AM

June 14, 2019

Lars Kellogg-Stedman

Git Etiquette: Commit messages and pull requests

Always work on a branch (never commit on master) When working with an upstream codebase, always make your changes on a feature branch rather than your local master branch. This will make it easier to keep your local master branch current with respect to upstream, and can help avoid situations in which you accidentally overwrite your local changes or introduce unnecessary merge commits into your history. Rebase instead of merge If you need to incorporate changes from the upstream master branch in the feature branch on which you are currently doing, bring in those changes using git rebase rather than git merge.
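
In practice, the rebase-instead-of-merge advice above boils down to a couple of commands (branch names here are just an example):

git fetch origin
git rebase origin/master   # replay your feature-branch commits on top of current upstream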

June 14, 2019 12:00 AM

June 07, 2019

Lars Kellogg-Stedman

Running Keystone with Docker Compose

In this article, we will look at what is necessary to run OpenStack's Keystone service (and the requisite database server) in containers using Docker Compose. Running MariaDB The standard mariadb docker image can be configured via a number of environment variables. It also benefits from persistent volume storage, since in most situations you don't want to lose your data when you remove a container. A simple docker command line for starting MariaDB might look something like:
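
The excerpt cuts off right before the command; a hedged sketch of the kind of invocation it leads into, using the standard mariadb image's MYSQL_ROOT_PASSWORD variable (tag and values are illustrative):

docker run -d --name mariadb \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mariadb-data:/var/lib/mysql \
  mariadb:10.4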

June 07, 2019 12:00 AM

May 30, 2019

Website and blog of Pablo Iranzo Gómez

Emerging Tech VLC‘19 - Citellus - Automating checks

Citellus:

Citellus - Check your systems!!

https://citellus.org

Emerging Tech Valencia 2019: 30 May

Who am I?

Involved with Linux since a little before starting my university studies and then during them, taking part in the LinUV and Valux.org associations.

I started making a living from free software in 2004 and began working at Red Hat in 2006 as a Consultant, then as Senior Technical Account Manager, later as Principal Software Maintenance Engineer, and currently as Senior Software Engineer on the Solutions Engineering team.

What is Citellus?

  • Citellus is a framework, together with scripts created by the community, that automates the detection of problems, including configuration issues, conflicts with installed package versions, security problems or insecure configurations, and much more.

History: how did the project start?

  • An on-call weekend spent reviewing the same configurations over and over on different machines made the need for automation obvious.

  • A few simple scripts and a bash wrapper later, the tool started taking shape; shortly afterwards, the wrapper was rewritten in Python to give it more advanced features.

  • In those early days we also held conversations with engineering and, as a result, a new and simpler test design was adopted.

What can I do with Citellus?

  • Run it against a system or against a sosreport.
  • Solve problems sooner thanks to the information it provides.
  • Use the plugins to detect current or future problems (lifecycle, etc).
  • Write new plugins in your preferred programming language (bash, python, ruby, etc.) to extend the functionality.
    • Contribute those new plugins to the project for the benefit of others.
  • Use that information as part of proactive actions on your systems.

Any real-life examples?

  • For example, with Citellus you can detect:
    • Incorrect flushing of keystone tokens
    • Missing parameters for expiring and purging ceilometer data, which can end up filling the hard disk.
    • NTP not synchronized
    • outdated packages that are affected by critical or security bugs.
    • and more! 860+ plugins at this point, many of them with more than one check per plugin
  • Anything else you can imagine or program 😉

Changes driven by real-world examples?

  • Initially we worked with RHEL only (6, 7 and 8), as those are the supported versions
  • Since we work with other internal teams such as RHOS-OPS, which use for example the RDO project, the upstream version of Red Hat OpenStack, we started adapting tests to work on both.
  • On top of that, we started creating additional functions to operate on Debian systems, and a colleague also sent proposals to fix some issues on Arch Linux.
  • With the appearance of Spectre and Meltdown we also started adding checks for certain packages and for the protection options against those attacks not having been disabled.

Some numbers about plugins:

  • healthcheck: 79
  • informative: 2
  • negative: 3 [‘system: 1’, ‘system/iscsi: 1’]
  • openshift: 5
  • openstack: 4 [‘rabbitmq: 1’]
  • ovirt-rhv: 1
  • pacemaker: 2
  • positive: 35 [‘cluster/cman: 1’, ‘openstack: 16’, ‘openstack/ceilometer: 1’, ‘system: 1’]
  • rhinternal: 697 [‘bugzilla/docker: 1’, ‘bugzilla/httpd: 1’, ‘bugzilla/openstack/ceilometer: 1’, ‘bugzilla/openstack/ceph: 1’, ‘bugzilla/openstack/cinder: 1’, ‘bugzilla/openstack/httpd: 1’, ‘bugzilla/openstack/keystone: 1’, ‘bugzilla/openstack/keystone/templates: 1’, ‘bugzilla/openstack/neutron: 5’, ‘bugzilla/openstack/nova: 4’, ‘bugzilla/openstack/swift: 1’, ‘bugzilla/openstack/tripleo: 2’, ‘bugzilla/systemd: 1’, ‘ceph: 4’, ‘cifs: 5’, ‘docker: 1’, ‘httpd: 1’, ‘launchpad/openstack/keystone: 1’, ‘launchpad/openstack/oslo.db: 1’, ‘network: 7’, ‘ocp-pssa/etcd: 1’, ‘ocp-pssa/master: 12’, ‘ocp-pssa/node: 14’, ‘openshift/cluster: 1’, ‘openshift/etcd: 2’, ‘openshift/node: 1’, ‘openshift/ocp-pssa/master: 2’, ‘openstack: 6’, ‘openstack/ceilometer: 2’, ‘openstack/ceph: 1’, ‘openstack/cinder: 5’, ‘openstack/containers: 4’, ‘openstack/containers/docker: 2’, ‘openstack/containers/rabbitmq: 1’, ‘openstack/crontab: 4’, ‘openstack/glance: 1’, ‘openstack/haproxy: 2’, ‘openstack/hardware: 1’, ‘openstack/iptables: 1’, ‘openstack/keystone: 3’, ‘openstack/mysql: 8’, ‘openstack/network: 6’, ‘openstack/neutron: 5’, ‘openstack/nova: 12’, ‘openstack/openvswitch: 3’, ‘openstack/pacemaker: 1’, ‘openstack/rabbitmq: 5’, ‘openstack/redis: 1’, ‘openstack/swift: 3’, ‘openstack/system: 4’, ‘openstack/systemd: 1’, ‘pacemaker: 10’, ‘satellite: 1’, ‘security: 3’, ‘security/meltdown: 2’, ‘security/spectre: 8’, ‘security/speculative-store-bypass: 8’, ‘storage: 1’, ‘sumsos/bugzilla: 11’, ‘sumsos/kbases: 426’, ‘supportability: 11’, ‘sysinfo: 2’, ‘system: 56’, ‘virtualization: 2’]
  • supportability: 3 [‘openshift: 1’]
  • sysinfo: 18 [‘lifecycle: 6’, ‘openshift: 4’, ‘openstack: 2’]
  • system: 12 [‘iscsi: 1’]
  • virtualization: 1
  • total: 862

The Goal

  • Make it extremely simple to write new plugins.
  • Allow writing them in your preferred programming language.
  • Keep it open so that anyone can contribute.

How to run it?
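
The talk demoed this step live; as a minimal sketch, based on the invocation shown later in these slides (the sosreport path is just an example):

# analyze an unpacked sosreport
~/citellus/citellus.py /path/to/sosreport-20170724-175510/crta02
# restrict the run to a single plugin with -i (see the plugin-testing section below)
~/citellus/citellus.py /path/to/sosreport-20170724-175510/crta02 -i hosted-engine.sh -r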

Highlights

  • plugins in your preferred language
  • Output can be written to a JSON file to be processed by other tools.
    • The generated JSON can be viewed via HTML
  • Support for ansible playbooks (live, and also against a sosreport if they are adapted)
    • The extensions (core, ansible) make it easy to extend the supported plugin types.
  • Save/restore the configuration
  • Install from pip/pipsi if you don't want to use a git clone of the repository, or run from a container.

HTML interface

  • Created when using --web; opening the generated citellus.html file over http displays it.

Why upstream?

  • Citellus is an open source project. All plugins are submitted to the repository on GitHub to share them (that is what we want to encourage: reuse of knowledge).
  • Everyone is an expert in their own area: we want everyone to contribute
  • We use an approach similar to other open source projects: we use gerrit for code review and unit testing to validate basic functionality.

How to contribute?

There is currently a strong presence of OpenStack plugins, since that is the area we work in daily, but Citellus is not limited to one technology or product.

For example, it is easy to write checks for whether a system is correctly configured to receive updates, for specific versions with known flaws (Meltdown/Spectre) and whether the protections have been disabled, for excessive memory consumption by some process, for authentication failures, etc.

Read the contributor guide at: https://github.com/citellusorg/citellus/blob/master/CONTRIBUTING.md for more details.

Citellus vs other tools

  • XSOS: Provides system data (ram, network, etc.), but does not analyze anything; in practice it is a 'pretty' viewer of information.

  • TripleO-validations: runs only on 'live' systems, which is not very practical for audits or support.

Why not sosreports?

  • There is no choice between one or the other: SOS collects system data, Citellus analyzes it.
  • Sosreport ships in the base channels of RHEL and Debian, which makes it widely distributed, but also makes it harder to receive frequent updates.
  • Much of the data needed for diagnosis is already in the sosreports; what is missing is the analysis.
  • Citellus is based on known issues and is easily extensible; it needs shorter development cycles and is more oriented towards devops or support teams.

What's under the hood?

A simple philosophy:

  • Citellus is the wrapper that does the execution.
  • It lets you specify the folder with the sosreport
  • It looks for the plugins available on the system
  • It runs the plugins against each sosreport and returns the status.
  • The Citellus framework in Python provides option handling, filtering, parallel execution, etc.

And the plugins?

The plugins are even simpler:

  • Written in any language that can be executed from a shell.
  • Output messages go to 'stderr' (>&2)
  • If bash strings such as $"string" are used, the included i18n support can translate them into whatever language you want.
  • They return $RC_OKAY if the test passes / $RC_FAILED for errors / $RC_SKIPPED for skipped tests / anything else for unexpected failures.

And the plugins? (continued)

  • They inherit environment variables such as the root folder for the sosreport (empty in live mode) (CITELLUS_ROOT) or whether the run is in live mode (CITELLUS_LIVE). No keyboard input is needed
  • For example, 'live' tests can query values in the database, while sosreport-based ones are limited to the existing logs.

Example script

#!/bin/bash

# Load common functions
[ -f "${CITELLUS_BASE}/common-functions.sh" ] && . "${CITELLUS_BASE}/common-functions.sh"

# description: error if disk usage is greater than $CITELLUS_DISK_MAX_PERCENT
: ${CITELLUS_DISK_MAX_PERCENT=75}

if [[ $CITELLUS_LIVE = 0 ]]; then
    is_required_file "${CITELLUS_ROOT}/df"
    DISK_USE_CMD="cat ${CITELLUS_ROOT}/df"
else
    DISK_USE_CMD="df -P"
fi

result=$($DISK_USE_CMD |awk -vdisk_max_percent=$CITELLUS_DISK_MAX_PERCENT '/^\/dev/ && substr($5, 0, length($5)-1) > disk_max_percent { print $6,$5 }')

if [ -n "$result" ]; then
    echo "${result}" >&2
    exit $RC_FAILED
else
    exit $RC_OKAY
fi

Ready to dig deeper into the plugins?

  • Each plugin must validate whether or not it should run, print its output to 'stderr', and set a return code.
  • Citellus will execute and report on the tests based on the filters used.

Requirements:

  • The return code must be $RC_OKAY (ok), $RC_FAILED (failure) or $RC_SKIPPED (skipped).
  • Messages printed to stderr are shown if the plugin fails or is skipped (when verbose mode is used)
  • If run against a 'sosreport', the CITELLUS_ROOT variable holds the path to the specified sosreport folder.
  • CITELLUS_LIVE contains 0 or 1 depending on whether or not it is a live run.

How to start a new plugin (for example)?

  • Create a script at ~/~/.../plugins/core/rhev/hosted-engine.sh
  • chmod +x hosted-engine.sh

How to start a new plugin (continued)?

if [ "$CITELLUS_LIVE" = "0" ]; then
    grep -q ovirt-hosted-engine-ha $CITELLUS_ROOT/installed-rpms
    returncode=$?
    if [ "x$returncode" == "x0" ]; then
        exit $RC_OKAY
    else
        echo "ovirt-hosted-engine no instalado" >&2
        exit $RC_FAILED
    fi
else
    echo "No funciona en modo Live" >&2
    exit $RC_SKIPPED
fi

How to start a new plugin (with functions)?

# Load common functions
[ -f "${CITELLUS_BASE}/common-functions.sh" ] && . "${CITELLUS_BASE}/common-functions.sh"

if is_rpm ovirt-hosted-engine-ha; then
    exit $RC_OKAY
else
    echo "ovirt-hosted-engine no instalado" >&2
    exit $RC_FAILED
fi

How to test a plugin?

  • Use tox to run some unit tests (utf8, bashate, python 2.7, python 3)

  • Tell Citellus which plugin to use:

    [piranzo@host citellus]$ ~/citellus/citellus.py sosreport-20170724-175510/crta02 -i hosted-engine.sh -r
    mode: fs snapshot sosreport-20170724-175510/crta02
    # //…/plugins/core/rhev/hosted-engine.sh: failed "ovirt-hosted-engine no instalado"

What is Magui?

Introduction

  • Citellus works at the level of an individual sosreport, but some problems only show up across sets of machines (clusters, virtualization, farms, etc.)

For example, Galera needs to check the seqno across the various members to see which one holds the most up-to-date data.

What does M.a.g.u.i. do?

  • It runs citellus against each sosreport or system, obtains the data and groups it by plugin.
  • It runs its own plugins against the obtained data, highlighting problems that affect the whole set.
  • It can gather data from remote machines via ansible-playbook.

What does it look like?

  • It ships in the same repository as Citellus and is run by specifying the various sosreports:

    [piranzo@collab-shell]$ ~/citellus/magui.py * -i seqno
        _
    _( )_  Magui:
    (_(ø)_)
    /(_)   Multiple Analisis Generic Unifier and Interpreter
    \|
    |/
    
    ....
    
    [piranzo@collab-shell]]$ cat magui.json:
    
    {'~/~/.../core/openstack/mysql/seqno.sh': {'controller0': {'err': u'2b65adb0-787e-11e7-81a8-26480628c14c:285019879\n',
                                                                'out': u'',
                                                                'rc': 10},
                                                'controller1': {'err': u'2b65adb0-787e-11e7-81a8-26480628c14c:285019879\n',
                                                                'out': u'',
                                                                'rc': 10},
                                                'controller2': {'err': u'2b65adb0-787e-11e7-81a8-26480628c14c:285019878\n',
                                                                'out': u'',
                                                                'rc': 10}}}
  • In this example, the UUID and SEQNO are shown for each controller, and we can see that controller2 has a different and less up-to-date sequence.

Next steps with Magui?

  • It has a few plugins at the moment:
    • They aggregate citellus data sorted by plugin for quick comparison
    • They show the 'metadata' information separately to contrast values
    • pipeline-yaml, policy.json and others (OpenStack related)
    • galera seqno
    • redhat-release across machines
    • Faraday: compares files that should be identical or different across machines

Next steps

  • More plugins!
  • Spread the word about the tool so that, together, we can make it easier to solve problems, detect security flaws, incorrect configurations, etc.
  • Momentum: many tools die from having a single developer working on them in their spare time; contributions are essential for any project.
  • Write more checks in Magui to identify more cases where problems appear at the level of groups of systems rather than at the level of individual systems.

Other resources

Blog posts:

Questions?

Thank you for attending!!

Come to #citellus on Freenode, https://t.me/citellusUG on Telegram, or get in touch with us:

The presentation is available at:

https://iranzo.github.io

by Pablo Iranzo Gómez at May 30, 2019 05:30 PM

May 21, 2019

Carlos Camacho

The Kubernetes in a box project

Implementing cloud computing solutions that run in hybrid environments might be the final answer when it comes to finding the best benefit/cost ratio.

This post will be the main thread to build and describe the KIAB/Kubebox project (www.kubebox.org and/or www.kiab.org).

Spoiler alert!

The name

First things first, the name. I have two names in mind with the same meaning. The first one is KIAB (Kubernetes In A Box); this name came to my mind from the Kiai sound made by karatekas (practitioners of karate). The second one is more traditional, “Kubebox”. I have no preference, but it would be awesome if you help me decide the official name for this project.

Add a comment and contribute to select the project name!

Introduction

This project is about integrating already market-available devices to run cloud software as an appliance.

The proof-of-concept delivered in this series of posts will allow people to put a well-known set of hardware devices into a single chassis, whether to create their own cloud appliances, for research and development, continuous integration, testing, home labs, staging or production-ready environments, or simply just for fun.

Hereby I humbly present to you the design of KubeBox/KIAB, an open chassis specification for building cloud appliances.

The case enclosure is fully designed and hopefully in the last phases before building the first set of enclosures; the posts will appear as I find some free cycles for writing the overall description.

Use cases

Several use cases can be defined to run on a KubeBox chassis.

  • AWS outpost.
  • Development environments.
  • EDGE.
  • Production Environments for small sites.
  • GitLab CI integration.
  • Demos for summits and conferences.
  • R&D: FPGA usage, deep learning, AI, TensorFlow, among many others.
  • Marketing WOW effect.
  • Training.

Enclosure design

The enclosure is designed as a rackable unit, using 7U. It tries to minimize the space used to deploy an up to 8-node cluster with redundancy for both power and networking.

Cloud appliance description

This build will be described across several sub-posts linked from this main thread. The posts will be created in no particular order, depending on my availability.

  • Backstory and initial parts selection.
  • Designing the case part 1: Design software.
  • A brief introduction to CAD software.
  • Designing the case part 2: U’s, brakes, and ghosts.
  • Designing the case part 3: Sheet thickness and bend radius.
  • Designing the case part 4: Parts Allowance (finish, tolerance, and fit).
  • Designing the case part 5: Vent cutouts and frickin’ laser beams!.
  • Designing the case part 6: Self-clinching nuts and standoffs.
  • Designing the case part 7: The standoffs strike back.
  • A brief primer on screws and PEMSERTs.
  • Designing the case part 8: Implementing PEMSERTs and screws.
  • Designing the case part 9: Bend reliefs and flat patterns.
  • Designing the case part 10: Tray caddy, to be used with GPUs, motherboards, disks, and any other peripherals you want to add to the enclosure.
  • Designing the case part 11: Components rig.
  • Designing the case part 12: Power supply.
  • Designing the case part 13: Networking.
  • Designing the case part 14: 3D printed supports.
  • Designing the case part 15: Adding computing power.
  • Designing the case part 16: Adding Storage.
  • Designing the case part 17: Front display and bastion for automation.
  • Manufacturing the case part 1: PEMSERT installation.
  • Manufacturing the case part 2: Bending metal.
  • Manufacturing the case part 3: Bending metal.
  • KubeBox cloud appliance in detail!.
  • Manufacturing the case part 0: Getting quotes.
  • Manufacturing the case part 1: Getting the cases.
  • Software deployments: Reference architecture.
  • Design final source files for the enclosure design.
  • KubeBox is fully functional.

Update log:

2019/05/21: Initial version.

by Carlos Camacho at May 21, 2019 12:00 AM

May 20, 2019

Carlos Camacho

Running Relax-and-Recover to save your OpenStack deployment

ReaR is a pretty impressive disaster recovery solution for Linux. Relax-and-Recover, creates both a bootable rescue image and a backup of the associated files you choose.

When doing disaster recovery of a system, this rescue image plays the files back from the backup and so, in the twinkling of an eye, restores the latest state.

Various configuration options are available for the rescue image. For example, slim ISO files, USB sticks or even images for PXE servers can be generated. Just as many backup options are possible: starting with a simple archive file (e.g. *.tar.gz), various backup technologies such as IBM Tivoli Storage Manager (TSM), EMC NetWorker (Legato), Bacula or even Bareos can be addressed.

ReaR, written in Bash, enables the skilful distribution of the rescue image and, if necessary, the archive file via NFS, CIFS (SMB) or another transport method on the network. The actual recovery process then takes place via this transport route.

In this specific case, due to the nature of the OpenStack deployment we will choose those protocols that are allowed by default in the Iptables rules (SSH, SFTP in particular).

But enough with the theory, here’s a practical example of one of many possible configurations. We will apply this specific use of ReaR to recover a failed control plane after a critical maintenance task (like an upgrade).

01 - Prepare the Undercloud backup bucket.

We need to prepare the place to store the backups from the Overcloud. From the Undercloud, check you have enough space to make the backups and prepare the environment. We will also create a user in the Undercloud with no shell access to be able to push the backups from the controllers or the compute nodes.

groupadd backup
mkdir /data
useradd -m -g backup -d /data/backup backup
echo "backup:backup" | chpasswd
chown -R backup:backup /data
chmod -R 755 /data

02 - Run the backup from the Overcloud nodes.

Let’s install some required packages and run some previous configuration steps.

#Install packages
sudo yum install rear genisoimage syslinux lftp wget -y

#Make sure you are able to use sshfs to store the ReaR backup
sudo yum install fuse -y
sudo yum groupinstall "Development tools" -y
wget http://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/f/fuse-sshfs-2.10-1.el7.x86_64.rpm
sudo rpm -i fuse-sshfs-2.10-1.el7.x86_64.rpm

sudo mkdir -p /data/backup
sudo sshfs -o allow_other backup@undercloud-0:/data/backup /data/backup
#Use backup password, which is... backup

Now, let’s configure ReaR config file.

#Configure ReaR
sudo tee -a "/etc/rear/local.conf" > /dev/null <<'EOF'
OUTPUT=ISO
OUTPUT_URL=sftp://backup:backup@undercloud-0/data/backup/
BACKUP=NETFS
BACKUP_URL=sshfs://backup@undercloud-0/data/backup/
BACKUP_PROG_COMPRESS_OPTIONS=( --gzip )
BACKUP_PROG_COMPRESS_SUFFIX=".gz"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/data/*' )
EOF

Now run the backup; this should create an ISO image on the Undercloud node (/data/backup/).

You will be asked for the backup user password

sudo rear -d -v mkbackup

Now, simulate a failure xD

# sudo rm -rf /lib

After the ISO image is created, we can proceed to verify we can restore it from the Hypervisor.

03 - Prepare the hypervisor.

# Enable the use of fusefs for the VMs on the hypervisor
setsebool -P virt_use_fusefs 1

# Install some required packages
sudo yum install -y fuse-sshfs

# Mount the Undercloud backup folder to access the images
mkdir -p /data/backup
sudo sshfs -o allow_other root@undercloud-0:/data/backup /data/backup
ls /data/backup/*

04 - Stop the damaged controller node.

virsh shutdown controller-0
# virsh destroy controller-0

# Wait until is down
watch virsh list --all

# Backup the guest definition
virsh dumpxml controller-0 > controller-0.xml
cp controller-0.xml controller-0.xml.bak

Now, we need to change the guest definition to boot from the ISO file.

Edit controller-0.xml and update it to boot from the ISO file.

Find the OS section, add the cdrom device and enable the boot menu.

<os>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='yes'/>
</os>

Edit the devices section and add the CDROM.

<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/data/backup/rear-controller-0.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>

Update the guest definition.

virsh define controller-0.xml

Restart and connect to the guest

virsh start controller-0
virsh console controller-0

You should be able to see the boot menu to start the recovery process; select Recover controller-0 and follow the instructions.

Now, before proceeding to run the controller restore, it’s possible that the host undercloud-0 can’t be resolved; if that happens, just:

echo "192.168.24.1 undercloud-0" >> /etc/hosts

Having resolved the Undercloud host, just follow the wizard, Relax And Recover :)

You should see a message like:

Welcome to Relax-and-Recover. Run "rear recover" to restore your system !

RESCUE controller-0:~ # rear recover

The image restore should progress quickly.

Continue to see the restore evolution.

Now, each time the node reboots it will have the ISO file as the first boot option, so that’s something we need to fix. In the meantime, let’s check whether the restore went fine.

Reboot the guest booting from the hard disk.

Now we can see that the guest VM started successfully.

Now we need to restore the guest to its original definition, so from the Hypervisor we need to restore the controller-0.xml.bak file we created.

#From the Hypervisor
virsh shutdown controller-0
watch virsh list --all
virsh define controller-0.xml.bak
virsh start controller-0

Enjoy.

Considerations:

  • Space.
  • Multiple protocols are supported, but we might then need to update firewall rules; that’s why I preferred SFTP.
  • Network load when moving data.
  • Shutdown/Starting sequence for HA control plane.
  • Do we need to backup the data plane?
  • User workloads should be handled by a third party backup software.

Update log:

2019/05/20: Initial version.

2019/06/18: Appeared in OpenStack Superuser blog.

by Carlos Camacho at May 20, 2019 12:00 AM

May 16, 2019

Website and blog of Pablo Iranzo Gómez

Emerging Tech VLC‘19: Citellus - Automating checks

Citellus:

Citellus - Verify your systems!!

https://citellus.org

Emerging Tech Valencia 2019: May 16

Who am I?

Involved with Linux since a bit before starting my university studies and then during them, taking part in the LinUV and Valux.org associations.

I started ‘making a living’ from free software in 2004 and joined Red Hat in 2006 as a Consultant, then as a Technical Account Manager and now as a Software Maintenance Engineer.

What is Citellus?

  • Citellus provides a framework, together with scripts created by the community, that automates the detection of problems, including configuration problems, conflicts with installed package versions, security issues or insecure configurations, and much more.

History: how did the project start?

  • An on-call weekend spent reviewing the same configurations over and over again on several hosts planted the seed.

  • A few simple scripts and a bash ‘wrapper’ later, the tool started taking shape; shortly afterwards the ‘wrapper’ was rewritten in python to give it more advanced features.

  • In those early days we also had conversations with engineering and, as a result, a new and simpler test design was adopted.

What can I do with Citellus?

  • Run it against a live system or against a sosreport.
  • Solve problems sooner thanks to the information it provides.
  • Use the plugins to detect current or future problems.
  • Write new plugins in your preferred programming language (bash, python, ruby, etc.) to extend the functionality.
    • Contribute those new plugins to the project so others can benefit.
  • Use that information as part of proactive actions on your systems.

Any real-life examples?

  • For example, with Citellus you can detect:
    • Incorrect deletions of keystone tokens
    • Missing parameters for expiring and purging ceilometer data, which can end up filling the disk.
    • NTP not synchronized
    • obsolete packages affected by critical or security bugs.
    • more! (850+) plugins at the moment, many of them with more than one check per plugin
  • Anything else you can imagine or program 😉

Changes driven by real-world cases?

  • Initially we worked only with RHEL (6 and 7), as those were the supported versions
  • Since we work with other internal teams such as RHOS-OPS, which use for example the RDO project, the upstream version of Red Hat OpenStack, we started adapting tests to work on both.
  • On top of that, we started adding extra functions to operate on Debian systems, and a colleague also sent proposals to fix some issues on Arch Linux.
  • With the appearance of Spectre and Meltdown we also started checking certain packages and verifying that the options protecting against those attacks had not been disabled.

Some numbers about the plugins:

- healthcheck : 79 []
- informative : 2 []
- negative : 3 [‘system: 1’, ‘system/iscsi: 1’]
- openshift : 5 []
- openstack : 4 [‘rabbitmq: 1’]
- ovirt-rhv : 1 []
- pacemaker : 2 []
- positive : 35 [‘cluster/cman: 1’, ‘openstack: 16’, ‘openstack/ceilometer: 1’, ‘system: 1’]
- rhinternal : 697 [‘bugzilla/docker: 1’, ‘bugzilla/httpd: 1’, ‘bugzilla/openstack/ceilometer: 1’, ‘bugzilla/openstack/ceph: 1’, ‘bugzilla/openstack/cinder: 1’, ‘bugzilla/openstack/httpd: 1’, ‘bugzilla/openstack/keystone: 1’, ‘bugzilla/openstack/keystone/templates: 1’, ‘bugzilla/openstack/neutron: 5’, ‘bugzilla/openstack/nova: 4’, ‘bugzilla/openstack/swift: 1’, ‘bugzilla/openstack/tripleo: 2’, ‘bugzilla/systemd: 1’, ‘ceph: 4’, ‘cifs: 5’, ‘docker: 1’, ‘httpd: 1’, ‘launchpad/openstack/keystone: 1’, ‘launchpad/openstack/oslo.db: 1’, ‘network: 7’, ‘ocp-pssa/etcd: 1’, ‘ocp-pssa/master: 12’, ‘ocp-pssa/node: 14’, ‘openshift/cluster: 1’, ‘openshift/etcd: 2’, ‘openshift/node: 1’, ‘openshift/ocp-pssa/master: 2’, ‘openstack: 6’, ‘openstack/ceilometer: 2’, ‘openstack/ceph: 1’, ‘openstack/cinder: 5’, ‘openstack/containers: 4’, ‘openstack/containers/docker: 2’, ‘openstack/containers/rabbitmq: 1’, ‘openstack/crontab: 4’, ‘openstack/glance: 1’, ‘openstack/haproxy: 2’, ‘openstack/hardware: 1’, ‘openstack/iptables: 1’, ‘openstack/keystone: 3’, ‘openstack/mysql: 8’, ‘openstack/network: 6’, ‘openstack/neutron: 5’, ‘openstack/nova: 12’, ‘openstack/openvswitch: 3’, ‘openstack/pacemaker: 1’, ‘openstack/rabbitmq: 5’, ‘openstack/redis: 1’, ‘openstack/swift: 3’, ‘openstack/system: 4’, ‘openstack/systemd: 1’, ‘pacemaker: 10’, ‘satellite: 1’, ‘security: 3’, ‘security/meltdown: 2’, ‘security/spectre: 8’, ‘security/speculative-store-bypass: 8’, ‘storage: 1’, ‘sumsos/bugzilla: 11’, ‘sumsos/kbases: 426’, ‘supportability: 11’, ‘sysinfo: 2’, ‘system: 56’, ‘virtualization: 2’]
- supportability : 3 [‘openshift: 1’]
- sysinfo : 18 [‘lifecycle: 6’, ‘openshift: 4’, ‘openstack: 2’]
- system : 12 [‘iscsi: 1’]
- virtualization : 1 []
-------
total : 862

The Goal

  • Make it extremely simple to write new plugins.
  • Allow them to be written in your preferred programming language.
  • Keep it open so that anyone can contribute.

How to run it?
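
A hedged example of a typical run (installing from pip and pointing the tool at an unpacked sosreport; the exact flags can vary between versions, so check the help output):

# install from pip (a git clone of the repository or the container image also work)
pip install citellus

# list the available options and filters
citellus --help

# analyze an unpacked sosreport folder (the path is a placeholder)
citellus /path/to/sosreport-controller-0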

Highlights

  • plugins in your preferred language
  • Can write its output to a json file so it can be processed by other tools.
    • Can render the generated json as html
  • Support for ansible playbooks (live, and also against a sosreport if they are adapted)
    • The extensions (core, ansible) make it easy to extend the supported plugin types.
  • Save/restore the configuration
  • Install from pip/pipsi if you don't want to use a git clone of the repository, or run it from a container.

HTML interface

  • Created when using --web; open the generated citellus.html file over http to view it.

Why upstream?

  • Citellus is an open source project. All plugins are submitted to the repository on github so they can be shared (that is what we want to encourage: reuse of knowledge).
  • Everyone is an expert in their own area: we want everyone to contribute
  • We use an approach similar to other open source projects: gerrit for code review and UnitTesting to validate the basic functionality.

How to contribute?

There is currently a strong presence of OpenStack plugins, since that is the area we work on daily, but Citellus is not limited to one technology or product.

For example, it is easy to add checks on whether a system is properly configured to receive updates, check for specific versions with known bugs (Meltdown/Spectre) and that the protections have not been disabled, excessive memory consumption by some process, authentication failures, etc.

Read the contributor guide at https://github.com/citellusorg/citellus/blob/master/CONTRIBUTING.md for more details.

Citellus vs other tools

  • XSOS: Provides system data (ram, network, etc.) but performs no analysis; in practice it is a ‘pretty’ viewer of information.

  • TripleO-validations: runs only on ‘live’ systems, which is impractical for audits or for support.

Why not sosreports?

  • It is not a choice between one or the other: SOS collects system data, Citellus analyzes it.
  • Sosreport ships in the base channels of RHEL and Debian, which makes it widely distributed, but also makes it harder for it to receive frequent updates.
  • Much of the data needed for diagnosis is already in the sosreports; what is missing is the analysis.
  • Citellus is based on known issues and is easily extensible; it needs shorter development cycles and is more oriented towards devops or support teams.

What is under the hood?

Simple philosophy:

  • Citellus is the ‘wrapper’ that runs everything.
  • It lets you specify the folder containing the sosreport
  • It looks for the plugins available on the system
  • It runs the plugins against each sosreport and returns their status.
  • The Citellus framework, written in python, provides option handling, filtering, parallel execution, etc.

And the plugins?

The plugins are even simpler:

  • Written in any language that can be executed from a shell.
  • Output messages go to ‘stderr’ (>&2)
  • In bash, if strings are written as $“string”, the bundled i18n support can be used to translate them into whatever language you want.
  • They return $RC_OKAY if the test passes / $RC_FAILED on failure / $RC_SKIPPED when skipped / anything else for unexpected failures.

And the plugins? (continued)

  • They inherit environment variables such as the root folder of the sosreport (empty in Live mode) (CITELLUS_ROOT) or whether the run is in live mode (CITELLUS_LIVE). No keyboard input is required
  • For example, ‘live’ tests can query values in the database, while sosreport-based ones are limited to the existing logs.

Example script
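
A minimal illustrative plugin in bash, based only on the conventions described in these slides (the RC_* and CITELLUS_* variables come from the framework; the specific chrony check is just an example):

#!/bin/bash
# Example plugin: check that at least one NTP server is configured in chrony
# (illustrative sketch; real plugins in the repository also reuse shared helpers)

if [[ "x$CITELLUS_LIVE" == "x1" ]]; then
    # live mode: inspect the running system directly
    FILE="/etc/chrony.conf"
else
    # sosreport mode: CITELLUS_ROOT points at the unpacked sosreport folder
    FILE="${CITELLUS_ROOT}/etc/chrony.conf"
fi

if [[ ! -f "$FILE" ]]; then
    echo "chrony is not installed, skipping" >&2
    exit ${RC_SKIPPED}
fi

if grep -q "^server" "$FILE"; then
    exit ${RC_OKAY}
fi

echo "no NTP servers configured in $FILE" >&2
exit ${RC_FAILED}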

Ready to dig deeper into the plugins?

  • Each plugin must validate whether it should run or not, print its output to ‘stderr’ and set its return code.
  • Citellus will run and report the tests according to the filters used.

Requirements:

  • The return code must be $RC_OKAY (ok), $RC_FAILED (failed) or $RC_SKIPPED (skipped).
  • Messages printed to stderr are shown if the plugin fails or is skipped (when verbose mode is used)
  • When running against a ‘sosreport’, the CITELLUS_ROOT variable holds the path to the specified sosreport folder.
  • CITELLUS_LIVE contains 0 or 1 depending on whether it is a live run or not.

How to start a new plugin (for example)?

  • Create a script in ~/~/.../plugins/core/rhev/hosted-engine.sh
  • chmod +x hosted-engine.sh

How to start a new plugin (continued)?

How to start a new plugin (with functions)?

How to test a plugin?

  • Use tox to run some UT checks (utf8, bashate, python 2.7, python 3)

  • Tell Citellus which plugin to use:
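
A sketch of what that might look like (reusing the hosted-engine.sh plugin from the previous slide; the exact filter syntax may differ between versions, so check the help output):

# run the unit test environments from a checkout of the repository
tox

# run citellus with a single plugin, used here as a filter (paths are placeholders)
citellus /path/to/sosreport-controller-0 plugins/core/rhev/hosted-engine.sh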

What is Magui?

Introduction

  • Citellus works at the level of an individual sosreport, but some problems only show up across sets of systems (clusters, virtualization, farms, etc.)

For example, Galera needs the seqno to be checked across the different members to find out which one holds the most up-to-date data.

What does M.a.g.u.i. do?

  • It runs citellus against each sosreport or system, collects the data and groups it by plugin.
  • It runs its own plugins against the collected data, highlighting problems that affect the whole set.
  • It can gather data from remote hosts via ansible-playbook.

What does it look like?

Next steps with Magui?

  • It has a few plugins at this point:
    • Aggregate citellus data sorted by plugin for quick comparison
    • Show the ‘metadata’ information separately to contrast values
    • pipeline-yaml, policy.json and others (OpenStack related)
    • galera seqno
    • redhat-release across hosts
    • Faraday: compare files that should be identical or different across hosts

Next steps

  • More plugins!
  • Spread the word about the tool so that, together, we make it easier to troubleshoot problems and detect security issues, incorrect configurations, etc.
  • Momentum: many tools die from having a single developer working on them in their spare time; getting contributions is essential for any project.
  • Write more tests in Magui to identify more cases where problems show up at the level of groups of systems rather than individual systems.

Other resources

Blog posts:

Questions?

Thanks for attending!!

Come to #citellus on Freenode, https://t.me/citellusUG on Telegram, or contact us:

by Pablo Iranzo Gómez at May 16, 2019 05:30 PM

May 14, 2019

Lars Kellogg-Stedman

A DIY CPAP Battery Box

A year or so ago I was diagnosed with sleep apnea, and since then I've been sleeping with a CPAP. This past weekend, I joined my daughter on a scout camping trip to a campground without readily accessible electricity. This would be the first time I found myself in this situation, and as the date approached, I realized I was going to have to build or buy some sort of battery solution for my CPAP.

May 14, 2019 12:00 AM

May 07, 2019

Lars Kellogg-Stedman

Unpacking a Python regular expression

I recently answered a question from Harsha Nalore on StackOverflow that involved using Ansible to extract the output of a command sent to a BigIP device of some sort. My solution – which I claim to be functional, but probably not optimal – involved writing an Ansible filter module to parse the output. That filter made use of a complex-looking regular expression. Harsha asked for some details on how that regular expression works, and the existing StackOverflow answer didn't really seem the right place for that: so, here we are.

May 07, 2019 10:00 AM

New comment system

As long as I'm switching site generators, it seems like a good idea to refresh the comment system as well. I've been using Disqus for a while, since when I started it was one of the only games in town. There are now alternatives of different sorts, and one in particular caught my eye: Utterances uses GitHub issues for storing comments, which seems like a fantastic idea. That means that comments will finally be stored in the same place as the blog content, which I think is a happy state of affairs.

May 07, 2019 09:00 AM

May 06, 2019

Lars Kellogg-Stedman

New static site generator

I've switched my static site generator from Pelican to Hugo. I've tried to ensure that all the old links continue to work correctly, but if you notice anything missing or otherwise not working as intended, please let me know by opening an issue. Thanks!

May 06, 2019 12:00 AM

April 26, 2019

RDO Blog

RDO Stein Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Stein for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Stein is the 19th release from the OpenStack project, which is the work of more than 1200 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-stein/.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

Photo by Yucel Moran on Unsplash

New and Improved

Interesting things in the Stein release include:

  • Ceph Nautilus is now the default version of Ceph within RDO. Ceph, a free-software storage platform, implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Within Nautilus, the Ceph Dashboard has gained a lot of new functionality like support for multiple users / roles, SSO (SAMLv2) for user authentication, auditing support, a new landing page showing more metrics and health info, I18N support, and REST API documentation with Swagger API.

  • The extracted Placement service, used to track cloud resource inventories and usages to help other services effectively manage and allocate their resources, is now packaged as part of RDO. Placement has added the ability to target a candidate resource provider, easing specifying a host for workload migration, increased API performance by 50% for common scheduling operations, and simplified the code by removing unneeded complexity, easing future maintenance.

Other improvements include:

  • The TripleO deployment service, used to develop and maintain tooling and infrastructure able to deploy OpenStack in production, using OpenStack itself wherever possible, added support for podman and buildah for containers and container images. Open Virtual Network (OVN) is now the default network configuration, and TripleO now has improved composable network support for creating L3 routed networks, as well as IPv6 network support.

Contributors

During the Stein cycle, we saw the following new RDO contributors:

  • Sławek Kapłoński
  • Tobias Urdin
  • Lee Yarwood
  • Quique Llorente
  • Arx Cruz
  • Natal Ngétal
  • Sorin Sbarnea
  • Aditya Vaja
  • Panda
  • Spyros Trigazis
  • Cyril Roelandt
  • Pranali Deore
  • Grzegorz Grasza
  • Adam Kimball
  • Brian Rosmaita
  • Miguel Duarte Barroso
  • Gauvain Pocentek
  • Akhila Kishore
  • Martin Mágr
  • Michele Baldessari
  • Chuck Short
  • Gorka Eguileor

Welcome to all of you and Thank You So Much for participating!

But we wouldn’t want to overlook anyone. A super massive Thank You to all 74 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

  • yatin
  • Sagi Shnaidman
  • Wes Hayutin
  • Rlandy
  • Javier Peña
  • Alfredo Moralejo
  • Bogdan Dobrelya
  • Sławek Kapłoński
  • Alex Schultz
  • Emilien Macchi
  • Lon
  • Jon Schlueter
  • Luigi Toscano
  • Eric Harney
  • Tobias Urdin
  • Chandan Kumar
  • Nate Johnston
  • Lee Yarwood
  • rabi
  • Quique Llorente
  • Chandan Kumar
  • Luka Peschke
  • Carlos Goncalves
  • Arx Cruz
  • Kashyap Chamarthy
  • Cédric Jeanneret
  • Victoria Martinez de la Cruz
  • Bernard Cafarelli
  • Natal Ngétal
  • hjensas
  • Tristan de Cacqueray
  • Marc Dequènes (Duck)
  • Juan Antonio Osorio Robles
  • Sorin Sbarnea
  • Rafael Folco
  • Nicolas Hicher
  • Michael Turek
  • Matthias Runge
  • Giulio Fidente
  • Juan Badia Payno
  • Zoltan Caplovic
  • agopi
  • marios
  • Ilya Etingof
  • Steve Baker
  • Aditya Vaja
  • Panda
  • Florian Fuchs
  • Martin André
  • Dmitry Tantsur
  • Sylvain Baubeau
  • Jakub Ružička
  • Dan Radez
  • Honza Pokorny
  • Spyros Trigazis
  • Cyril Roelandt
  • Pranali Deore
  • Grzegorz Grasza
  • Bnemec
  • Adam Kimball
  • Haikel Guemar
  • Daniel Mellado
  • Bob Fournier
  • Nmagnezi
  • Brian Rosmaita
  • Ade Lee
  • Miguel Duarte Barroso
  • Alan Bishop
  • Gauvain Pocentek
  • Akhila Kishore
  • Martin Mágr
  • Michele Baldessari
  • Chuck Short
  • Gorka Eguileor

The Next Release Cycle

At the end of one release, focus shifts immediately to the next, Train, which has an estimated GA the week of 14-18 October 2019. The full schedule is available at https://releases.openstack.org/train/schedule.html.

Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 13-14 June 2019 for Milestone One and 16-20 September 2019 for Milestone Three.

Get Started

There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
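
If you want a rough idea of what that looks like, the commands below sketch a minimal all-in-one Packstack run on CentOS 7 (assuming a fresh, updated system; see the RDO quickstart documentation for the complete, current steps):

sudo yum install -y centos-release-openstack-stein
sudo yum update -y
sudo yum install -y openstack-packstack

# deploy a proof-of-concept cloud on this single node
sudo packstack --allinone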

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help

The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.

Get Involved

To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Rain Leander at April 26, 2019 11:03 PM

Lars Kellogg-Stedman

Adding support for privilege escalation to Ansible's docker connection driver

I often use Docker to test out Ansible playbooks. While normally that works great, I recently ran into an unexpected problem with privilege escalation. Given a simple playbook like this:

---
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - ping:

And an inventory like this:

all:
  vars:
    ansible_user: example
    ansible_connection: docker
  hosts …

by Lars Kellogg-Stedman at April 26, 2019 04:00 AM

Adding support for privilege escalation to Ansible's docker connection driver

Update 2019-05-09 Pull request #55816 has merged, so you can now use sudo with the docker connection driver even when sudo is configured to require a password. I often use Docker to test out Ansible playbooks. While normally that works great, I recently ran into an unexpected problem with privilege escalation. Given a simple playbook like this: --- - hosts: all gather_facts: false become: true tasks: - ping: And an inventory like this:

April 26, 2019 12:00 AM

April 25, 2019

Lars Kellogg-Stedman

Writing Ansible filter plugins

I often see questions from people who are attempting to perform complex text transformations in their Ansible playbooks. While I am a huge fan of Ansible, data transformation is not one of its strong points. For example, this past week someone asked a question on Stack Overflow in which they …

by Lars Kellogg-Stedman at April 25, 2019 04:00 AM

Writing Ansible filter plugins

I often see questions from people who are attempting to perform complex text transformations in their Ansible playbooks. While I am a huge fan of Ansible, data transformation is not one of its strong points. For example, this past week someone asked a question on Stack Overflow in which they were attempting to convert the output of the keytool command into a list of dictionaries. The output of the keytool -list -v command looks something like this:

April 25, 2019 12:00 AM

April 18, 2019

Emilien Macchi

Day 2 operations in OpenStack TripleO (episode 1: scale-down)

Scale-up and scale-down are probably the most common operations done after the initial deployment. Let’s see how they are getting improved. This first episode is about scale-down precisely.

How it works now

Right now, when an operator runs the “openstack overcloud node delete” command, it updates the Heat stack to remove the resources associated with the node(s) being deleted. This can be problematic for some services like Nova, Neutron and the Subscription Manager, which need to be torn down before the server is deleted.
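
For reference, a hedged example of that command as it is run today (the stack name and node identifier are placeholders):

# remove one node from the overcloud; <node> is the UUID or name of the overcloud server
openstack overcloud node delete --stack overcloud <node>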

Proposal

The idea is to create an interface where we can run Ansible tasks which will be executed during the scale-down, before the nodes get deleted by Heat. The Ansible tasks will live next to the deployment / upgrade / … tasks that are in TripleO Heat Templates. Here is an example with Red Hat Subscription Management:
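
As a rough illustration of the kind of clean-up such a scale-down task would automate (these are the manual commands an operator would otherwise run by hand, not the actual tasks interface):

# on the node being removed: return the RHSM subscription
sudo subscription-manager unregister

# from a node with admin credentials: drop the stale service records (IDs are placeholders)
openstack compute service list
openstack compute service delete <service-id>
openstack network agent list
openstack network agent delete <agent-id>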

It involves 3 changes:

What’s next?

  • Getting reviews & feedback on the 3 patches
  • Implement scale down tasks for Neutron, Nova and Ceph, waiting for this feature
  • Looking at scale-up tasks

Demo 1

This demo shows a node being unsubscribed when the overcloud is scaled down.

Demo 2

This demo show a compute node being removed from the Overcloud.

by Emilien at April 18, 2019 03:21 PM

March 14, 2019

Adam Young

Building the Kolla Keystone Container

Kolla has become the primary source of Containers for running OpenStack services. Since it has been a while since I tried deliberately running just the Keystone container, I decided to build the Kolla version from scratch and run it.

UPDATE: Ozz wrote it already, and did it better: http://jaormx.github.io/2017/testing-containerized-openstack-services-with-kolla/

I had a clone of the Kolla repo already, but if you need one, you can get it by cloning

git clone git://git.openstack.org/openstack/kolla

All of the dependencies you need to run the build process are handled by tox. Assuming you can run tox elsewhere, you can use that here, too:

tox -e py35

That will run through all the unit tests. They do not take that long.

To build all of the containers you can activate the virtual environment and then use the build tool. That takes quite a while, since there are a lot of containers required to run OpenStack.

$ . .tox/py35/bin/activate
(py35) [ayoung@ayoungP40 kolla]$ tools/build.py 

If you want to build just the keystone containers….

 python tools/build.py keystone

Building this with no base containers cached took me 5 minutes. Delta builds should be much faster.

Once the build is complete, you will have a bunch of container images defined on your system:

REPOSITORY TAG IMAGE ID CREATED SIZE
kolla/centos-binary-keystone 7.0.2 69049739bad6 33 minutes ago 800 MB
kolla/centos-binary-keystone-fernet 7.0.2 89977265fcbb 33 minutes ago 800 MB
kolla/centos-binary-keystone-ssh 7.0.2 4b377e854980 33 minutes ago 819 MB
kolla/centos-binary-barbican-keystone-listener 7.0.2 6265d0acff16 33 minutes ago 732 MB
kolla/centos-binary-keystone-base 7.0.2 b6d78b9e0769 33 minutes ago 774 MB
kolla/centos-binary-barbican-base 7.0.2 ccd7b4ff311f 34 minutes ago 706 MB
kolla/centos-binary-openstack-base 7.0.2 38dbb3c57448 34 minutes ago 671 MB
kolla/centos-binary-base 7.0.2 177c786e9b01 36 minutes ago 419 MB
docker.io/centos 7 1e1148e4cc2c 3 months ago 202 MB

Note that the build instructions live in the git repo under docs.

by Adam Young at March 14, 2019 03:43 PM

February 24, 2019

Lars Kellogg-Stedman

Docker build learns about secrets and ssh agent forwarding

A common problem for folks working with Docker is accessing resources which require authentication during the image build step. A particularly common use case is getting access to private git repositories using ssh key-based authentication. Until recently there hasn't been a great solution:

  • you can embed secrets in your image …

by Lars Kellogg-Stedman at February 24, 2019 05:00 AM

Docker build learns about secrets and ssh agent forwarding

A common problem for folks working with Docker is accessing resources which require authentication during the image build step. A particularly common use case is getting access to private git repositories using ssh key-based authentication. Until recently there hasn't been a great solution:

  • you can embed secrets in your image, but now you can't share the image with anybody.
  • you can use build arguments, but this requires passing in an unencrypted private key on the docker build command line, which is suboptimal for a number of reasons.
  • you can perform all the steps requiring authentication at runtime, but this can needlessly complicate your container startup process.
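
As a rough sketch of the new BuildKit features the post refers to (the image name, base image and repository URL are placeholders), building an image that clones a private repository over a forwarded ssh agent looks something like this:

cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:experimental
FROM alpine:3.9
RUN apk add --no-cache git openssh-client
# the ssh mount exposes the forwarded ssh agent only during this RUN step
RUN --mount=type=ssh \
    mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts && \
    git clone git@github.com:example/private-repo.git /src
EOF

# BuildKit must be enabled; --ssh default forwards the local ssh agent into the build
DOCKER_BUILDKIT=1 docker build --ssh default -t private-clone-example .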

February 24, 2019 12:00 AM

February 12, 2019

Emilien Macchi

OpenStack Containerization with Podman – Part 5 (Image Build)

For this fifth episode, we’ll explain how we will build containers with Buildah. Don’t miss the first, second, third and fourth episodes where we learnt how to deploy, operate, upgrade and monitor Podman containers.

In this post, we’ll look at the work that lets us replace Docker with Buildah to build our container images.

Context

In OpenStack TripleO, we have nearly 150 images (all layers included) for all the services that we can deploy. Of course you don’t need to build them all when deploying your OpenStack cloud, but in our production chain we build them all and push the images to a container registry, consumable by the community.

Historically, we have been using “kolla-build” and the process to leverage the TripleO images build is documented here.

Implementation

kolla-build only supports the Docker CLI at this time, and we recognized that changing its code to support something else sounded like a painful plan, as Docker was hardcoded almost everywhere.

We decided to leverage kolla-build to generate the templates of the images, which is actually a tree of Dockerfiles, one per container.

The dependencies format generated by Kolla is a JSON:

So what we do is that when running:

openstack overcloud container image build --use-buildah

We will call kolla-build with --list-dependencies, which generates a directory per image, where we have a Dockerfile + other things needed during the builds.

Anyway, bottom line is: we still use Kolla to generate our templates but don’t want Docker to actually build the images.

In tripleo-common, we are implementing a build and push that will leverage “buildah bud” and “buildah push”.

“buildah bud” is a good fit for us because it allows us to use the same logic and format as before with Docker (bud == build-using-dockerfile).
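
As a rough illustration (the registry address, image name and template path are placeholders), driving those two commands from a shell looks like this, reusing the Dockerfiles generated by kolla-build:

# build one image from the Dockerfile generated by kolla-build
buildah bud --format docker \
    -t 192.168.24.1:8787/tripleomaster/centos-binary-keystone:latest \
    /path/to/kolla-generated/keystone/

# push the resulting image to the registry
buildah push --tls-verify=false \
    192.168.24.1:8787/tripleomaster/centos-binary-keystone:latest \
    docker://192.168.24.1:8787/tripleomaster/centos-binary-keystone:latest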

The main challenge for us is that our images aren’t small, and we have a lot of images to build in our production chain. So we decided to parallelize the last layers of the images (which don’t have children).

For example, 2 images at the same layer level will be built together, and a child won’t be built in parallel with its parent layer.

Here is a snippet of the code that will take the dependencies dictionary and build our containers:

Without the “fat daemon” that is Docker, using Buildah brings some challenges: running multiple builds at the same time can be slow because of the locks used to avoid race conditions and database corruption. So we capped the number of workers at 8, to avoid making Buildah lock too hard on the system.

What about performance? This question is still under investigation. We are still testing our code and measuring how much time it takes to build our images with Buildah. One thing is sure: you don’t want to use the vfs storage backend; use overlayfs instead. To do so, you’ll need to run at least Fedora 28 with a 4.18 kernel and install fuse-overlayfs, and Buildah should use this backend by default.

Demo

Please select full screen:

In the next episode, we’ll see how we are replacing the docker-registry by a simple web server. Stay tuned!

by Emilien at February 12, 2019 03:22 AM

February 11, 2019

RDO Blog

python-tempestconf's journey

For those who are not familiar with the python-tempestconf, it’s a tool for generating a tempest configuration file, which is required for running Tempest tests against a live OpenStack cluster. It queries a cloud and automatically discovers cloud settings, which weren’t provided by a user.

Internal project

In August 2016 the config_tempest tool was decoupled from the Red Hat Tempest fork and the python-tempestconf repository was created under the github redhat-openstack organization. The tool became an internal tool used for generating tempest.conf in downstream jobs which were running Tempest.

Why we like `python-tempestconf`

The reason is quite simple. We at Red Hat were (and still are) running many different OpenStack jobs with different configurations which execute Tempest. And that is where python-tempestconf stepped in. We didn’t have to implement the logic for creating or modifying tempest.conf within the job configuration; we just used python-tempestconf, which did that for us. It’s not only about generating tempest.conf itself, because the tool also creates basic users, uploads an image and creates basic flavors, all of which are required for running Tempest tests.

Usage of python-tempestconf was also beneficial for engineers who liked the idea of not struggling with creating a tempest.conf file from scratch but rather using the tool which was able to generate it for them. The generated tempest.conf was sufficient for running simple Tempest tests.

Imagine you have a fresh OpenStack deployment and you want to run some Tempest tests, because you want to make sure that the deployment was successful. In order to do that, you can run python-tempestconf, which will do the basic configuration for you and generate a tempest.conf, and then execute Tempest. That’s it, isn’t it easy?

I have to admit, when I joined Red Hat and more specifically the OpenStack team, I kind of struggled with all the information about OpenStack and Tempest; it was too much new information. Therefore I really liked that I could generate a tempest.conf which I could use for running just basic tests. If I had to generate the tempest.conf myself, my learning process would have been a little bit slower. Therefore, I’m really grateful that we had the tool at that time.

Shipping in a package

At the beginning of 2017 we started to ship the python-tempestconf rpm package. It’s available in RDO repositories from Ocata and higher. The python-tempestconf package is also installed as a dependency of the openstack-tempest package. So if a user installs openstack-tempest, python-tempestconf will be installed as well. At this time, we also changed the entrypoint and the tool is executed via the discover-tempest-config command. However, you could have already read all about it in this article.
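
For instance, on an RDO-based system a minimal run could look something like this (a sketch; it sources admin credentials first and lets the tool create the basic resources it needs):

# tempest pulls in python-tempestconf as a dependency
sudo yum install -y openstack-tempest

# load admin credentials (keystonerc_admin on packstack, an openrc file elsewhere)
source ~/keystonerc_admin

# generate tempest.conf, creating the basic users, image and flavors if needed
discover-tempest-config --create --debug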

Upstream project

By the end of 2017 python-tempestconf became an upstream project and moved under the OpenStack organization.

We have significantly improved the tool since then, not only its code but also its documentation, which contains all the required information for a user, see here. In my opinion every project which is designed for a wider audience of users (python-tempestconf is an upstream project, so this condition is fulfilled) should have proper documentation. By following python-tempestconf’s documentation, any user should be able to execute it, set the desired arguments and set some special tempest options without any major problems.

I would say that there are 3 biggest improvements. One of them is the user documentation, which I’ve already mentioned. The second and third are improvements to the code itself: the os-client-config integration and a refactoring of the code in order to simplify adding new OpenStack services the tool can generate config for.

os-client-config is a library for collecting client configuration for using an OpenStack cloud in a consistent way. By importing the library a user can specify OpenStack credentials in 2 different ways:

  • Using OS_* environment variables, which is maybe the most common way. It requires sourcing credentials before running python-tempestconf. In the case of a packstack environment, it’s the keystonerc_admin/demo file, and in the case of devstack there is the openrc script.
  • Using the --os-cloud parameter, which takes one argument – the name of the cloud which holds the required credentials. Those are stored in a clouds.yaml file, as shown in the sketch below.
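
A minimal sketch of that second option, assuming a cloud named mycloud (all values are placeholders):

mkdir -p ~/.config/openstack
cat > ~/.config/openstack/clouds.yaml <<'EOF'
clouds:
  mycloud:
    auth:
      auth_url: http://192.0.2.10:5000/v3
      username: admin
      password: secret
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default
EOF

# generate tempest.conf using the credentials stored under 'mycloud'
discover-tempest-config --os-cloud mycloud --create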

The second code improvement was the simplification of adding new OpenStack services the tool can generate tempest.conf for. If you want to add a service, just create a bug in our storyboard, see python-tempestconf’s contributor guide. If you feel like it, you can also implement it. Adding a new service requires creating a new file, representing the service and implementing a few required methods.

To conclude

The tool has gone through major refactoring and has been significantly improved since it was moved to its own repository in August 2016. If you’re a Tempest user, I’d recommend you try python-tempestconf if you haven’t already.

by mkopec at February 11, 2019 02:50 PM

Lars Kellogg-Stedman

In which I PEBKAC so you don't have to

Say you have a simple bit of code:

#include <avr/io.h>
#include <util/delay.h>

#define LED_BUILTIN _BV(PORTB5)

int main(void) 
{
    DDRB |= LED_BUILTIN;

    while (1)
    {
        PORTB |= LED_BUILTIN;   // turn on led
        _delay_ms(1000);        // delay 1s

        PORTB &= ~LED_BUILTIN;  // turn off led
        _delay_ms(1000);        // delay 1s
    }                                                
}

You have a Makefile that …

by Lars Kellogg-Stedman at February 11, 2019 05:00 AM

In which I PEBKAC so you don't have to

Say you have a simple bit of code: #include <avr/io.h> #include <util/delay.h> #define LED_BUILTIN _BV(PORTB5) int main(void) { DDRB |= LED_BUILTIN; while (1) { PORTB |= LED_BUILTIN; // turn on led _delay_ms(1000); // delay 1s PORTB &= ~LED_BUILTIN; // turn off led _delay_ms(1000); // delay 1s } } You have a Makefile that compiles that into an object (.o) file like this: avr-gcc -mmcu=atmega328p -DF_CPU=16000000 -Os -c blink.c If you were to forget to set the device type when compiling your .

February 11, 2019 12:00 AM

February 05, 2019

Carlos Camacho

TripleO - Deployment configurations

This post is a summary of the deployments I usually test for deploying TripleO using quickstart.

The following steps need to run in the Hypervisor node in order to deploy both the Undercloud and the Overcloud.

You need to execute them one after the other, the idea of this recipe is to have something just for copying/pasting.

Once the last step ends you can/should be able to connect to the Undercloud VM to start operating your Overcloud deployment.
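
For reference, once quickstart finishes it drops an ssh configuration on the Hypervisor, so connecting usually looks like this (assuming the default quickstart paths):

# from the Hypervisor, as the user that ran quickstart.sh
ssh -F ~/.quickstart/ssh.config.ansible undercloud

# once on the Undercloud, load the stack credentials and check the deployment
source ~/stackrc
openstack server list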

The usual steps are:

01 - Prepare the hypervisor node.

Now, let’s install some dependencies. Same Hypervisor node, same root user.

# In this dev. env. /var is only 50GB, so I will create
# a sym link to another location with more capacity.
# It will take easily more than 50GB deploying a 3+1 overcloud
sudo mkdir -p /home/libvirt/
sudo ln -sf /home/libvirt/ /var/lib/libvirt

# Disable IPv6 lookups
# sudo bash -c "cat >> /etc/sysctl.conf" << EOL
# net.ipv6.conf.all.disable_ipv6 = 1
# net.ipv6.conf.default.disable_ipv6 = 1
# EOL
# sudo sysctl -p

# Enable IPv6 in kernel cmdline
# sed -i s/ipv6.disable=1/ipv6.disable=0/ /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

sudo yum groupinstall "Virtualization Host" -y
sudo yum install git lvm2 lvm2-devel -y
sudo yum install libvirt-python python-lxml libvirt -y

02 - Create the toor user (from the Hypervisor node, as root).

sudo useradd toor
echo "toor:toor" | sudo chpasswd
echo "toor ALL=(root) NOPASSWD:ALL" \
  | sudo tee /etc/sudoers.d/toor
sudo chmod 0440 /etc/sudoers.d/toor
sudo su - toor

cd
mkdir .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
cat .ssh/id_rsa.pub | sudo tee -a /root/.ssh/authorized_keys
echo '127.0.0.1 127.0.0.2' | sudo tee -a /etc/hosts

export VIRTHOST=127.0.0.2
ssh root@$VIRTHOST uname -a

Now, follow as the toor user and prepare the Hypervisor node for the deployment.

03 - Clone repos and install deps.

git clone \
  https://github.com/openstack/tripleo-quickstart
chmod u+x ./tripleo-quickstart/quickstart.sh
bash ./tripleo-quickstart/quickstart.sh \
  --install-deps
sudo setenforce 0

Export some variables used in the deployment command.

04 - Export common variables.

export CONFIG=~/deploy-config.yaml
export VIRTHOST=127.0.0.2

Now we will create the configuration file used for the deployment, depending on the file you choose you will deploy different environments.

05 - Click on the environment description to expand the recipe.

OpenStack [Containerized & HA] - 1 Controller, 1 Compute

cat > $CONFIG << EOF
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: compute_0
    flavor: compute
    virtualbmc_port: 6231
node_count: 2
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
OpenStack [Containerized & HA] - 3 Controllers, 1 Compute

cat > $CONFIG << EOF
overcloud_nodes:
  - name: control_0
    flavor: control
    virtualbmc_port: 6230
  - name: control_1
    flavor: control
    virtualbmc_port: 6231
  - name: control_2
    flavor: control
    virtualbmc_port: 6232
  - name: compute_1
    flavor: compute
    virtualbmc_port: 6233
node_count: 4
containerized_overcloud: true
delete_docker_cache: true
enable_pacemaker: true
run_tempest: false
extra_args: >-
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  --control-scale 3
  --compute-scale 1
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
EOF
OpenShift [Containerized] - 1 Controller, 1 Compute

cat > $CONFIG << EOF
# Original from https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/featureset033.yml
composable_scenario: scenario009-multinode.yaml
deployed_server: true

network_isolation: false
enable_pacemaker: false
overcloud_ipv6: false
containerized_undercloud: true
containerized_overcloud: true

# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: false
undercloud_enable_validations: false

# This enables the deployment of the overcloud with SSL.
ssl_overcloud: false

# Centos Virt-SIG repo for atomic package
add_repos:
  # NOTE(trown) The atomic package from centos-extras does not work for
  # us but its version is higher than the one from the virt-sig. Hence,
  # using priorities to ensure we get the virt-sig package.
  - type: package
    pkg_name: yum-plugin-priorities
  - type: generic
    reponame: quickstart-centos-paas
    filename: quickstart-centos-paas.repo
    baseurl: https://cbs.centos.org/repos/paas7-openshift-origin311-candidate/x86_64/os/
  - type: generic
    reponame: quickstart-centos-virt-container
    filename: quickstart-centos-virt-container.repo
    baseurl: https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/
    includepkgs:
      - atomic
    priority: 1

extra_args: ''

container_args: >-
  # If Pike or Queens
  #-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
  # If Ocata, Pike, Queens or Rocky
  #-e /home/stack/containers-default-parameters.yaml
  # If >= Stein
  -e /home/stack/containers-prepare-parameter.yaml

  -e /usr/share/openstack-tripleo-heat-templates/openshift.yaml
# NOTE(mandre) use container images mirrored on the dockerhub to take advantage
# of the proxy setup by openstack infra
docker_openshift_etcd_namespace: docker.io/
docker_openshift_cluster_monitoring_namespace: docker.io/tripleomaster
docker_openshift_cluster_monitoring_image: coreos-cluster-monitoring-operator
docker_openshift_configmap_reload_namespace: docker.io/tripleomaster
docker_openshift_configmap_reload_image: coreos-configmap-reload
docker_openshift_prometheus_operator_namespace: docker.io/tripleomaster
docker_openshift_prometheus_operator_image: coreos-prometheus-operator
docker_openshift_prometheus_config_reload_namespace: docker.io/tripleomaster
docker_openshift_prometheus_config_reload_image: coreos-prometheus-config-reloader
docker_openshift_kube_rbac_proxy_namespace: docker.io/tripleomaster
docker_openshift_kube_rbac_proxy_image: coreos-kube-rbac-proxy
docker_openshift_kube_state_metrics_namespace: docker.io/tripleomaster
docker_openshift_kube_state_metrics_image: coreos-kube-state-metrics

deploy_steps_ansible_workflow: true
config_download_args: >-
  -e /home/stack/config-download.yaml
  --disable-validations
  --verbose
composable_roles: true

overcloud_roles:
  - name: Controller
    CountDefault: 1
    tags:
      - primary
      - controller
    networks:
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant
  - name: Compute
    CountDefault: 0
    tags:
      - compute
    networks:
      - External
      - InternalApi
      - Storage
      - StorageMgmt
      - Tenant

tempest_config: false
test_ping: false
run_tempest: false
EOF


From the Hypervisor, as the toor user run the deployment command to deploy both your Undercloud and Overcloud.

06 - Deploy TripleO.

bash ./tripleo-quickstart/quickstart.sh \
      --clean          \
      --release master \
      --teardown all   \
      --tags all       \
      -e @$CONFIG      \
      $VIRTHOST

Updated 2019/02/05: Initial version.

Updated 2019/02/05: TODO: Test the OpenShift deployment.

Updated 2019/02/06: Added some clarifications about where the commands should run.

by Carlos Camacho at February 05, 2019 12:00 AM

February 04, 2019

Matthias Runge

Collectd contributors meetup in Brussels

Last week, on January 31 and Feb 01, we had a contributors meetup of collectd in Brussels, just before FOSDEM. Thanks to our generous sponsors Camptocamp, Intel, and Red Hat, we were able to meet and to talk about various topics. In total, there were 15 persons attending, and if …

by mrunge at February 04, 2019 10:40 AM