Planet RDO

October 27, 2021

RDO Blog

RDO Xena Released


The RDO community is pleased to announce the general availability of the RDO build for OpenStack Xena for RPM-based distributions, CentOS Stream and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Xena is the 24th release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.


The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-xena/.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Stream and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
PLEASE NOTE: RDO Xena provides packages for CentOS Stream 8 only. Please use the Victoria release for CentOS Linux 8, which will reach End of Life (EOL) on December 31st, 2021 (https://www.centos.org/centos-linux-eol/).

Interesting things in the Xena release include:
  • The python-oslo-limit package has been added to RDO. This is the limit enforcement library which assists with quota calculation. Its aim is to provide support for quota enforcement across all OpenStack services.
  • The glance-tempest-plugin package has been added to RDO. This package provides a set of functional tests to validate Glance using the Tempest framework.
  • TripleO has been moved to an independent release model (see section TripleO in the RDO Xena release).

The highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/xena/highlights.html


TripleO in the RDO Xena release:
In the Xena development cycle, TripleO has moved to an Independent release model (https://specs.openstack.org/openstack/tripleo-specs/specs/xena/tripleo-independent-release.html) and will only maintain branches for selected OpenStack releases. In the case of Xena, TripleO will not support the Xena release. For TripleO users in RDO, this means that:
  • RDO Xena will include packages for TripleO tested at OpenStack Xena GA time.
  • Those packages will not be updated during the entire Xena maintenance cycle.
  • RDO will not be able to include patches required to fix bugs in TripleO on RDO Xena.
  • The lifecycle for the non-TripleO packages will follow the code merged and tested in upstream stable/Xena branches.
  • There will not be any TripleO Xena container images built/pushed, so interested users will have to do their own container builds when deploying Xena.
You can find details about this on the RDO webpage.
Contributors

During the Xena cycle, we saw the following new RDO contributors:

  • Chris Sibbitt
  • Gregory Thiemonge
  • Julia Kreger
  • Leif Madsen
Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 41 contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories:
  • Alan Bishop
  • Alan Pevec
  • Alex Schultz
  • Alfredo Moralejo
  • Amy Marrich (spotz)
  • Bogdan Dobrelya
  • Chandan Kumar
  • Chris Sibbitt
  • Damien Ciabrini
  • Dmitry Tantsur
  • Eric Harney
  • Gaël Chamoulaud
  • Giulio Fidente
  • Goutham Pacha Ravi
  • Gregory Thiemonge
  • Grzegorz Grasza
  • Harald Jensas
  • James Slagle
  • Javier Peña
  • Jiri Podivin
  • Joel Capitao
  • Jon Schlueter
  • Julia Kreger
  • Lee Yarwood
  • Leif Madsen
  • Luigi Toscano
  • Marios Andreou
  • Mark McClain
  • Martin Kopec
  • Mathieu Bultel
  • Matthias Runge
  • Michele Baldessari
  • Pranali Deore
  • Rabi Mishra
  • Riccardo Pittau
  • Sagi Shnaidman
  • Sławek Kapłoński
  • Steve Baker
  • Takashi Kajinami
  • Wes Hayutin
  • Yatin Karel


The Next Release Cycle

At the end of one release, focus shifts immediately to the next release, i.e., Yoga.

Get Started

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help

The RDO Project has the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on OFTC IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel in Libera.Chat network, and #tripleo on OFTC), however we have a more focused audience within the RDO venues.

Get Involved

To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the OFTC IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Amy Marrich at October 27, 2021 02:59 PM

September 30, 2021

Adam Young

Legible Error traces from openstack server show

If an OpenStack server (Ironic or Nova) has an error, it shows up in a nested field. That field is hard to read in its normal layout due to JSON formatting. Using jq to strip the formatting helps a bunch.

The nested field is fault.details.

The -r option strips off the quotes.

[ayoung@ayoung-home scratch]$ openstack server show oracle-server-84-aarch64-vm-small -f json | jq -r '.fault | .details'
Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/compute/manager.py", line 2437, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 3458, in spawn
    block_device_info=block_device_info)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 3831, in _create_image
    fallback_from_host)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 3922, in _create_and_inject_local_root
    instance, size, fallback_from_host)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 9243, in _try_fetch_image_cache
    trusted_certs=instance.trusted_certs)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/imagebackend.py", line 275, in cache
    *args, **kwargs)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/imagebackend.py", line 642, in create_image
    self.verify_base_size(base, size)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/imagebackend.py", line 331, in verify_base_size
    flavor_size=size, image_size=base_size)
nova.exception.FlavorDiskSmallerThanImage: Flavor's disk is too small for requested image. Flavor disk is 21474836480 bytes, image is 34359738368 bytes.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/compute/manager.py", line 2161, in _do_build_and_run_instance
    filter_properties, request_spec)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/compute/manager.py", line 2525, in _build_and_run_instance
    reason=e.format_message())
nova.exception.BuildAbortException: Build of instance 5281b93a-0c3c-4d38-965d-568d79abb530 aborted: Flavor's disk is too small for requested image. Flavor disk is 21474836480 bytes, image is 34359738368 bytes.
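For hosts without jq, the same extraction can be sketched in Python. This is a hypothetical illustration: the trimmed payload below stands in for the real `openstack server show -f json` output, and the byte counts are the ones from the exception above.

```python
import json

# Trimmed stand-in for `openstack server show <name> -f json` output
# (hypothetical sample data, not real command output).
payload = '''
{
  "fault": {
    "message": "Flavor's disk is too small for requested image.",
    "details": "Traceback (most recent call last): ..."
  }
}
'''

server = json.loads(payload)
# Equivalent of: jq -r '.fault | .details'
print(server["fault"]["details"])

# The exception reports raw byte counts; converting to GiB makes the
# mismatch obvious:
flavor_disk = 21474836480
image_size = 34359738368
print(flavor_disk // 2**30, "GiB flavor vs", image_size // 2**30, "GiB image")
# -> 20 GiB flavor vs 32 GiB image
```

In other words, the flavor offers a 20 GiB disk while the image needs 32 GiB, which is exactly what the traceback is complaining about.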

by Adam Young at September 30, 2021 08:13 PM

Debugging a Clean Failure in Ironic

My team is running a small OpenStack cluster with responsibility for providing bare metal nodes via Ironic. Currently, we have a handful of nodes that are not usable. They show up as “Cleaning failed.” I’m learning how to debug this process.

Tools

The following ipmitool commands allow us to set the machine to PXE boot, remote power cycle the machine, and view what happens during the boot process.

Power stuff:

ipmitool -H $H -U $U -I lanplus -P $P chassis power status
ipmitool -H $H -U $U -I lanplus -P $P chassis power on
ipmitool -H $H -U $U -I lanplus -P $P chassis power off
ipmitool -H $H -U $U -I lanplus -P $P chassis power cycle

Serial over LAN (SOL)

ipmitool -H $H -U $U -I lanplus -P $P sol activate

PXE Boot

ipmitool -H $H -U $U -I lanplus -P $P chassis bootdev pxe
#Set Boot Device to pxe

Conductor Log

To tail the log and only see entries relevant to the UUID of the node I am cleaning:

tail -f /var/log/kolla/ironic/ironic-conductor.log | grep $UUID

OpenStack baremetal node commands

What is the IPMI address for a node?

openstack baremetal node show fab1bcf7-a7fc-4c19-9d1d-fc4dbc4b2281 -f json | jq '.driver_info | .ipmi_address'
"10.76.97.171"

Cleaning Commands

We have a script that prepares the PXE server to accept a cleaning request from a node. It performs the following three actions (don’t do these yet):

 
 openstack baremetal node maintenance unset ${i}
 openstack baremetal node manage ${i}
 openstack baremetal node provide ${i}

Getting ipmi addresses for nodes

To look at the IPMI power status (and confirm that IPMI is set up right for the nodes):

for node in `openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="clean failed")  | .UUID' ` ; 
do    
echo $node ; 
METAL_IP=`openstack baremetal node show  $node -f json | jq -r  '.driver_info | .ipmi_address' ` ; 
echo $METAL_IP  ; 
ipmitool -I lanplus -H  $METAL_IP  -L ADMINISTRATOR -U admin -R 12 -N 5 -P admin chassis power status   ; 
done 

Yes, I did that all on one line, hence the semicolons.

A couple of other one-liners. This one selects all active nodes and gives you their node ID and IPMI IP address.

for node in `openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="active")  | .UUID' ` ; do echo $node ;  openstack baremetal node show  $node -f json | jq -r  '.driver_info | .ipmi_address' ;done

And you can swap out active with other values. For example, if you want to see what nodes are in either error or clean failed states:

openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="error" or ."Provisioning State"=="clean failed")  | .UUID'
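When jq is not available, the same filter can be sketched in Python. This is a hypothetical example; the sample listing below stands in for `openstack baremetal node list -f json` output and reuses UUIDs from this post.

```python
import json

# Hypothetical stand-in for `openstack baremetal node list -f json` output.
listing = '''
[
  {"UUID": "8470e638-0085-470c-9e51-b2ed016569e1", "Provisioning State": "clean failed"},
  {"UUID": "5411e7e8-8113-42d6-a966-8cacd1554039", "Provisioning State": "error"},
  {"UUID": "3f5f510c-a313-4e40-943a-366917ec9e44", "Provisioning State": "active"}
]
'''

# Keep only nodes in the states we care about, like the jq select clause.
wanted = {"error", "clean failed"}
nodes = [node["UUID"] for node in json.loads(listing)
         if node["Provisioning State"] in wanted]
print("\n".join(nodes))
```

Swapping the contents of `wanted` mirrors editing the jq select clause, e.g. `{"active"}` to list active nodes.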

Troubleshooting

PXE outside of openstack

If I want to ensure I can PXE boot, outside of the OpenStack operations, I can track the state in a console. I like to have the SOL session running in a dedicated terminal:

ipmitool -H 10.76.97.176 -U ADMIN -I lanplus -P ADMIN sol activate

and in another, set the machine to PXE boot, then power cycle it:

ipmitool -H 10.76.97.176 -U ADMIN -I lanplus -P ADMIN chassis bootdev pxe
Set Boot Device to pxe
[ayoung@ayoung-home keystone]$ ipmitool -H 10.76.97.176 -U ADMIN -I lanplus -P ADMIN chassis power cycle
Chassis Power Control: Cycle

If the Ironic server is not ready to accept the PXE request, your server will let you know with a message like this one:

>>Checking Media Presence......
>>Media Present......
>>Start PXE over IPv4 on MAC: 1C-34-DA-51-D6-C0.
PXE-E18: Server response timeout.

ERROR: Boot option loading failed

PXE inside of a clean

openstack baremetal node list --provision-state "clean failed"  -f value -c UUID

Produces output like this:

8470e638-0085-470c-9e51-b2ed016569e1
5411e7e8-8113-42d6-a966-8cacd1554039
08c14110-88aa-4e45-b5c1-4054ac49115a
3f5f510c-a313-4e40-943a-366917ec9e44

Clean wait log entries

I’ll track what is going on in the log for a specific node by running tail -f and grepping for the uuid of the node:

tail -f /var/log/kolla/ironic/ironic-conductor.log | grep 5411e7e8-8113-42d6-a966-8cacd1554039

If you run the three commands I showed above, the Ironic server should be prepared for cleaning and will accept the PXE request. I can execute these one at a time and track the state in the conductor log. If I kick off a clean, eventually, I see entries like this in the conductor log (I’m removing the time stamps and request ids for readability):

ERROR ironic.conductor.task_manager [] Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "clean failed" from state "clean wait"; target provision state is "available"
INFO ironic.conductor.utils [] Successfully set node 5411e7e8-8113-42d6-a966-8cacd1554039 power state to power off by power off.
INFO ironic.drivers.modules.network.flat [] Removing ports from cleaning network for node 5411e7e8-8113-42d6-a966-8cacd1554039
INFO ironic.common.neutron [] Successfully removed node 5411e7e8-8113-42d6-a966-8cacd1554039 neutron ports.

Manual abort

And I can trigger this manually if a run is taking too long by running:

openstack baremetal node abort  $UUID

Kick off clean process

The command to kick off the clean process is

openstack baremetal node provide $UUID

In the conductor log, that should show messages like this (again, edited for readability)

Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "cleaning" from state "manageable"; target provision state is "available"
Adding cleaning network to node 5411e7e8-8113-42d6-a966-8cacd1554039
For node 5411e7e8-8113-42d6-a966-8cacd1554039 in network de931fcc-32a0-468e-8691-ffcb43bf9f2e, successfully created ports (ironic ID: neutron ID): {'94306ff5-5cd4-4fdd-a33e-a0202c34d3d0': 'd9eeb64d-468d-4a9a-82a6-e70d54b73e62'}.
Successfully set node 5411e7e8-8113-42d6-a966-8cacd1554039 power state to power on by rebooting.
Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "clean wait" from state "cleaning"; target provision state is "available"

PXE during a clean

At this point, the most interesting thing is to see what is happening on the node. ipmitool sol activate provides a running log. If you are lucky, the PXE process kicks off and a Debian-based kernel should start booting. My company has a specific login set for the machines:

debian login: ampere
Password: 
Linux debian 5.10.0-6-arm64 #1 SMP Debian 5.10.28-1 (2021-04-09) aarch64

Debugging on the Node

After this, I use sudo -i to run as root.

$ sudo -i 
...
# ps -ef | grep ironic
root        2369       1  1 14:26 ?        00:00:02 /opt/ironic-python-agent/bin/python3 /usr/local/bin/ironic-python-agent --config-dir /etc/ironic-python-agent.d/

Looking for logs:

ls /var/log/
btmp	ibacm.log  opensm.0x9a039bfffead6720.log  private
chrony	lastlog    opensm.0x9a039bfffead6721.log  wtmp

No ironic log. Is this thing even on the network?

# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0f0np0:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 98:03:9b:ad:67:20 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1np1:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 98:03:9b:ad:67:21 brd ff:ff:ff:ff:ff:ff
4: enxda90910dd11e:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether da:90:91:0d:d1:1e brd ff:ff:ff:ff:ff:ff

Nope. OK, let’s get it on the network:

# dhclient
[  486.508054] mlx5_core 0000:01:00.1 enp1s0f1np1: Link down
[  486.537116] mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
[  489.371586] mlx5_core 0000:01:00.0 enp1s0f0np0: Link down
[  489.394050] IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
[  489.400646] mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
[  489.406226] IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
root@debian:~# [  500.596626] sr 0:0:0:0: [sr0] CDROM not ready.  Make sure there is a disc in the drive.
ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0f0np0:  mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:03:9b:ad:67:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.97.178/24 brd 192.168.97.255 scope global dynamic enp1s0f0np0
       valid_lft 86386sec preferred_lft 86386sec
    inet6 fe80::9a03:9bff:fead:6720/64 scope link 
       valid_lft forever preferred_lft forever
3: enp1s0f1np1:  mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:03:9b:ad:67:21 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9a03:9bff:fead:6721/64 scope link 
       valid_lft forever preferred_lft forever
4: enxda90910dd11e:  mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether da:90:91:0d:d1:1e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d890:91ff:fe0d:d11e/64 scope link 
       valid_lft forever preferred_lft forever

And…quite shortly thereafter in the conductor log:

Agent on node 5411e7e8-8113-42d6-a966-8cacd1554039 returned cleaning command success, moving to next clean step
Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "cleaning" from state "clean wait"; target provision state is "available"
Executing cleaning on node 5411e7e8-8113-42d6-a966-8cacd1554039, remaining steps: []
Successfully set node 5411e7e8-8113-42d6-a966-8cacd1554039 power state to power off by power off.
Removing ports from cleaning network for node 5411e7e8-8113-42d6-a966-8cacd1554039
Successfully removed node 5411e7e8-8113-42d6-a966-8cacd1554039 neutron ports.
Node 5411e7e8-8113-42d6-a966-8cacd1554039 cleaning complete
Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "available" from state "cleaning"; target provision state is "None"

Cause of Failure

So, in our case, the issue seems to be that the IPA (ironic-python-agent) image does not have DHCP enabled.

by Adam Young at September 30, 2021 04:30 PM

September 27, 2021

John Likes OpenStack

OpenInfra Live Episode 24: OpenStack and Ceph

This Thursday at 14:00 UTC Francesco and I will be in a panel on OpenInfra Live Episode 24: OpenStack and Ceph.

by Unknown (noreply@blogger.com) at September 27, 2021 10:19 PM

September 05, 2021

Lars Kellogg-Stedman

A pair of userscripts for cleaning up Stack Exchange sites

I’ve been a regular visitor to Stack Overflow and other Stack Exchange sites over the years, and while I’ve mostly enjoyed the experience, I’ve been frustrated by the lack of control I have over what questions I see. I’m not really interested in looking at questions that have already been closed, or that have a negative score, but there’s no native facility for filtering questions like this. I finally spent the time learning just enough JavaScript to hurt myself and put together a pair of scripts that let me present the questions the way I want.

September 05, 2021 12:00 AM

September 03, 2021

Lars Kellogg-Stedman

Kubernetes External Secrets

At $JOB we maintain the configuration for our OpenShift clusters in a public git repository. Changes in the git repository are applied automatically using ArgoCD and Kustomize. This works great, but the public nature of the repository means we need to find a secure solution for managing secrets (such as passwords and other credentials necessary for authenticating to external services). In particular, we need a solution that permits our public repository to be the source of truth for our cluster configuration, without compromising our credentials.

September 03, 2021 12:00 AM

August 23, 2021

Lars Kellogg-Stedman

Connecting OpenShift to an External Ceph Cluster

Red Hat’s OpenShift Data Foundation (formerly “OpenShift Container Storage”, or “OCS”) allows you to either (a) automatically set up a Ceph cluster as an application running on your OpenShift cluster, or (b) connect your OpenShift cluster to an externally managed Ceph cluster. While setting up Ceph as an OpenShift application is a relatively polished experience, connecting to an external cluster still has some rough edges. NB: I am not a Ceph expert.

August 23, 2021 12:00 AM

July 12, 2021

Website and blog of Jiří Stránský

Introduction to OS Migrate

OS Migrate is a toolbox for content migration (workloads and more) between OpenStack clouds. Let’s dive into why you’d use it, some of its most notable features, and a bit of how it works.

The Why

Why move cloud content between OpenStacks? Imagine these situations:

  • Old cloud hardware is obsolete, you’re buying new. A new green field deployment will be easier than gradual replacement of hardware in the original cloud.

  • You want to make fundamental changes to your OpenStack deployment that would be difficult or risky to perform on a cloud which is already providing service to users.

  • You want to upgrade to a new release of OpenStack, but you want to cut down on associated cloud-wide risk, or you can’t schedule cloud-wide control plane downtime.

  • You want to upgrade to a new release of OpenStack, but the cloud users should be given a choice when to stop using the old release and start using the new.

  • A combination of the above.

In such situations, running (at least) two clouds in parallel for a period of time is often the preferable path. And when you run parallel clouds, perhaps with the intention of decommissioning some of them eventually, a tool may come in handy to copy/migrate the content that users have created (virtual networks, routers, security groups, machines, block storage, images etc.) from one cloud to another. This is what OS Migrate is for.

The Pitch

Now we know OS Migrate copies/moves content from one OpenStack to another. But there is more to say. Some of the design decisions that went into OS Migrate should make it a tool of choice:

  • Uses standard OpenStack APIs. You don’t need to install any plugins into your clouds before using OS Migrate, and OS Migrate does not need access to the backends of your cloud (databases etc.).

  • Runnable with tenant privileges. For moving tenant-owned content, OS Migrate only needs tenant credentials (not administrative credentials). This naturally reduces risks associated with the migration.

    If desired, cloud tenants can even use OS Migrate on their own. Cloud admins do not necessarily need to get involved.

    Admin credentials are only needed when the content being migrated requires admin privileges to be created (e.g. public Glance images).

  • Transparent. The metadata of exported content is in human-readable YAML files. You can inspect what has been exported from the source cloud, and tweak it if necessary, before executing the import into the destination cloud.

  • Stateless. There is no database in OS Migrate that could get out of sync with reality. The source of migration information is the human-readable YAML files. ID-to-ID mappings are not kept; entry-point resources are referred to by names.

  • Idempotent. In case of an issue, fix the root cause and re-run, be it export or import. OS Migrate has mechanisms against duplicate exports and duplicate imports.

  • Cherry-pickable. There’s no need to migrate all content with OS Migrate. Only migrate some tenants, or further scope to some of their resource types, or further limit the resource type exports/imports by a list of resource names or regular expression patterns. Use as much or as little of OS Migrate as you need.

  • Implemented as an Ansible collection. When learning to work with OS Migrate, most importantly you’ll be learning to work with Ansible, an automation tool used across the IT industry. If you already know Ansible, you’ll feel right at home with OS Migrate.

The How

If you want to use OS Migrate, the best thing I can do here is point towards the OS Migrate User Documentation. If you just want to get a glimpse for now, read on.

As OS Migrate is an Ansible collection, the main mode of use is setting Ansible variables and running playbooks shipped with the collection.

Should the default playbooks not fit a particular use case, a technically savvy user could also utilize the collection’s roles and modules as building blocks to craft their own playbooks. However, as I wrote above in the point about cherry-picking, we’ve tried to make the default playbooks quite generically usable.

In OS Migrate we differentiate between two main migration types with respect to what resources we are migrating: pre-workload migration, and workload migration.

Pre-workload migration

Pre-workload migration focuses on content/resources that can be copied to the destination cloud without affecting workloads in the source cloud. It can be typically done with little timing pressure, ahead of time before migrating workloads. This includes resources like tenant networks, subnets, routers, images, security groups etc.

The content is serialized as editable YAML files to the Migrator host (the machine running the Ansible playbooks), and then resources are created in the destination according to the YAML serializations.
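As an illustration, a serialized resource file might look roughly like the following. This is a hypothetical, abbreviated sketch — the field names, version string, and ID are placeholders, and the exact OS Migrate serialization format may differ between versions:

```yaml
# Hypothetical sketch of an exported network in OS Migrate's YAML format.
os_migrate_version: x.y.z        # placeholder version
resources:
  - type: openstack.network.Network
    params:                      # editable, imported into the destination
      name: my-tenant-net
      shared: false
    _info:                       # informational only, e.g. source-cloud ID
      id: 1234abcd-placeholder
```

Because the files are plain YAML, they can be reviewed or tweaked by hand before running the import.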

Pre-workload migration data flow

Workload migration

Workload migration focuses on copying VMs and their attached Cinder volumes, and on creating floating IPs for VMs in the destination cloud. The VM migration between clouds is a “cold” migration. VMs first need to be stopped and then they are copied.

With regards to the boot disk of the VM, we support two options: either the destination VM’s boot disk is created from a Glance image, or the source VM’s boot disk snapshot is copied into the destination cloud as a Cinder volume and the destination VM is created as boot-from-volume. There is a migration parameter controlling this behavior on a per-VM basis. Additional Cinder volumes attached to the source VM are copied.

The data path for VMs and volumes is slightly different than in the pre-workload migration. Only metadata gets exported onto the Migrator host. For moving the binary data, special VMs called conversion hosts are deployed, one in the source and one in the destination. This is done for performance reasons, to allow the VMs’ and volumes’ binary data to travel directly from cloud to cloud without going through the (perhaps external) Migrator host as an intermediary.

Workload migration data flow

The Pointers

Now that we have an overview of OS Migrate, let’s finish with some links where more info can be found:

Have a good day!

by Jiří Stránský at July 12, 2021 12:00 AM

May 12, 2021

RDO Blog

RDO Wallaby Released


The RDO community is pleased to announce the general availability of the RDO build for OpenStack Wallaby for RPM-based distributions, CentOS Stream and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Wallaby is the 23rd release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8-stream/cloud/x86_64/openstack-wallaby/.
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Stream and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS users looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
PLEASE NOTE: RDO Wallaby provides packages for CentOS Stream 8 and Python 3 only. Please use the Victoria release for CentOS Linux 8. For CentOS 7 and Python 2.7, please use the Train release.
Interesting things in the Wallaby release include:
  • With the Wallaby release, source tarballs are validated using the upstream GPG signature. This certifies that the source is identical to what is released upstream and ensures the integrity of the packaged source code.
  • With the Wallaby release, openvswitch/ovn are not shipped as part of RDO. Instead, RDO relies on builds from the CentOS NFV SIG.
  • Notable changes in the packaged projects during the Wallaby release include:
    • RBAC support added in multiple projects including Designate, Glance, Horizon, Ironic, and Octavia
    • Glance added support for distributed image import
    • Ironic added deployment and cleaning enhancements including UEFI Partition Image handling, NVMe Secure Erase, per-instance deployment driver interface overrides, deploy time “deploy_steps”, and file injection.
    • Kuryr added a nested mode with node VMs running in multiple subnets. To use this functionality, a new option, [pod_vif_nested]worker_nodes_subnets, was introduced, accepting multiple subnet IDs.
    • Manila added the ability for operators to set maximum and minimum share sizes as extra specifications on share types.
    • Neutron added a new subnet type, network:routed. IPs on this subnet type can be advertised with BGP over a provider network.
    • TripleO moved network and network port creation out of the Heat stack and into the baremetal provisioning workflow.

Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/wallaby/highlights.html

Contributors

During the Wallaby cycle, we saw the following new RDO contributors:

  • Adriano Petrich
  • Ananya Banerjee
  • Artom Lifshitz
  • Attila Fazekas
  • Brian Haley
  • David J Peacock
  • Jason Joyce
  • Jeremy Freudberg
  • Jiri Podivin
  • Martin Kopec
  • Waleed Mousa
Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 58 contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories:
  • Adriano Petrich
  • Alex Schultz
  • Alfredo Moralejo
  • Amol Kahat
  • Amy Marrich
  • Ananya Banerjee
  • Artom Lifshitz
  • Arx Cruz
  • Attila Fazekas
  • Bhagyashri Shewale
  • Brian Haley
  • Cédric Jeanneret
  • Chandan Kumar
  • Daniel Pawlik
  • David J Peacock
  • Dmitry Tantsur
  • Emilien Macchi
  • Eric Harney
  • Fabien Boucher
  • Gabriele Cerami
  • Gael Chamoulaud
  • Grzegorz Grasza
  • Harald Jensas
  • Jason Joyce
  • Javier Pena
  • Jeremy Freudberg
  • Jiri Podivin
  • Joel Capitao
  • Kevin Carter
  • Luigi Toscano
  • Marc Dequenes
  • Marios Andreou
  • Martin Kopec
  • Mathieu Bultel
  • Matthias Runge
  • Mike Turek
  • Nicolas Hicher
  • Pete Zaitcev
  • Pooja Jadhav
  • Rabi Mishra
  • Riccardo Pittau
  • Roman Gorshunov
  • Ronelle Landy
  • Sagi Shnaidman
  • Sandeep Yadav
  • Slawek Kaplonski
  • Sorin Sbarnea
  • Steve Baker
  • Takashi Kajinami
  • Tristan Cacqueray
  • Waleed Mousa
  • Wes Hayutin
  • Yatin Karel

The Next Release Cycle

At the end of one release, focus shifts immediately to the next release, i.e., Xena.

Get Started

There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use TripleO and you’ll be running a production cloud in short order.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help

The RDO Project has the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.

Get Involved

To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

 

by Amy Marrich at May 12, 2021 07:41 PM

April 17, 2021

Lars Kellogg-Stedman

Creating a VXLAN overlay network with Open vSwitch

In this post, we’ll walk through the process of getting virtual machines on two different hosts to communicate over an overlay network created using the support for VXLAN in Open vSwitch (or OVS). The test environment: for this post, I’ll be working with two systems, node0.ovs.virt at address 192.168.122.107 and node1.ovs.virt at address 192.168.122.174. These hosts are running CentOS 8, although once we get past the package installs the instructions will be similar for other distributions.

April 17, 2021 12:00 AM

March 22, 2021

Matthias Runge

High memory usage with collectd

collectd itself is intended as a lightweight agent for collecting metrics and events. In larger infrastructures, the data is sent over the network to a central point, where it is stored and processed further.

This introduces a potential issue: what happens if the remote endpoint to write data to is not available? The traditional network plugin uses UDP, which is by definition unreliable.

Collectd keeps a queue of values to be written to an output plugin, such as write_http or amqp1. When metrics are due to be written, collectd iterates over that queue and tries to write the data to the endpoint. If writing was successful, the data is removed from the queue. That little word if hints that there is a chance the data doesn't get removed. The question is: what happens then, and what should be done?

There is no easy answer to this. Some people are willing to lose a few metrics, some are not. The way to address it is to cap the queue at a given length and to drop the oldest data when new data comes in. The relevant parameters are WriteQueueLimitHigh and WriteQueueLimitLow. If they are unset, the queue is not limited and will grow until memory is exhausted. For predictability, you should set these two values to the same number. Finding the right value requires a bit of experimentation; if values are dropped, you will see it in the log file.
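Outside of a TripleO deployment, the same cap can be set directly in collectd.conf; a minimal sketch, where the value 100 is purely illustrative:

```
# /etc/collectd.conf -- cap the write queue so it cannot grow unbounded;
# oldest values are dropped when new ones arrive
WriteQueueLimitHigh 100
WriteQueueLimitLow  100
```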

When collectd is configured as part of Red Hat OpenStack Platform, the following config snippet can be used:

parameter_defaults:
    ExtraConfig:
      collectd::write_queue_limit_high: 100
      collectd::write_queue_limit_low: 100

Another parameter can be used to explicitly limit the queue length when the amqp1 plugin is used for sending out data: the SendQueueLimit parameter, which serves the same purpose but can differ from the global WriteQueueLimitHigh and WriteQueueLimitLow.

parameter_defaults:
    ExtraConfig:
        collectd::plugin::amqp1::send_queue_limit: 50

In almost all cases, the issue of collectd using too much memory can be tracked down to a write endpoint being unavailable, dropping data occasionally, and so on.

by mrunge at March 22, 2021 03:00 PM

March 09, 2021

Lars Kellogg-Stedman

Getting started with KSOPS

Kustomize is a tool for assembling Kubernetes manifests from a collection of files. We’re making extensive use of Kustomize in the operate-first project. In order to keep secrets stored in our configuration repositories, we’re using the KSOPS plugin, which enables Kustomize to use sops to encrypt files using GPG. In this post, I’d like to walk through the steps necessary to get everything up and running. Set up GPG We encrypt files using GPG, so the first step is making sure that you have a GPG keypair and that your public key is published where other people can find it.

March 09, 2021 12:00 AM

February 27, 2021

Lars Kellogg-Stedman

Tools for writing about Git

I sometimes find myself writing articles or documentation about git, so I put together a couple of terrible hacks for generating reproducible histories and pretty graphs of those histories. git synth The git synth command reads a YAML description of a repository and executes the necessary commands to reproduce that history. It allows you to set the name and email address of the author and committer as well as a static date, so that every time you generate the repository you get identical commit ids.

February 27, 2021 12:00 AM

February 24, 2021

Lars Kellogg-Stedman

File reorganization

This is just a note that I’ve substantially changed how the post sources are organized. I’ve tried to ensure that I preserve all the existing links, but if you spot something missing please feel free to leave a comment on this post.

February 24, 2021 12:00 AM

February 18, 2021

Lars Kellogg-Stedman

Editing a commit message without git rebase

While working on a pull request I will make liberal use of git rebase to clean up a series of commits: squashing typos, re-ordering changes for logical clarity, and so forth. But there are some times when all I want to do is change a commit message somewhere down the stack, and I was wondering if I had any options for doing that without reaching for git rebase. It turns out the answer is “yes”, as long as you have a linear history.

February 18, 2021 12:00 AM

February 10, 2021

Lars Kellogg-Stedman

Object storage with OpenShift Container Storage

OpenShift Container Storage (OCS) from Red Hat deploys Ceph in your OpenShift cluster (or allows you to integrate with an external Ceph cluster). In addition to the file- and block-based volume services provided by Ceph, OCS includes two S3-API compatible object storage implementations. The first option is the Ceph Object Gateway (radosgw), Ceph’s native object storage interface. The second option, called the “Multicloud Object Gateway”, is in fact a piece of software named Noobaa, a storage abstraction layer that was acquired by Red Hat in 2018.

February 10, 2021 12:00 AM

February 08, 2021

Lars Kellogg-Stedman

Remediating poor PyPi performance with DevPi

Performance of the primary PyPi service has been so bad lately that it’s become very disruptive. Tasks that used to take a few seconds will now churn along for 15-20 minutes or longer before completing, which is incredibly frustrating. I first went looking to see if there was a PyPi mirror infrastructure, like we see with CPAN for Perl or CTAN for Tex (and similarly for most Linux distributions). There is apparently no such beast,

February 08, 2021 12:00 AM

February 06, 2021

Lars Kellogg-Stedman

symtool: a tool for interacting with your SYM-1

The SYM-1 is a 6502-based single-board computer produced by Synertek Systems Corp in the mid-1970s. I’ve had one floating around in a box for many, many years, and after a recent foray into the world of 6502 assembly language programming I decided to pull it out, dust it off, and see if it still works. The board I have has a whopping 8KB of memory, and in addition to the standard SUPERMON monitor it has the expansion ROMs for the Synertek BASIC interpreter (yet another Microsoft BASIC) and RAE (the “Resident Assembler Editor”).

February 06, 2021 12:00 AM

January 14, 2021

RDO Blog

RDO plans to move to CentOS Stream

What changed with CentOS?
CentOS announced recently that they will focus on CentOS Stream and CentOS Linux 8 will be EOL at the end of 2021.

While CentOS Linux 8 (C8) is a pure rebuild of Red Hat Enterprise Linux (RHEL), CentOS Stream 8 (C8S) tracks just ahead of the current RHEL release. This means that we will have a continuous flow of new packages available before they are included in the next RHEL minor release.

What’s the current situation in RDO?
RDO has been using the latest CentOS Linux 8 to build both the OpenStack packages and the required dependencies since the Train release, both for the official CloudSIG repos and for the RDO Trunk (aka DLRN) repos.

In the last few months, we have been running periodic CI jobs to validate RDO Trunk repos built on CentOS Linux 8 along with CentOS Stream 8, to find any potential issues created by OS package updates before they are shipped in CentOS Linux 8. As expected, during these tests we have not found any issues related to the buildroot environment; packages can be used for both C8 and C8S. We did find a few issues related to package updates, which allowed us to propose the required fixes upstream.

What’s our plan for RDO roadmap?
  • RDO Wallaby (ETA is end of April 2021) will be built, tested and released only on CentOS Stream 8.
  • RDO CloudSIG repos for Victoria and Ussuri will be updated and tested for both CentOS Stream 8 and CentOS Linux 8 until the end of 2021, and then continue on CentOS Stream only.
  • We will create and test new RDO CloudSIG repos for Victoria and Ussuri on CentOS Stream 8.
  • The RDO Trunk repositories (aka DLRN repos) will be built and tested using a CentOS Stream 8 buildroot for all releases currently using CentOS Linux 8 (from Train onwards).

How do we plan to implement these changes?
Some implementation details that may be of interest:
  • We will keep building packages just once. We will move the buildroots for both DLRN and CloudSIG to CentOS Stream 8 in the near future.
  • For the Ussuri and Victoria CloudSIG repos, while we support both C8 and C8S, we will use separate CBS tags. This allows us to have separate repositories, promotion jobs and package versions for each OS.
  • In order to reduce the impact of potential issues and to discover C8S-related issues as soon as possible, we will put more focus on periodic jobs on C8S.
  • At a later stage, we will move the CI jobs used to gate changes in distgits to use C8S instead of C8 for all RDO releases where we use CentOS 8.
  • The CentOS/RHEL team has made public their interest in applying a Continuous Delivery approach to CentOS Stream, providing a stable CentOS Stream via gating integration jobs. Our intent is to collaborate with the CentOS team on any initiative that helps validate RDO as early in the delivery pipeline as possible and reduces the impact of potential issues.

What’s next?
We plan to start the activities needed to carry out this plan in the next weeks.

We will continue discussing and sharing progress during the RDO weekly meetings; feel free to join us if you are interested.

Also, if you have any questions or suggestions related to these changes, don’t hesitate to contact us in the #rdo freenode channel or via the RDO mailing lists.

by amoralej at January 14, 2021 03:17 PM

January 04, 2021

Matthias Runge

Kubernetes on Raspberry Pi 4 on Fedora

Recently, I bought a couple of Raspberry Pi 4 boards: one with 4 GB and two equipped with 8 GB of RAM. When I bought the first one, there was no option to get more memory. However, I saw this as a game and thought I'd give it a try. I also bought SSDs and USB3-to-SATA adapters for them. Before purchasing anything, you may want to take a look at James Archer's page; unfortunately, there are a couple of adapters on the market which don't work that well.

Deploying Fedora 33 Server

Initially, I followed the description to deploy Fedora 32; it works the same way for Fedora 33 Server (in my case here).

Because ceph requires a partition (or better: a whole disk), I used the traditional setup using partitions and no LVM.

Deploying Kubernetes

git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray

I followed the documentation and created an inventory. For the container runtime, I picked crio, and Calico as the network plugin.

Because of an issue, I had to patch roles/download/defaults/main.yml:

diff --git a/roles/download/defaults/main.yml b/roles/download/defaults/main.yml
index a97be5a6..d4abb341 100644
--- a/roles/download/defaults/main.yml
+++ b/roles/download/defaults/main.yml
@@ -64,7 +64,7 @@ quay_image_repo: "quay.io"

 # TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
 # after migration to container download
-calico_version: "v3.16.5"
+calico_version: "v3.15.2"
 calico_ctl_version: "{{ calico_version }}"
 calico_cni_version: "{{ calico_version }}"
 calico_policy_version: "{{ calico_version }}"
@@ -520,13 +520,13 @@ etcd_image_tag: "{{ etcd_version }}{%- if image_arch != 'amd64' -%}-{{ image_arc
 flannel_image_repo: "{{ quay_image_repo }}/coreos/flannel"
 flannel_image_tag: "{{ flannel_version }}"
 calico_node_image_repo: "{{ quay_image_repo }}/calico/node"
-calico_node_image_tag: "{{ calico_version }}"
+calico_node_image_tag: "{{ calico_version }}-arm64"
 calico_cni_image_repo: "{{ quay_image_repo }}/calico/cni"
-calico_cni_image_tag: "{{ calico_cni_version }}"
+calico_cni_image_tag: "{{ calico_cni_version }}-arm64"
 calico_policy_image_repo: "{{ quay_image_repo }}/calico/kube-controllers"
-calico_policy_image_tag: "{{ calico_policy_version }}"
+calico_policy_image_tag: "{{ calico_policy_version }}-arm64"
 calico_typha_image_repo: "{{ quay_image_repo }}/calico/typha"
-calico_typha_image_tag: "{{ calico_typha_version }}"
+calico_typha_image_tag: "{{ calico_typha_version }}-arm64"
 pod_infra_image_repo: "{{ kube_image_repo }}/pause"
 pod_infra_image_tag: "{{ pod_infra_version }}"
 install_socat_image_repo: "{{ docker_image_repo }}/xueshanf/install-socat"

Deploy Ceph

Ceph requires a raw partition. Make sure you have an empty partition available.

[root@node1 ~]# lsblk -f
NAME FSTYPE FSVER LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1
│    vfat   FAT32 UEFI  7DC7-A592
├─sda2
│    vfat   FAT32       CB75-24A9                               567.9M     1% /boot/efi
├─sda3
│    xfs                cab851cb-1910-453b-ae98-f6a2abc7f0e0    804.7M    23% /boot
├─sda4
│
├─sda5
│    xfs                6618a668-f165-48cc-9441-98f4e2cc0340     27.6G    45% /
└─sda6

In my case, sda4 and sda6 are not formatted. sda4 is very small and will be ignored; sda6 will be used.

Using rook is pretty straightforward:

git clone --single-branch --branch v1.5.4 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml

by mrunge at January 04, 2021 10:00 AM

December 18, 2020

Lars Kellogg-Stedman

To sleep or not to sleep?

Let’s say you have a couple of sensors attached to an ESP8266 running MicroPython. You’d like to sample them at different frequencies (say, one every 60 seconds and one every five minutes), and you’d like to do it as efficiently as possible in terms of power consumption. What are your options? If we don’t care about power efficiency, the simplest solution is probably a loop like this: import machine lastrun_1 = 0 lastrun_2 = 0 while True: now = time.
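The scheduling logic the excerpt begins can be sketched as a plain function (the names and structure are mine, not from the post):

```python
def due_sensors(now, last_runs, intervals):
    """Return the sensors due for sampling at time `now` (seconds),
    updating their last-run times in place.

    `intervals` maps sensor name -> sampling period in seconds;
    `last_runs` maps sensor name -> last sample time (missing = never run).
    """
    due = []
    for name, interval in intervals.items():
        if now - last_runs.get(name, 0) >= interval:
            due.append(name)
            last_runs[name] = now
    return due

# Two sensors: one sampled every 60 s, one every 300 s.
intervals = {"fast": 60, "slow": 300}
last_runs = {}
```

The main loop would call `due_sensors()` once per wakeup and read only the sensors it returns; the power-saving question in the post is then about how to sleep between wakeups.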

December 18, 2020 12:00 AM

December 16, 2020

Adam Young

Moving things around in OpenStack

While reviewing the comments on the Ironic spec for Secure RBAC, I had to ask myself if the “project” construct makes sense for Ironic.  I still think it does, but I’ll write this down to see if I can clarify it for me, and maybe for you, too.

Baremetal servers change.  The whole point of Ironic is to control the change of Baremetal servers from inanimate pieces of metal to “really useful engines.”  This needs to happen in a controlled and unsurprising way.

Ironic the server does what it is told. If a new piece of metal starts sending out DHCP requests, Ironic is going to PXE boot it.  This is the start of this new piece of metal’s journey of self discovery.  At least as far as Ironic is concerned.

But really, someone had to rack and wire said piece of metal.  Likely the person that did this is not the person that is going to run workloads on it in the end.  They might not even work for the same company;  they might be a delivery person from Dell or Supermicro.  So, once they are done with it, they don’t own it any more.

Who does?  Who owns a piece of metal before it is enrolled in the OpenStack baremetal service?

No one.  It does not exist.

Ok, so lets go back to someone pushing the button, booting our server for the first time, and it doing its PXE boot thing.

Or, we get the MAC address and enter that into the ironic database, so that when it does boot, we know about it.

Either way, Ironic is really the playground monitor, just making sure it plays nice.

What if Ironic is a multi-tenant system?  Someone needs to be able to transfer the baremetal server from where ever it lands up front to the people that need to use it.

I suspect that transferring metal from project to project is going to be one of the main use cases after the sun has set on day one.

So, who should be allowed to say what project a piece of baremetal can go to?

Well, in Keystone, we have the idea of hierarchy.  A Project is owned by a domain, and a project can be nested inside another project.

But this information is not passed down to Ironic.  There is no way to get a token for a project that shows its parent information.  But a remote service could query the project hierarchy from Keystone. 

https://docs.openstack.org/api-ref/identity/v3/?expanded=show-project-details-detail#show-project-details

Say I want to transfer a piece of metal from one project to another.  Should I have a token for the source project or the destination project?  Ok, dumb question, I should definitely have a token for the source project.  The smart question is whether I should also have a token for the destination project.

Sure, why not.  Two tokens. One has the “delete” role and one that has the “create” role.

The only problem is that nothing like this exists in OpenStack.  But it should.

We could fake it with hierarchy;  I can pass things up and down the project tree.  But that really does not do one bit of good.  People don’t really use the tree like that.  They should.  We built a perfectly nice tree and they ignore it.  Poor, ignored, sad, lonely tree.

Actually, it has no feelings.  Please stop anthropomorphising the tree.

What you could do is create the destination object, a kind of potential piece-of-metal or metal-receiver.  This receiver object gets a UUID.  You pass this UUID to the “move” API, but you call the move API with a token for the source project.  The move is done atomically.  Let’s call this thing identified by a UUID a move-request.

The order of operations could be done in reverse.  The operator could create the move request on the source, and then pass that to the receiver.  This might actually make more sense, as you need to know about the object before you can even think to move it.

Both workflows seem to have merit.

And…this concept seems to be something that OpenStack needs in general. 

In fact, why should the API not be generic? I mean, it would have to be per service, but the same API could be used to transfer VMs between projects in Nova and volumes between projects in Cinder. The API would have two verbs: one for creating a new move request, and one for accepting it.

POST /thingy/v3.14/resource?resource_id=abcd&destination=project_id

If this is called with a token, it needs to be scoped. If it is scoped to the project_id in the API, it creates a receiving-type request. If it is scoped to the project_id that owns the resource, it is a sending-type request. Either way, it returns a URL. Call GET on that URL and you get information about the transfer. Call PATCH on it with the appropriately scoped token, and the resource is transferred. And maybe require enough information to prove that you know what you are doing: maybe you have to specify the source and target projects in that patch request.
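The two-party handshake described here can be sketched as a tiny state machine. Every name and state below is hypothetical — no such generic API exists in OpenStack today:

```python
import uuid

class MoveRequest:
    """A hypothetical two-party resource transfer, as sketched in the post."""

    def __init__(self, resource_id, source_project, dest_project):
        self.id = str(uuid.uuid4())       # UUID handed to the other party
        self.resource_id = resource_id
        self.source_project = source_project
        self.dest_project = dest_project
        self.state = "pending"

    def accept(self, token_project, source_project, dest_project):
        # The accepting caller must hold a token scoped to one end of the
        # transfer, and must name both projects — proving they know what
        # they are moving, and where.
        if (token_project not in (self.source_project, self.dest_project)
                or (source_project, dest_project)
                != (self.source_project, self.dest_project)):
            raise PermissionError("token or projects do not match request")
        self.state = "transferred"        # atomic in a real service
        return self.dest_project         # new owner of the resource

# A sending-type request created by the source project's operator:
req = MoveRequest("abcd", source_project="infra", dest_project="tenant-a")
new_owner = req.accept("tenant-a", "infra", "tenant-a")
```

Whether the request is created on the sending or the receiving side only changes who learns the UUID first; the accept step is the same either way.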

A foolish consistency is the hobgoblin of little minds.

Edit: OK, this is not a new idea. Cinder went through the same thought process according to Duncan Thomas. The result is this API: https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfer

Which looks like it then morphed to this one:

https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfers-volume-transfers-3-55-or-later


by Adam Young at December 16, 2020 12:49 AM

December 15, 2020

John Likes OpenStack

Running tripleo-ansible molecule locally for dummies

I've had to re-teach myself how to do this so I'm writing my own notes.

Prerequisites:

  1. Get a working undercloud (perhaps from tripleo-lab)
  2. git clone https://git.openstack.org/openstack/tripleo-ansible.git ; cd tripleo-ansible
  3. Determine the test name: ls roles

Once you have your environment ready, run a test with the name from step 3.


./scripts/run-local-test tripleo_derived_parameters
Some tests in CI are configured to use `--skip-tags`. You can do this for your local tests too by setting the appropriate environment variables. For example:

export TRIPLEO_JOB_ANSIBLE_ARGS="--skip-tags run_ceph_ansible,run_uuid_ansible,ceph_client_rsync,clean_fetch_dir"
./scripts/run-local-test tripleo_ceph_run_ansible

This last tip should get added to the docs.

by Unknown (noreply@blogger.com) at December 15, 2020 03:46 PM

December 13, 2020

Lars Kellogg-Stedman

Animating a map of Covid in the Northeast US

I recently put together a short animation showing the spread of Covid throughout the Northeast United States: I thought it might be interesting to walk through the process I used to create the video. The steps described in this article aren’t exactly what I used (I was dealing with data in a PostGIS database, and in the interests of simplicity I wanted instructions that can be accomplished with just QGIS), but they end up in the same place.

December 13, 2020 12:00 AM

December 12, 2020

Lars Kellogg-Stedman

Postgres and permissions in OpenShift

Folks running the official postgres image in OpenShift will often encounter a problem when first trying to boot a Postgres container in OpenShift. Given a pod description something like this:

apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:13
      ports:
        - containerPort: 5432
      volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
      envFrom:
        - secretRef:
            name: postgres-secret
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-data-pvc

The container will fail to start and the logs will show the following error:

December 12, 2020 12:00 AM

November 18, 2020

Adam Young

Keystone and Cassandra: Parity with SQL

Look back at our Pushing Keystone over the Edge presentation from the OpenStack Summit. Many of the points we make are problems faced by any application trying to scale across multiple datacenters. Cassandra is a database designed to deal with this level of scale. So Cassandra may well be a better choice than MySQL or other RDBMS as a datastore to Keystone. What would it take to enable Cassandra support for Keystone?

Let’s start with the easy part: defining the tables. Let’s look at how we define the Federation back end for SQL. We use SQLAlchemy to handle the migrations; we will need something comparable for Cassandra Query Language (CQL), but we also need to translate the table definitions themselves.

Before we create the tables, we need to create a keyspace. I am going to make separate keyspaces for each of the subsystems in Keystone: Identity, Assignment, Federation, and so on. Here’s the Federation one:

CREATE KEYSPACE keystone_federation WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'}  AND durable_writes = true;

The Identity provider table is defined like this:

    idp_table = sql.Table(
        'identity_provider',
        meta,
        sql.Column('id', sql.String(64), primary_key=True),
        sql.Column('enabled', sql.Boolean, nullable=False),
        sql.Column('description', sql.Text(), nullable=True),
        mysql_engine='InnoDB',
        mysql_charset='utf8')
    idp_table.create(migrate_engine, checkfirst=True)

The comparable CQL to create a table would look like this:

CREATE TABLE identity_provider (id text PRIMARY KEY, enabled boolean, description text);

However, when I describe the schema to view the table definition, we see that there are many tuning and configuration parameters that are defaulted:

CREATE TABLE federation.identity_provider (
    id text PRIMARY KEY,
    description text,
    enabled boolean
) WITH additional_write_policy = '99p'
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND cdc = false
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND default_time_to_live = 0
    AND extensions = {}
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair = 'BLOCKING'
    AND speculative_retry = '99p';

I don’t know Cassandra well enough to say if these are sane defaults to have in production. I do know that someone, somewhere, is going to want to tweak them, and we are going to have to provide a means to do so without battling the upgrade scripts. I suspect we will want to use only the short form (what I typed into the CQL prompt) in the migrations, not the form with all of the options. In addition, we might want an if not exists clause on the table creation to allow people to make these changes themselves. Then again, that might make things get out of sync. Hmmm.
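For instance, the guarded form of the short-form creation statement would be:

```
CREATE TABLE IF NOT EXISTS identity_provider (id text PRIMARY KEY, enabled boolean, description text);
```

This makes the migration idempotent, at the cost of silently skipping a table that already exists with a different definition.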

There are three more entities in this back end:

CREATE TABLE federation_protocol (id text, idp_id text, mapping_id text, PRIMARY KEY(id, idp_id));
CREATE TABLE mapping (id text PRIMARY KEY, rules text);
CREATE TABLE service_provider (auth_url text, id text PRIMARY KEY, enabled boolean, description text, sp_url text, relay_state_prefix text);

One thing that is interesting is that we will not be limiting the ID fields to 32, 64, or 128 characters. There is no performance benefit to doing so in Cassandra, nor is there any way to enforce the length limits. From a Keystone perspective, there is not much value either; we still need to validate the UUIDs in Python code. We could autogenerate the UUIDs in Cassandra, and there might be some benefit to that, but it would diverge from the logic in the Keystone code, and explode the test matrix.
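The Python-side validation referred to here is straightforward with the standard library; a minimal sketch (the helper name is mine):

```python
import uuid

def is_valid_id(value):
    """Return True if value parses as a UUID, in either the hyphenated
    or the 32-hex-character form Keystone typically uses."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, TypeError, AttributeError):
        return False
```

Since the database no longer enforces any length limit, every ID crossing the API boundary would go through a check like this.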

There is only one foreign key in the SQL section; the federation protocol has an idp_id that points to the identity provider table. We’ll have to accept this limitation and ensure the integrity is maintained in code. We can do this by looking up the Identity provider before inserting the protocol entry. Since creating a Federated entity is a rare and administrative task, the risk here is vanishingly small. It will be more significant elsewhere.

For access to the database, we should probably use Flask-CQLAlchemy. Fortunately, Keystone is already a Flask based project, so this makes the two projects align.

For migration support, it looks like the best option out there is cassandra-migrate.

An effort like this would best be started out of tree, with an expectation that it would be merged in once it had shown a degree of maturity. Thus, I would put it into a namespace that would not conflict with the existing keystone project. The python imports would look like:

from keystone.cassandra import migrations
from keystone.cassandra import identity
from keystone.cassandra import federation

This could go in its own git repo and be separately pip installed for development. The entrypoints would be registered such that the configuration file would have entries like:

[application_credential] driver = cassandra

Any tuning of the database could be put under a [cassandra] section of the conf file, or tuning for individual sections could be in keys prefixed with cassandra_ in the appropriate sections, such as application_credential as shown above.

It might be interesting to implement a Cassandra token backend and use the default_time_to_live value on the table to control the lifespan and automate the cleanup of the tables. This might provide some performance benefit over the fernet approach, as the token data would be cached. However, the drawbacks due to token invalidation upon change of data would far outweigh the benefits unless the TTL was very short, perhaps 5 minutes.
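A sketch of what such a table could look like (the schema is entirely hypothetical):

```
CREATE TABLE keystone_token.token (
    id text PRIMARY KEY,
    user_id text,
    payload text
) WITH default_time_to_live = 300;  -- rows expire five minutes after insert
```

With the TTL at the table level, expired tokens vanish without any cleanup job, which is the appeal; the invalidation problem discussed above is what argues for keeping that TTL short.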

Just making it work is one thing. In a follow on article, I’d like to go through what it would take to stretch a cluster from one datacenter to another, and to make sure that the other considerations that we discussed in that presentation are covered.

Feedback?

by Adam Young at November 18, 2020 09:41 PM

November 16, 2020

RDO Blog

RDO Victoria Released

RDO Victoria Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Victoria for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Victoria is the 22nd release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

PLEASE NOTE: RDO Victoria provides packages for CentOS 8 and Python 3 only. Please use the Train release for CentOS 7 and Python 2.7.

Interesting things in the Victoria release include:

  • With the Victoria release, source tarballs are validated using the upstream GPG signature. This certifies that the source is identical to what is released upstream and ensures the integrity of the packaged source code.
  • With the Victoria release, openvswitch/ovn are not shipped as part of RDO. Instead RDO relies on builds from the CentOS NFV SIG.
  • Some new packages have been added to RDO during the Victoria release:
    • ansible-collections-openstack: This package includes OpenStack modules and plugins which are supported by the OpenStack community to help with the management of OpenStack infrastructure.
    • ansible-tripleo-ipa-server: This package contains Ansible for configuring the FreeIPA server for TripleO.
    • python-ibmcclient: This package contains the python library to communicate with HUAWEI iBMC based systems.
    • puppet-powerflex: This package contains the puppet module needed to deploy PowerFlex with TripleO.
    • The following packages have been retired from the RDO OpenStack distribution in the Victoria release:
      • The Congress project, an open policy framework for the cloud, has been retired upstream and from the RDO project in the Victoria release.
      • neutron-fwaas, the Firewall as a Service driver for neutron, is no longer maintained and has been removed from RDO.

Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/victoria/highlights.html.

Contributors
During the Victoria cycle, we saw the following new RDO contributors:

Amy Marrich (spotz)
Daniel Pawlik
Douglas Mendizábal
Lance Bragstad
Martin Chacon Piza
Paul Leimer
Pooja Jadhav
Qianbiao NG
Rajini Karthik
Sandeep Yadav
Sergii Golovatiuk
Steve Baker

Welcome to all of you and Thank You So Much for participating!

But we wouldn’t want to overlook anyone. A super massive Thank You to all 58 contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories:

Adam Kimball
Ade Lee
Alan Pevec
Alex Schultz
Alfredo Moralejo
Amol Kahat
Amy Marrich (spotz)
Arx Cruz
Bhagyashri Shewale
Bogdan Dobrelya
Cédric Jeanneret
Chandan Kumar
Damien Ciabrini
Daniel Pawlik
Dmitry Tantsur
Douglas Mendizábal
Emilien Macchi
Eric Harney
Francesco Pantano
Gabriele Cerami
Gael Chamoulaud
Gorka Eguileor
Grzegorz Grasza
Harald Jensås
Iury Gregory Melo Ferreira
Jakub Libosvar
Javier Pena
Joel Capitao
Jon Schlueter
Lance Bragstad
Lon Hohberger
Luigi Toscano
Marios Andreou
Martin Chacon Piza
Mathieu Bultel
Matthias Runge
Michele Baldessari
Mike Turek
Nicolas Hicher
Paul Leimer
Pooja Jadhav
Qianbiao.NG
Rabi Mishra
Rafael Folco
Rain Leander
Rajini Karthik
Riccardo Pittau
Ronelle Landy
Sagi Shnaidman
Sandeep Yadav
Sergii Golovatiuk
Slawek Kaplonski
Soniya Vyas
Sorin Sbarnea
Steve Baker
Tobias Urdin
Wes Hayutin
Yatin Karel

The Next Release Cycle
At the end of one release, focus shifts immediately to the next release, i.e. Wallaby.

Get Started
There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use TripleO and you’ll be running a production cloud in short order.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help
The RDO Project has the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.

Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Amy Marrich at November 16, 2020 02:27 PM

October 22, 2020

Adam Young

Adding Nodes to Ironic

TheJulia was kind enough to update the docs for Ironic to show me how to include IPMI information when creating nodes.

To delete all the old nodes

for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do openstack baremetal node delete $UUID; done

Nodes definition

I removed the ipmi common data from each definition as there is a password there, and I will set that afterwards on all nodes.

{
  "nodes": [
    {
      "ports": [
        {
          "address": "00:21:9b:93:d0:90"
        }
      ],
      "name": "zygarde",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": "192.168.123.10"
      }
    },
    {
      "ports": [
        {
          "address": "00:21:9b:9b:c4:21"
        }
      ],
      "name": "umbreon",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": "192.168.123.11"
      }
    },
    {
      "ports": [
        {
          "address": "00:21:9b:98:a3:1f"
        }
      ],
      "name": "zubat",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": "192.168.123.12"
      }
    }
  ]
}
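
Before enrolling anything, a quick sanity check of the file is cheap. This is just a habit of mine, not part of the Ironic workflow; it assumes jq, which this post already uses elsewhere (the heredoc recreates a minimal copy of the nodes file so the snippet stands alone):

```shell
# recreate a minimal nodes file (same shape as above) and sanity-check it
cat > nodes.ipmi.json <<'EOF'
{
  "nodes": [
    {"ports": [{"address": "00:21:9b:93:d0:90"}], "name": "zygarde",
     "driver": "ipmi", "driver_info": {"ipmi_address": "192.168.123.10"}},
    {"ports": [{"address": "00:21:9b:9b:c4:21"}], "name": "umbreon",
     "driver": "ipmi", "driver_info": {"ipmi_address": "192.168.123.11"}},
    {"ports": [{"address": "00:21:9b:98:a3:1f"}], "name": "zubat",
     "driver": "ipmi", "driver_info": {"ipmi_address": "192.168.123.12"}}
  ]
}
EOF

# valid JSON, and one IPMI address per node?
jq -r '.nodes[] | "\(.name)\t\(.driver_info.ipmi_address)"' nodes.ipmi.json
```

If jq exits non-zero here, the enroll step below would fail anyway, so this catches typos early.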

Create the nodes

openstack baremetal create ./nodes.ipmi.json

Check that the nodes are present

$ openstack baremetal node list
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name    | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 3fa4feae-0d5c-4e38-a012-29258d40651b | zygarde | None          | None        | enroll             | False       |
| 00965ad4-c972-46fa-948a-3ce87aecf5ac | umbreon | None          | None        | enroll             | False       |
| 8702ea0c-aa10-4542-9292-3b464fe72036 | zubat   | None          | None        | enroll             | False       |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+

Update IPMI common data

for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; 
do  openstack baremetal node set $UUID --driver-info ipmi_password=`cat ~/ipmi.password`  --driver-info   ipmi_username=admin   ; 
done

EDIT: I had ipmi_user before and it does not work. Needs to be ipmi_username.

Final Check

And if I look in the returned data for the definition, I can see the password is not readable:

$ openstack baremetal node show zubat  -f yaml | grep ipmi_password
  ipmi_password: '******'

Power On

for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do  openstack baremetal node power on $UUID  ; done

Change “on” to “off” to power off.

by Adam Young at October 22, 2020 03:14 AM

October 15, 2020

Adam Young

Introduction to Ironic

“I can do any thing. I can’t do everything.”

The sheer number of projects and problem domains covered by OpenStack was overwhelming. I never learned several of the other projects under the big tent. One project that is getting relevant to my day job is Ironic, the bare metal provisioning service. Here are my notes from spelunking the code.

The Setting

I want just Ironic. I don’t want Keystone (personal grudge) or Glance or Neutron or Nova.

Ironic will write files to e.g. /var/lib/tftp and /var/www/html/pxe and will not handle DHCP, but can make use of static DHCP configurations.

Ironic is just an API server at this point (a Python-based web service) that manages the above files, and that can also talk to the IPMI ports on my servers to wake them up and perform configurations on them.

I need to provide ISO images to Ironic so it can put them in the right place to boot them.

Developer steps

I checked the code out of git. I am working off the master branch.

I ran tox to ensure the unit tests are all at 100%

I have mysql already installed and running, but with a Keystone Database. I need to make a new one for ironic. The database name, user, and password are all going to be ironic, to keep things simple.

CREATE USER 'ironic'@'localhost' IDENTIFIED BY 'ironic';
create database ironic;
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost';
FLUSH PRIVILEGES;

Note that I did this as the Keystone user. That dude has way too much privilege… good thing this is JUST for DEVELOPMENT. This will be used to follow the steps in the developer quickstart docs. I also set the mysql URL in the config file to this:

connection = mysql+pymysql://ironic:ironic@localhost/ironic

Then I can run ironic db sync. Let's see what I got:

mysql ironic --user ironic --password
#....
MariaDB [ironic]> show tables;
+-------------------------------+
| Tables_in_ironic              |
+-------------------------------+
| alembic_version               |
| allocations                   |
| bios_settings                 |
| chassis                       |
| conductor_hardware_interfaces |
| conductors                    |
| deploy_template_steps         |
| deploy_templates              |
| node_tags                     |
| node_traits                   |
| nodes                         |
| portgroups                    |
| ports                         |
| volume_connectors             |
| volume_targets                |
+-------------------------------+
15 rows in set (0.000 sec)

OK, so the first table shows that Ironic uses Alembic to manage migrations. Unlike a SQLAlchemy-migrate versioning table, you can't just query this table to see how many migrations have been performed:

MariaDB [ironic]> select * from alembic_version;
+--------------+
| version_num  |
+--------------+
| cf1a80fdb352 |
+--------------+
1 row in set (0.000 sec)

Running The Services

The script to start the API server is:
ironic-api -d --config-file etc/ironic/ironic.conf.local

Looking in the file requirements.txt, I see that the web framework for Ironic is Pecan:

$ grep pecan requirements.txt 
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD

This is new to me. On Keystone, we converted from no framework to Flask. I’m guessing that if I look in the chain that starts with the ironic-api file, I will see a Pecan launcher for a web application. We can find that file with:

$ which ironic-api
/opt/stack/ironic/.tox/py3/bin/ironic-api

Looking in that file, it references ironic.cmd.api, which is the file ironic/cmd/api.py which in turn refers to ironic/common/wsgi_service.py. This in turn refers to ironic/api/app.py from which we can finally see that it imports pecan.

Now I am ready to run the two services. Like most of OpenStack, there is an API server and a “worker” server. In Ironic, this is called the Conductor. This maps fairly well to the Operator pattern in Kubernetes. In this pattern, the user makes changes to the API server via a web VERB on a URL, possibly with a body. These changes represent a desired state. The state change is then performed asynchronously. In OpenStack, the asynchronous communication is performed via a message queue, usually RabbitMQ. The Ironic team has a simpler mechanism used for development: JSON RPC. This happens to be the same mechanism used in FreeIPA.
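
Concretely, my reading of the developer quickstart is that you point the config at JSON RPC and run the conductor in a second terminal. This is a sketch, not gospel; the option name is my understanding, so verify it against your Ironic version:

```shell
# enable JSON RPC instead of a message queue -- option name per my reading
# of the docs; verify against your Ironic version:
#   [DEFAULT]
#   rpc_transport = json-rpc

# then start the conductor alongside ironic-api:
ironic-conductor -d --config-file etc/ironic/ironic.conf.local
```

With both processes up, the API server accepts the requests and the conductor does the actual work.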

Command Line

OK, once I got the service running, I had to do a little fiddling around to get the command lines to work. There was an old reference to

OS_AUTH_TYPE=token_endpoint

which needed to be replaced with

OS_AUTH_TYPE=none

Both are in the documentation, but only the second one will work.
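
For reference, the working environment ended up being just two variables; the endpoint value assumes the API server's default port on localhost, as used elsewhere in this post:

```shell
# no Keystone: tell the client to skip auth and go straight to the API
export OS_AUTH_TYPE=none
export OS_ENDPOINT=http://127.0.0.1:6385
```

With those exported, the baremetal commands below talk directly to the unauthenticated dev API.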

I can run the following commands:

$ baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| fake-hardware       | ayoungP40      |
+---------------------+----------------+
$ baremetal node list


curl

Let’s see if I can figure out from curl what APIs those are… There is only one version, and one link, so:

curl http://127.0.0.1:6385 | jq '.versions  | .[] | .links | .[] |  .href'

"http://127.0.0.1:6385/v1/"


Doing curl against that second link gives a list of the top level resources:

  • media_types
  • chassis
  • nodes
  • drivers

And I assume that, if I use curl to GET the drivers, I should see the fake driver entry from above:

$ curl "http://127.0.0.1:6385/v1/drivers" | jq '.drivers |.[] |.name'

"fake-hardware"

OK, that is enough to get started. I am going to try and do the same with the RPMs that we ship with OSP and see what I get there.

But that is a tale for another day.

Thank You

A conversation with Julia Kreger, a long-time core member of the Ironic project, helped get me oriented.

by Adam Young at October 15, 2020 07:27 PM

October 05, 2020

Lars Kellogg-Stedman

A note about running gpgv

I found the following error from gpgv to be a little opaque: gpgv: unknown type of key resource 'trustedkeys.kbx' gpgv: keyblock resource '/home/lars/.gnupg/trustedkeys.kbx': General error gpgv: Can't check signature: No public key It turns out that’s gpg-speak for “your trustedkeys.kbx keyring doesn’t exist”. That took longer to figure out than I care to admit. To get a key from your regular public keyring into your trusted keyring, you can run something like the following:
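
The feed cuts the excerpt off before the command; for completeness, the usual shape of the fix is something like the following. KEYID is a placeholder for the key you actually want to trust, and the keyring file name must match what gpgv looks for:

```shell
# export a key from the regular public keyring into the keyring gpgv reads;
# KEYID is a placeholder for the key you want to trust
gpg --export KEYID > ~/.gnupg/trustedkeys.gpg

# gpgv can also be pointed at an explicit keyring instead:
# gpgv --keyring ~/.gnupg/trustedkeys.gpg file.sig file
```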

October 05, 2020 12:00 AM

September 27, 2020

Lars Kellogg-Stedman

Installing metallb on OpenShift with Kustomize

Out of the box, OpenShift (4.x) on bare metal doesn’t come with any integrated load balancer support (when installed in a cloud environment, OpenShift typically makes use of the load balancing features available from the cloud provider). Fortunately, there are third party solutions available that are designed to work in bare metal environments. MetalLB is a popular choice, but requires some minor fiddling to get it to run properly on OpenShift.

September 27, 2020 12:00 AM

September 26, 2020

Lars Kellogg-Stedman

Vortex Core Keyboard Review

I’ve had my eye on the Vortex Core keyboard for a few months now, and this past week I finally broke down and bought one (with Cherry MX Brown switches). The Vortex Core is a 40% keyboard, which means it consists primarily of letter keys, a few lonely bits of punctuation, and several modifier keys to activate different layers on the keyboard. Physical impressions It’s a really cute keyboard. I’m a big fan of MX brown switches, and this keyboard is really a joy to type on, at least when you’re working primarily with the alpha keys.

September 26, 2020 12:00 AM

September 25, 2020

Lars Kellogg-Stedman

Building multi-architecture images with GitHub Actions

At work we have a cluster of IBM Power 9 systems running OpenShift. The problem with this environment is that nobody runs Power 9 on their desktop, and Docker Hub only offers automatic build support for the x86 architecture. This means there’s no convenient options for building Power 9 Docker images…or so I thought. It turns out that Docker provides GitHub actions that make the process of producing multi-architecture images quite simple.

September 25, 2020 12:00 AM

September 04, 2020

John Likes OpenStack

My tox cheat sheet

Install tox on centos8 undercloud deployed by tripleo-lab

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py
pip install tox
Render changes to tripleo docs:

cd /home/stack/tripleo-docs
tox -e deploy-guide
Check syntax errors before wasting CI time

tox -e linters
tox -e pep8
Run a specific unit test

cd /home/stack/tripleo-common
tox -e py36 -- tripleo_common.tests.test_inventory.TestInventory.test_get_roles_by_service

cd /home/stack/tripleo-ansible
tox -e py36 -- tripleo_ansible.tests.modules.test_derive_hci_parameters.TestTripleoDeriveHciParameters

by Unknown (noreply@blogger.com) at September 04, 2020 06:31 PM

August 18, 2020

Groningen Rain

Tomorrow! Morning! OpenStack! On! Packet!

[Dutch Lock Down Day One Hundred Fifty Five] Tomorrow! Morning! At 0900 GMT! On Twitch and YouTube and Twitter!!

by K Rain at August 18, 2020 08:03 PM

August 10, 2020

Lars Kellogg-Stedman

OpenShift and CNV: MAC address management in CNV 2.4

This is part of a series of posts about my experience working with OpenShift and CNV. In this post, I’ll look at how the recently released CNV 2.4 resolves some issues in managing virtual machines that are attached directly to local layer 2 networks In an earlier post, I discussed some issues around the management of virtual machine MAC addresses in CNV 2.3: in particular, that virtual machines are assigned a random MAC address not just at creation time but every time they boot.

August 10, 2020 12:00 AM

July 31, 2020

Groningen Rain

Dutch Lock Down Day One Hundred Thirty Eight

[And Then I Jumped Out Of The Airplane] Not literally. But OMG OMG OMG figuratively.

by K Rain at July 31, 2020 09:51 PM

July 30, 2020

Lars Kellogg-Stedman

OpenShift and CNV: Exposing virtualized services

This is the second in a series of posts about my experience working with OpenShift and CNV. In this post, I’ll be taking a look at how to expose services on a virtual machine once you’ve got it up and running. TL;DR Overview Connectivity options Direct attachment Using an OpenShift Service Exposing services on NodePorts Exposing services on cluster external IPs Exposing services using a LoadBalancer TL;DR Networking seems to be a weak area for CNV right now.

July 30, 2020 01:00 AM

OpenShift and CNV: Installer network requirements

This is the first in a series of posts about my experience working with OpenShift and CNV (“Container Native Virtualization”, a technology that allows you to use OpenShift to manage virtualized workloads in addition to the containerized workloads for which OpenShift is known). In this post, I’ll be taking a look at the installation experience, and in particular at how restrictions in our local environment interacted with the network requirements of the installer.

July 30, 2020 12:00 AM

July 28, 2020

Lars Kellogg-Stedman

You can't get an N95 mask: Now what?

[This is a guest post by my partner Alexandra van Geel.] TL;DR Hello everyone! The Basics: Masks vs. Respirators Question: What makes a good mask? Commercially available options The O2 Canada Curve Respirator The Vogmask valveless mask Some tips about comfort References Disclaimer: I am not an expert, just a private individual summarizing available information. Please correct me if I’ve gotten something wrong. TL;DR I suggest: (a) the Vogmask valveless mask or (b) the O2 Canada Curve Respirator.

July 28, 2020 12:00 AM

July 15, 2020

Matthias Runge

Config snippets for collectd configured by TripleO

This is mostly a brain dump for myself for later reference, but may be also useful for others.

As I wrote in an earlier post, collectd is configured on OpenStack TripleO driven deployments by a config file.

parameter_defaults:
    CollectdExtraPlugins:
        - write_http
    ExtraConfig:
        collectd::plugin::write_http::nodes:
            collectd:
                url: collectd1.tld.org
                metrics: true
                header: foobar

The collectd exec plugin comes in handy when launching some third-party script. However, the config may be a bit tricky; for example, to execute /usr/bin/true one would insert:

parameter_defaults:
    CollectdExtraPlugins:
     - exec
    ExtraConfig:
      collectd::plugin::exec::commands:
        healthcheck:
          user: "collectd"
          group: "collectd"
          exec: ["/usr/bin/true",]

by mrunge at July 15, 2020 01:00 PM

June 18, 2020

Groningen Rain

Dutch Lock Down Day Ninety Five

[See You Next Year!] Sadly, the recording is not available. Yet.

by K Rain at June 18, 2020 06:39 PM

June 15, 2020

Groningen Rain

June 07, 2020

Lars Kellogg-Stedman

Grove Beginner Kit for Arduino (part 2): First look

The folks at Seeed Studio were kind enough to send me a Grove Beginner Kit for Arduino for review. That’s a mouthful of a name for a compact little kit! The Grove Beginner Kit for Arduino (henceforth “the Kit”, because ain’t nobody got time to type that out more than a few times in a single article) is about 8.5 x 5 x 1 inches. Closed, you could fit two of them on a piece of 8.

June 07, 2020 12:00 AM

June 03, 2020

John Likes OpenStack

May 28, 2020

RDO Blog

RDO Ussuri Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Ussuri for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Ussuri is the 21st release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.

The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-ussuri/.

The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.

All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.

PLEASE NOTE: At this time, RDO Ussuri provides packages for CentOS 8 only. Please use the previous release, Train, for CentOS 7 and Python 2.7.

Interesting things in the Ussuri release include:
  • Within the Ironic project, a bare metal service capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner, UEFI and device selection are now available for Software RAID.
  • The Kolla project, the containerised deployment of OpenStack used to provide production-ready containers and deployment tools for operating OpenStack clouds, streamlined the configuration of external Ceph integration, making it easy to go from Ceph-Ansible-deployed Ceph cluster to enabling it in OpenStack.
Other improvements include:
  • Support for IPv6 is available within the Kuryr project, the bridge between container framework networking models and OpenStack networking abstractions.
  • Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/ussuri/highlights.html.
  • A new Neutron driver, networking-omnipath, has been included in the RDO distribution, which enables the Omni-Path switching fabric in an OpenStack cloud.
  • The OVN Neutron driver has been merged into the main Neutron repository from networking-ovn.
Contributors
During the Ussuri cycle, we saw the following new RDO contributors:
  • Amol Kahat 
  • Artom Lifshitz 
  • Bhagyashri Shewale 
  • Brian Haley 
  • Dan Pawlik 
  • Dmitry Tantsur 
  • Dougal Matthews 
  • Eyal 
  • Harald Jensås 
  • Kevin Carter 
  • Lance Albertson 
  • Martin Schuppert 
  • Mathieu Bultel 
  • Matthias Runge 
  • Miguel Garcia 
  • Riccardo Pittau 
  • Sagi Shnaidman 
  • Sandeep Yadav 
  • SurajP 
  • Toure Dunnon 

Welcome to all of you and Thank You So Much for participating!

But we wouldn’t want to overlook anyone. A super massive Thank You to all 54 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

  • Adam Kimball 
  • Alan Bishop 
  • Alan Pevec 
  • Alex Schultz 
  • Alfredo Moralejo 
  • Amol Kahat 
  • Artom Lifshitz 
  • Arx Cruz 
  • Bhagyashri Shewale 
  • Brian Haley 
  • Cédric Jeanneret 
  • Chandan Kumar
  • Dan Pawlik
  • David Moreau Simard 
  • Dmitry Tantsur 
  • Dougal Matthews 
  • Emilien Macchi 
  • Eric Harney 
  • Eyal 
  • Fabien Boucher 
  • Gabriele Cerami 
  • Gael Chamoulaud 
  • Giulio Fidente 
  • Harald Jensås 
  • Jakub Libosvar 
  • Javier Peña 
  • Joel Capitao 
  • Jon Schlueter 
  • Kevin Carter 
  • Lance Albertson 
  • Lee Yarwood 
  • Marc Dequènes (Duck) 
  • Marios Andreou 
  • Martin Mágr 
  • Martin Schuppert 
  • Mathieu Bultel 
  • Matthias Runge 
  • Miguel Garcia 
  • Mike Turek 
  • Nicolas Hicher 
  • Rafael Folco 
  • Riccardo Pittau 
  • Ronelle Landy 
  • Sagi Shnaidman 
  • Sandeep Yadav 
  • Soniya Vyas
  • Sorin Sbarnea 
  • SurajP 
  • Toure Dunnon 
  • Tristan de Cacqueray 
  • Victoria Martinez de la Cruz 
  • Wes Hayutin 
  • Yatin Karel
  • Zoltan Caplovic
The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Victoria, which has an estimated GA the week of 12-16 October 2020. The full schedule is available at https://releases.openstack.org/victoria/schedule.html.

Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 25-26 June 2020 for Milestone One and 17-18 September 2020 for Milestone Three.

Get Started
There are three ways to get started with RDO.

To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.

For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.

Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.

Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.

The #rdo channel on Freenode IRC is also an excellent place to find and give help.

We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.

Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.

Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.

by Iury Gregory Melo Ferreira at May 28, 2020 08:49 AM

May 15, 2020

Groningen Rain

Dutch Lock Down Day Sixty One

Literally JUST NOW everything was shipped back to Red Hat, to the RDO Community, and to colleagues. And I have absolutely no idea what to do with myself.

by K Rain at May 15, 2020 03:34 PM

May 07, 2020

Groningen Rain

Dutch Lock Down Day Fifty Three

"Alternative Title: So Long Farewell Auf Wiedersehen Goodbye"

by K Rain at May 07, 2020 09:54 AM

April 27, 2020

Groningen Rain

Dutch Lock Down Day Forty Three

The Minions are headed back to BSO / daycare / school next week and the looming deadline is taking its toll on the oldest. OR it’s a full moon or something. OR he just doesn’t like King’s Day. #PoorLittle But first the news: King’s Day from your couch – our step by step guide Mayors …

by K Rain at April 27, 2020 02:22 PM

April 23, 2020

Groningen Rain

Dutch Lock Down Day Thirty Nine

"While there's no dedicated 'booth' / chat room for RDO / TripleO / PackStack but I'll be in the community rooms representin' and advocatin'."

by K Rain at April 23, 2020 07:34 PM

April 19, 2020

RDO Blog

Community Blog Round Up 19 April 2020

Photo by Florian Krumm on Unsplash

Three incredible articles by Lars Kellogg-Stedman aka oddbit – mostly about adjustments and such made due to COVID-19. I hope you’re keeping safe at home, RDO Stackers! Wash your hands and enjoy these three fascinating articles about keyboards, arduino and machines that go ping…

Some thoughts on Mechanical Keyboards by oddbit

Since we’re all stuck in the house and working from home these days, I’ve had to make some changes to my home office. One change in particular was requested by my wife, who now shares our rather small home office space with me: after a week or so of calls with me clattering away on my old Das Keyboard 3 Professional in the background, she asked if I could get something that was maybe a little bit quieter.

Read more at https://blog.oddbit.com/post/2020-04-15-some-thoughts-on-mechanical-ke/

Grove Beginner Kit for Arduino (part 1) by oddbit

The folks at Seeed Studio have just released the Grove Beginner Kit for Arduino, and they asked if I would be willing to take a look at it in exchange for a free kit. At first glance it reminds me of the Radio Shack (remember when they were cool?) electronics kit I had when I was a kid – but somewhat more advanced. I’m excited to take a closer look, but given shipping these days means it’s probably a month away at least.

Read more at https://blog.oddbit.com/post/2020-04-15-grove-beginner-kit-for-arduino/

I see you have the machine that goes ping… by oddbit

We’re all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won’t go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn’t about simple solutions!

Read more at https://blog.oddbit.com/post/2020-03-20-i-see-you-have-the-machine-tha/

by Rain Leander at April 19, 2020 09:45 AM

April 15, 2020

Lars Kellogg-Stedman

Some thoughts on Mechanical Keyboards

Since we’re all stuck in the house and working from home these days, I’ve had to make some changes to my home office. One change in particular was requested by my wife, who now shares our rather small home office space with me: after a week or so of calls with me clattering away on my old Das Keyboard 3 Professional in the background, she asked if I could get something that was maybe a little bit quieter.

April 15, 2020 12:00 AM

Grove Beginner Kit for Arduino (part 1)

The folks at Seeed Studio have just released the Grove Beginner Kit for Arduino, and they asked if I would be willing to take a look at it in exchange for a free kit. At first glance it reminds me of the Radio Shack (remember when they were cool?) electronics kit I had when I was a kid – but somewhat more advanced. I’m excited to take a closer look, but given shipping these days means it’s probably a month away at least.

April 15, 2020 12:00 AM

March 23, 2020

RDO Blog

Tips, Tricks, and Best Practices for Distributed RDO Teams

While a lot of RDO contributors are remote, there are many more who are not and now find themselves in lock down or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.

Connectivity

I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.

Communicate with the family to work out a schedule or join the call without video so you can still participate.

Manage Expectations

Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.

Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.

This will be an ongoing conversation that evolves as projects and situations evolve.

Know Thyself

Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.

Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.

Some people NEED to physically be in the office around other people. Some will be totally content to work from home.

Sure, some things aren’t optional, but work with what you can.

Figure out what works for you.

Embrace #PhysicalDistance Not #SocialDistance

Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.

Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.

For that matter, don’t forget to reach out to your friends and family.

Even introverts need to maintain a certain level of connection.

Further Reading

There’s a ton of information about working remotely / distributed productivity and this is, by no means, an exhaustive list, but to get you started:

Now let’s hear from you!

What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.

And, as always, thank you for being a part of the RDO community!

by Rain Leander at March 23, 2020 03:14 PM

March 20, 2020

Lars Kellogg-Stedman

I see you have the machine that goes ping...

We’re all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won’t go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn’t about simple solutions!

March 20, 2020 12:00 AM

March 17, 2020

RDO Blog

Community Blog Round Up 17 March 2020

Oddbit writes two incredible articles – one about configuring a passwordless serial console for the Raspberry Pi and another about configuring Open vSwitch with nmcli – while Carlos Camacho publishes Emilien Macchi’s deep dive demo on containerized deployments sans Paunch.

A passwordless serial console for your Raspberry Pi by oddbit

legendre on #raspbian asked:

How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh

In this article, we’ll walk through one way of implementing this configuration.

Read more at https://blog.oddbit.com/post/2020-02-24-a-passwordless-serial-console/

TripleO deep dive session #14 (Containerized deployments without paunch) by Carlos Camacho

This is the 14th release of the TripleO “Deep Dive” sessions. Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.

Read more at https://www.anstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html

Configuring Open vSwitch with nmcli by oddbit

I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.

Read more at https://blog.oddbit.com/post/2020-02-15-configuring-open-vswitch-with/

by Rain Leander at March 17, 2020 03:23 PM

February 24, 2020

Lars Kellogg-Stedman

A passwordless serial console for your Raspberry Pi

legendre on #raspbian asked:

How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh

In this article, we’ll walk through one way of implementing this configuration.

Activate the serial port

Raspbian automatically starts a getty on the serial port if one is available. You should see an agetty process associated with your serial port when you run ps -ef.
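As a rough sketch of the kind of configuration the full article walks through (the post’s exact approach may differ): a systemd drop-in can replace the stock serial getty with an auto-login agetty, so the serial line goes straight to a shell instead of a login prompt. The device name ttyS0, the pi user, and the drop-in file name are assumptions here; some Pi models expose the console as ttyAMA0.

```shell
# Override the stock serial getty with a drop-in so agetty logs the
# "pi" user in automatically instead of prompting for a password.
# Assumes the serial console device is ttyS0 (may be ttyAMA0).
sudo mkdir -p /etc/systemd/system/serial-getty@ttyS0.service.d
sudo tee /etc/systemd/system/serial-getty@ttyS0.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin pi %I 1200 vt100
EOF
sudo systemctl daemon-reload
sudo systemctl restart serial-getty@ttyS0.service
```

agetty’s default framing is already 8-N-1, so only the 1200 baud rate from legendre’s question needs to be given explicitly.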

February 24, 2020 12:00 AM

February 18, 2020

Carlos Camacho

TripleO deep dive session #14 (Containerized deployments without paunch)

This is the 14th release of the TripleO “Deep Dive” sessions.

Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.

You can access the presentation.

So please, check the full session content on the TripleO YouTube channel.



Please check the sessions index to have access to all available content.

by Carlos Camacho at February 18, 2020 12:00 AM

February 15, 2020

Lars Kellogg-Stedman

Configuring Open vSwitch with nmcli

I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.
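A hedged sketch of what a NetworkManager-managed OVS bridge can look like, using generic commands from NetworkManager’s openvswitch plugin rather than necessarily the article’s exact ones. The interface names br0 and eth0, the connection names, and the example VLAN tag are all assumptions, and the plugin package (e.g. NetworkManager-ovs on Fedora-family systems) must be installed.

```shell
# Create the OVS bridge, a port on it, and an internal interface
# that carries the host's IP configuration.
nmcli con add type ovs-bridge conn.interface br0 con-name br0
nmcli con add type ovs-port conn.interface port-br0 master br0 con-name ovs-port-br0
nmcli con add type ovs-interface slave-type ovs-port conn.interface br0 \
    master ovs-port-br0 con-name ovs-if-br0 ipv4.method auto

# Attach the physical NIC to the bridge via its own OVS port.
nmcli con add type ovs-port conn.interface port-eth0 master br0 con-name ovs-port-eth0
nmcli con add type ethernet conn.interface eth0 master ovs-port-eth0 con-name ovs-if-eth0

# Optionally make a port an access port on an isolated vlan
# (tag 10 is just an example value).
nmcli con modify ovs-port-eth0 ovs-port.tag 10
```

The bridge/port/interface layering mirrors how Open vSwitch itself models things: NetworkManager needs one connection profile per OVS object, which is why three profiles are created before any traffic flows.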

February 15, 2020 12:00 AM