Recently, I bought a couple of Raspberry Pi 4 boards, one with 4 GB and two equipped with 8 GB of RAM. When I bought the first one, there was no option to get more memory. However, I saw this as a bit of a game and thought I'd give it a try. I also bought SSDs for these and USB3 to SATA adapters. Before purchasing anything, you may want to take a look at James Archer's page. Unfortunately, there are a couple of adapters on the market which don't work that well.
Initially, I followed the description to deploy Fedora 32; it works the same way for Fedora 33 Server, which is what I used here.
Because Ceph requires a partition (or better: a whole disk), I used the traditional setup with partitions and no LVM.
git clone https://github.com/kubernetes-sigs/kubespray
cd kubespray
I followed the documentation and created an inventory. For the container runtime, I picked crio, and calico as the network plugin.
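For reference, here is roughly what that inventory setup looks like; this is a sketch based on the kubespray documentation rather than my exact commands, and the variable names (container_manager, kube_network_plugin) should be double-checked against the repo:
# copy the sample inventory and generate hosts.yaml from the node IPs
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.1.21 192.168.1.22 192.168.1.23)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
# then set, in inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml:
#   container_manager: crio
#   kube_network_plugin: calico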
Because of an issue, I had to patch roles/download/defaults/main.yml:
diff --git a/roles/download/defaults/main.yml b/roles/download/defaults/main.yml
index a97be5a6..d4abb341 100644
--- a/roles/download/defaults/main.yml
+++ b/roles/download/defaults/main.yml
@@ -64,7 +64,7 @@ quay_image_repo: "quay.io"
# TODO(mattymo): Move calico versions to roles/network_plugins/calico/defaults
# after migration to container download
-calico_version: "v3.16.5"
+calico_version: "v3.15.2"
calico_ctl_version: "{{ calico_version }}"
calico_cni_version: "{{ calico_version }}"
calico_policy_version: "{{ calico_version }}"
@@ -520,13 +520,13 @@ etcd_image_tag: "{{ etcd_version }}{%- if image_arch != 'amd64' -%}-{{ image_arc
flannel_image_repo: "{{ quay_image_repo }}/coreos/flannel"
flannel_image_tag: "{{ flannel_version }}"
calico_node_image_repo: "{{ quay_image_repo }}/calico/node"
-calico_node_image_tag: "{{ calico_version }}"
+calico_node_image_tag: "{{ calico_version }}-arm64"
calico_cni_image_repo: "{{ quay_image_repo }}/calico/cni"
-calico_cni_image_tag: "{{ calico_cni_version }}"
+calico_cni_image_tag: "{{ calico_cni_version }}-arm64"
calico_policy_image_repo: "{{ quay_image_repo }}/calico/kube-controllers"
-calico_policy_image_tag: "{{ calico_policy_version }}"
+calico_policy_image_tag: "{{ calico_policy_version }}-arm64"
calico_typha_image_repo: "{{ quay_image_repo }}/calico/typha"
-calico_typha_image_tag: "{{ calico_typha_version }}"
+calico_typha_image_tag: "{{ calico_typha_version }}-arm64"
pod_infra_image_repo: "{{ kube_image_repo }}/pause"
pod_infra_image_tag: "{{ pod_infra_version }}"
install_socat_image_repo: "{{ docker_image_repo }}/xueshanf/install-socat"
Ceph requires a raw partition. Make sure you have an empty partition available.
[root@node1 ~]# lsblk -f
NAME   FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1 vfat   FAT32 UEFI  7DC7-A592
├─sda2 vfat   FAT32       CB75-24A9                             567.9M     1% /boot/efi
├─sda3 xfs                cab851cb-1910-453b-ae98-f6a2abc7f0e0  804.7M    23% /boot
├─sda4
├─sda5 xfs                6618a668-f165-48cc-9441-98f4e2cc0340   27.6G    45% /
└─sda6
In my case, sda4 and sda6 are not formatted. sda4 is very small and will be ignored; sda6 will be used.
Using rook is pretty straightforward:
git clone --single-branch --branch v1.5.4 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
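To check that the cluster actually comes up, the toolbox from the same examples directory is handy. A quick sketch (the rook-ceph namespace and the rook-ceph-tools deployment name are the upstream defaults, so verify against your manifests):
kubectl create -f toolbox.yaml
kubectl -n rook-ceph get pods
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status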
While reviewing the comments on the Ironic spec for Secure RBAC, I had to ask myself if the “project” construct makes sense for Ironic. I still think it does, but I’ll write this down to see if I can clarify it for me, and maybe for you, too.
Baremetal servers change. The whole point of Ironic is to control the change of Baremetal servers from inanimate pieces of metal to “really useful engines.” This needs to happen in a controlled and unsurprising way.
Ironic the server does what it is told. If a new piece of metal starts sending out DHCP requests, Ironic is going to PXE boot it. This is the start of this new piece of metal's journey of self discovery. At least as far as Ironic is concerned.
But really, someone had to rack and wire said piece of metal. Likely the person that did this is not the person that is going to run workloads on it in the end. They might not even work for the same company; they might be a delivery person from Dell or Supermicro. So, once they are done with it, they don’t own it any more.
Who does? Who owns a piece of metal before it is enrolled in the OpenStack baremetal service?
No one. It does not exist.
Ok, so let's go back to someone pushing the button, booting our server for the first time, and it doing its PXE boot thing.
Or, we get the MAC address and enter that into the ironic database, so that when it does boot, we know about it.
Either way, Ironic is really the playground monitor, just making sure it plays nice.
What if Ironic is a multi-tenant system? Someone needs to be able to transfer the baremetal server from wherever it lands up front to the people that need to use it.
I suspect that transferring metal from project to project is going to be one of the main use cases after the sun has set on day one.
So, who should be allowed to say what project a piece of baremetal can go to?
Well, in Keystone, we have the idea of hierarchy. A Project is owned by a domain, and a project can be nested inside another project.
But this information is not passed down to Ironic. There is no way to get a token for a project that shows its parent information. But a remote service could query the project hierarchy from Keystone.
Say I want to transfer a piece of metal from one project to another. Should I have a token for the source project or the destination project? Ok, dumb question, I should definitely have a token for the source project. The smart question is whether I should also have a token for the destination project.
Sure, why not. Two tokens. One with the “delete” role and one with the “create” role.
The only problem is that nothing like this exists in OpenStack. But it should.
We could fake it with hierarchy; I can pass things up and down the project tree. But that really does not do one bit of good. People don’t really use the tree like that. They should. We built a perfectly nice tree and they ignore it. Poor, ignored, sad, lonely tree.
Actually, it has no feelings. Please stop anthropomorphising the tree.
What you could do is create the destination object, kind of a potential piece-of-metal or metal-receiver. This receiver object gets a UUID. You pass this UUID to the “move” API. But you call the MOVE API with a token for the source project. The move is done atomically. Let's call this thing identified by a UUID a move-request.
The order of operations could be done in reverse. The operator could create the move request on the source, and then pass that to the receiver. This might actually make more sense, as you need to know about the object before you can even think to move it.
Both workflows seem to have merit.
And…this concept seems to be something that OpenStack needs in general.
In fact, why should this not be a generic API? I mean, it would have to be per service, but the same API could be used to transfer VMs between projects in Nova and Volumes between projects in Cinder. The API would have two verbs: one for creating a new move request, and one for accepting it.
POST /thingy/v3.14/resource?resource_id=abcd&destination=project_id
If this is called with a token, it needs to be scoped. If it is scoped to the project_id in the API, it creates a receiving type request. If it is scoped to the project_id that owns the resource, it is a sending type request. Either way, it returns a URL. Call GET on that URL and you get information about the transfer. Call PATCH on it with the appropriately scoped token, and the resource is transferred. And maybe enough information to prove that you know what you are doing: maybe you have to specify the source and target projects in that patch request.
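To make this concrete, here is a rough sketch of what the exchange could look like; the paths, payloads, and token handling are invented for illustration, since no such API exists yet:
# the source project creates the move request for a resource it owns
curl -X POST -H "X-Auth-Token: $SOURCE_TOKEN" \
  "https://example.com/thingy/v3.14/resource?resource_id=abcd&destination=<dest_project_id>"
# ...which returns a URL such as https://example.com/thingy/v3.14/move-requests/<uuid>
# the destination project inspects the transfer, then accepts it
curl -H "X-Auth-Token: $DEST_TOKEN" "https://example.com/thingy/v3.14/move-requests/<uuid>"
curl -X PATCH -H "X-Auth-Token: $DEST_TOKEN" \
  -d '{"source": "<source_project_id>", "destination": "<dest_project_id>"}' \
  "https://example.com/thingy/v3.14/move-requests/<uuid>"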
A foolish consistency is the hobgoblin of little minds.
Edit: OK, this is not a new idea. Cinder went through the same thought process according to Duncan Thomas. The result is this API: https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfer
Which looks like it then morphed to this one:
https://docs.openstack.org/api-ref/block-storage/v3/index.html#volume-transfers-volume-transfers-3-55-or-later
I've had to re-teach myself how to do this so I'm writing my own notes.
Prerequisites:
Once you have your environment ready, run a test with the name from step 3:
./scripts/run-local-test tripleo_derived_parameters
Some tests in CI are configured to use `--skip-tags`. You can do this for your local tests too by setting the appropriate environment variables. For example:
export TRIPLEO_JOB_ANSIBLE_ARGS="--skip-tags run_ceph_ansible,run_uuid_ansible,ceph_client_rsync,clean_fetch_dir"
./scripts/run-local-test tripleo_ceph_run_ansible
Look back at our Pushing Keystone over the Edge presentation from the OpenStack Summit. Many of the points we make are problems faced by any application trying to scale across multiple datacenters. Cassandra is a database designed to deal with this level of scale, so Cassandra may well be a better choice than MySQL or another RDBMS as a datastore for Keystone. What would it take to enable Cassandra support for Keystone?
Let's start with the easy part: defining the tables. Let's look at how we define the Federation back end for SQL. We use SQLAlchemy to handle the migrations; we will need something comparable for Cassandra Query Language (CQL), but we also need to translate the table definitions themselves.
Before we create the tables, we need to create a keyspace. I am going to make separate keyspaces for each of the subsystems in Keystone: Identity, Assignment, Federation, and so on. Here’s the Federated one:
CREATE KEYSPACE keystone_federation WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '3'} AND durable_writes = true;
The Identity provider table is defined like this:
idp_table = sql.Table(
    'identity_provider',
    meta,
    sql.Column('id', sql.String(64), primary_key=True),
    sql.Column('enabled', sql.Boolean, nullable=False),
    sql.Column('description', sql.Text(), nullable=True),
    mysql_engine='InnoDB',
    mysql_charset='utf8')
idp_table.create(migrate_engine, checkfirst=True)
The comparable CQL to create a table would look like this:
CREATE TABLE identity_provider (id text PRIMARY KEY , enables boolean , description text);
However, when I describe the schema to view the table definition, we see that there are many tuning and configuration parameters that are defaulted:
CREATE TABLE federation.identity_provider (
id text PRIMARY KEY,
description text,
enables boolean
) WITH additional_write_policy = '99p'
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND cdc = false
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND default_time_to_live = 0
AND extensions = {}
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair = 'BLOCKING'
AND speculative_retry = '99p';
I don’t know Cassandra well enough to say if these are sane defaults to have in production. I do know that someone, somewhere, is going to want to tweak them, and we are going to have to provide a means to do so without battling the upgrade scripts. I suspect we are going to want to only use the short form (what I typed into the CQL prompt) in the migrations, not the form with all of the options. In addition, we might want an if not exists clause on the table creation to allow people to make these changes themselves. Then again, that might make things get out of sync. Hmmm.
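To illustrate what I mean by the short form, a migration could simply feed the minimal statement to cqlsh and lean on IF NOT EXISTS for idempotence; this is only a sketch, not something wired into any upgrade scripts yet:
cqlsh -k keystone_federation -e "CREATE TABLE IF NOT EXISTS identity_provider (id text PRIMARY KEY, enabled boolean, description text);"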
There are three more entities in this back end:
CREATE TABLE federation_protocol (id text, idp_id text, mapping_id text, PRIMARY KEY(id, idp_id) );
cqlsh:federation> CREATE TABLE mapping (id text primary key, rules text, );
CREATE TABLE service_provider ( auth_url text, id text primary key, enabled boolean, description text, sp_url text, RELAY_STATE_PREFIX text);
One thing that is interesting is that we will not be limiting the ID fields to 32, 64, or 128 characters. There is no performance benefit to doing so in Cassandra, nor is there any way to enforce the length limits. From a Keystone perspective, there is not much value either; we still need to validate the UUIDs in Python code. We could autogenerate the UUIDs in Cassandra, and there might be some benefit to that, but it would diverge from the logic in the Keystone code, and explode the test matrix.
There is only one foreign key in the SQL section; the federation protocol has an idp_id that points to the identity provider table. We’ll have to accept this limitation and ensure the integrity is maintained in code. We can do this by looking up the Identity provider before inserting the protocol entry. Since creating a Federated entity is a rare and administrative task, the risk here is vanishingly small. It will be more significant elsewhere.
For access to the database, we should probably use Flask-CQLAlchemy. Fortunately, Keystone is already a Flask based project, so this makes the two projects align.
For migration support, it looks like the best option out there is cassandra-migrate.
An effort like this would best be started out of tree, with an expectation that it would be merged in once it had shown a degree of maturity. Thus, I would put it into a namespace that would not conflict with the existing keystone project. The python imports would look like:
from keystone.cassandra import migrations
from keystone.cassandra import identity
from keystone.cassandra import federation
This could go in its own git repo and be separately pip installed for development. The entrypoints would be registered such that the configuration file would have entries like:
[application_credential]
driver = cassandra
Any tuning of the database could be put under a [cassandra] section of the conf file, or tuning for individual sections could be in keys prefixed with cassandra_ in the appropriate sections, such as application_credential as shown above.
It might be interesting to implement a Cassandra token backend and use the default_time_to_live value on the table to control the lifespan and automate the cleanup of the tables. This might provide some performance benefit over the fernet approach, as the token data would be cached. However, the drawbacks due to token invalidation upon change of data would far outweigh the benefits unless the TTL was very short, perhaps 5 minutes.
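For what it is worth, the TTL idea maps directly onto a table option. A sketch, with an invented column layout, for a five minute lifespan:
cqlsh -k keystone_token -e "CREATE TABLE token (id text PRIMARY KEY, user_id text, data text) WITH default_time_to_live = 300;"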
Just making it work is one thing. In a follow on article, I’d like to go through what it would take to stretch a cluster from one datacenter to another, and to make sure that the other considerations that we discussed in that presentation are covered.
Feedback?
RDO Victoria Released
The RDO community is pleased to announce the general availability of the RDO build for OpenStack Victoria for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Victoria is the 22nd release from the OpenStack project, which is the work of more than 1,000 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/.
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
PLEASE NOTE: RDO Victoria provides packages for CentOS8 and Python 3 only. Please use the Train release for CentOS7 and Python 2.7.
Interesting things in the Victoria release include:
Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/victoria/highlights.
Contributors
During the Victoria cycle, we saw the following new RDO contributors:
Amy Marrich (spotz)
Daniel Pawlik
Douglas Mendizábal
Lance Bragstad
Martin Chacon Piza
Paul Leimer
Pooja Jadhav
Qianbiao NG
Rajini Karthik
Sandeep Yadav
Sergii Golovatiuk
Steve Baker
Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 58 contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories:
Adam Kimball
Ade Lee
Alan Pevec
Alex Schultz
Alfredo Moralejo
Amol Kahat
Amy Marrich (spotz)
Arx Cruz
Bhagyashri Shewale
Bogdan Dobrelya
Cédric Jeanneret
Chandan Kumar
Damien Ciabrini
Daniel Pawlik
Dmitry Tantsur
Douglas Mendizábal
Emilien Macchi
Eric Harney
Francesco Pantano
Gabriele Cerami
Gael Chamoulaud
Gorka Eguileor
Grzegorz Grasza
Harald Jensås
Iury Gregory Melo Ferreira
Jakub Libosvar
Javier Pena
Joel Capitao
Jon Schlueter
Lance Bragstad
Lon Hohberger
Luigi Toscano
Marios Andreou
Martin Chacon Piza
Mathieu Bultel
Matthias Runge
Michele Baldessari
Mike Turek
Nicolas Hicher
Paul Leimer
Pooja Jadhav
Qianbiao.NG
Rabi Mishra
Rafael Folco
Rain Leander
Rajini Karthik
Riccardo Pittau
Ronelle Landy
Sagi Shnaidman
Sandeep Yadav
Sergii Golovatiuk
Slawek Kaplonski
Soniya Vyas
Sorin Sbarnea
Steve Baker
Tobias Urdin
Wes Hayutin
Yatin Karel
The Next Release Cycle
At the end of one release, focus shifts immediately to the next release, i.e. Wallaby.
Get Started
There are three ways to get started with RDO.
To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works.
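For example, on a fresh CentOS 8 machine the Packstack route is roughly the following; check the RDO quickstart for the exact, current package names before copying:
sudo dnf install -y centos-release-openstack-victoria
sudo dnf update -y
sudo dnf install -y openstack-packstack
sudo packstack --allinone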
For a production deployment of RDO, use TripleO and you’ll be running a production cloud in short order.
Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
Get Help
The RDO Project has the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
TheJulia was kind enough to update the docs for Ironic to show me how to include IPMI information when creating nodes.
for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do openstack baremetal node delete $UUID; done
I removed the ipmi common data from each definition as there is a password there, and I will set that afterwards on all nodes.
{
  "nodes": [
    {
      "ports": [
        {
          "address": "00:21:9b:93:d0:90"
        }
      ],
      "name": "zygarde",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": "192.168.123.10"
      }
    },
    {
      "ports": [
        {
          "address": "00:21:9b:9b:c4:21"
        }
      ],
      "name": "umbreon",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": "192.168.123.11"
      }
    },
    {
      "ports": [
        {
          "address": "00:21:9b:98:a3:1f"
        }
      ],
      "name": "zubat",
      "driver": "ipmi",
      "driver_info": {
        "ipmi_address": "192.168.123.12"
      }
    }
  ]
}
openstack baremetal create ./nodes.ipmi.json
$ openstack baremetal node list
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
| 3fa4feae-0d5c-4e38-a012-29258d40651b | zygarde | None | None | enroll | False |
| 00965ad4-c972-46fa-948a-3ce87aecf5ac | umbreon | None | None | enroll | False |
| 8702ea0c-aa10-4542-9292-3b464fe72036 | zubat | None | None | enroll | False |
+--------------------------------------+---------+---------------+-------------+--------------------+-------------+
for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ;
do openstack baremetal node set $UUID --driver-info ipmi_password=`cat ~/ipmi.password` --driver-info ipmi_username=admin ;
done
EDIT: I had ipmi_user before and it does not work. Needs to be ipmi_username.
And if I look in the returned data for the definition, we see the password is not readable:
$ openstack baremetal node show zubat -f yaml | grep ipmi_password
ipmi_password: '******'
for UUID in `openstack baremetal node list -f json | jq -r '.[] | .UUID' ` ; do openstack baremetal node power on $UUID ; done
Change “on” to “off” to power off.
“I can do any thing. I can’t do everything.”
The sheer number of projects and problem domains covered by OpenStack was overwhelming. I never learned several of the other projects under the big tent. One project that is getting relevant to my day job is Ironic, the bare metal provisioning service. Here are my notes from spelunking the code.
I want just Ironic. I don’t want Keystone (personal grudge) or Glance or Neutron or Nova.
Ironic will write files to e.g. /var/lib/tftp and /var/www/html/pxe. It will not handle DHCP, but it can make use of static DHCP configurations (see the sketch below).
Ironic is just an API server at this point (a Python based web service) that manages the above files, and that can also talk to the IPMI ports on my servers to wake them up and perform configurations on them.
I need to provide ISO images to Ironic so it can put them in the right place to boot them.
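Since Ironic itself will not manage DHCP in this setup, the static configuration mentioned above could live in something like dnsmasq. This is purely my own sketch of what such an entry might look like, with made-up addresses:
# /etc/dnsmasq.d/ironic.conf (hypothetical)
enable-tftp
tftp-root=/var/lib/tftp
dhcp-range=192.168.0.100,192.168.0.200
dhcp-host=52:54:00:12:34:56,192.168.0.10
dhcp-boot=pxelinux.0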
I checked the code out of git. I am working off the master branch.
I ran tox to ensure the unit tests are all at 100%
I have mysql already installed and running, but with a Keystone Database. I need to make a new one for ironic. The database name, user, and password are all going to be ironic, to keep things simple.
CREATE USER 'ironic'@'localhost' IDENTIFIED BY 'ironic';
create database ironic;
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost';
FLUSH PRIVILEGES;
Note that I did this as the Keystone user. That dude has way too much privilege…good thing this is JUST for DEVELOPMENT. This will be used to follow the steps in the developers quickstart docs. I also set the mysql URL in the config file to this:
connection = mysql+pymysql://ironic:ironic@localhost/ironic
Then I can run ironic db sync. Let's see what I got:
mysql ironic --user ironic --password
#....
MariaDB [ironic]> show tables;
+-------------------------------+
| Tables_in_ironic |
+-------------------------------+
| alembic_version |
| allocations |
| bios_settings |
| chassis |
| conductor_hardware_interfaces |
| conductors |
| deploy_template_steps |
| deploy_templates |
| node_tags |
| node_traits |
| nodes |
| portgroups |
| ports |
| volume_connectors |
| volume_targets |
+-------------------------------+
15 rows in set (0.000 sec)
OK, so the first table shows that Ironic uses Alembic to manage migrations. Unlike the SQLAlchemy migrations table, you can’t just query this table to see how many migrations have been performed:
MariaDB [ironic]> select * from alembic_version;
+--------------+
| version_num |
+--------------+
| cf1a80fdb352 |
+--------------+
1 row in set (0.000 sec)
The script to start the API server is:
ironic-api -d --config-file etc/ironic/ironic.conf.local
Looking in the file requirements.txt, I see that the Web framework for Ironic is Pecan:
$ grep pecan requirements.txt
pecan!=1.0.2,!=1.0.3,!=1.0.4,!=1.2,>=1.0.0 # BSD
This is new to me. On Keystone, we converted from no framework to Flask. I’m guessing that if I look in the chain that starts with the ironic-api file, I will see a Pecan launcher for a web application. We can find that file with:
$ which ironic-api
/opt/stack/ironic/.tox/py3/bin/ironic-api
Looking in that file, it references ironic.cmd.api, which is the file ironic/cmd/api.py which in turn refers to ironic/common/wsgi_service.py. This in turn refers to ironic/api/app.py from which we can finally see that it imports pecan.
Now I am ready to run the two services. Like most of OpenStack, there is an API server and a “worker” server. In Ironic, this is called the Conductor. This maps fairly well to the Operator pattern in Kubernetes. In this pattern, the user makes changes to the API server via a web VERB on a URL, possibly with a body. These changes represent a desired state. The state change is then performed asynchronously. In OpenStack, the asynchronous communication is performed via a message queue, usually Rabbit MQ. The Ironic team has a simpler mechanism used for development; JSON RPC. This happens to be the same mechanism used in FreeIPA.
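For completeness, the conductor is started much like the API server; this is my assumption based on the developer quickstart rather than anything shown above:
ironic-conductor -d --config-file etc/ironic/ironic.conf.local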
OK, once I got the service running, I had to do a little fiddling around to get the command lines to work. There was an old reference to
OS_AUTH_TYPE=token_endpoint
which needed to be replaced with
OS_AUTH_TYPE=none
Both are in the documentation, but only the second one will work.
I can run the following commands:
$ baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| fake-hardware | ayoungP40 |
+---------------------+----------------+
$ baremetal node list
Let's see if I can figure out from curl what APIs those are… There is only one version, and one link, so:
curl http://127.0.0.1:6385 | jq '.versions | .[] | .links | .[] | .href'
"http://127.0.0.1:6385/v1/"
Doing curl against that returned link gives a list of the top level resources:
And I assume that, if I use curl to GET the drivers, I should see the fake driver entry from above:
$ curl "http://127.0.0.1:6385/v1/drivers" | jq '.drivers |.[] |.name'
"fake-hardware"
OK, that is enough to get started. I am going to try and do the same with the RPMs that we ship with OSP and see what I get there.
But that is a tale for another day.
I had a conversation with Julia Kreger, a long time core member of the Ironic project. This helped get me oriented.
Render changes to tripleo docs:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py
pip install tox
cd /home/stack/tripleo-docs
tox -e deploy-guide
Check syntax errors before wasting CI time:
tox -e linters
tox -e pep8
Run a specific unit test:
cd /home/stack/tripleo-common
tox -e py36 -- tripleo_common.tests.test_inventory.TestInventory.test_get_roles_by_service
cd /home/stack/tripleo-ansible
tox -e py36 -- tripleo_ansible.tests.modules.test_derive_hci_parameters.TestTripleoDeriveHciParameters
This is mostly a brain dump for myself for later reference, but may be also useful for others.
As I wrote in an earlier post, collectd is configured on OpenStack TripleO driven deployments by a config file.
parameter_defaults:
  CollectdExtraPlugins:
    - write_http
  ExtraConfig:
    collectd::plugin::write_http::nodes:
      collectd:
        url: collectd1.tld.org
        metrics: true
        header: foobar
The collectd exec plugin comes in handy when launching some third party script. However, the config may be a bit tricky; for example, to execute /usr/bin/true one would insert:
parameter_defaults:
  CollectdExtraPlugins:
    - exec
  ExtraConfig:
    collectd::plugin::exec::commands:
      healthcheck:
        user: "collectd"
        group: "collectd"
        exec: ["/usr/bin/true",]
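Either snippet would then be handed to the overcloud deployment as an extra environment file, along these lines (the file name is mine, and the rest of the deploy arguments are elided):
# assuming the parameters above were saved as ~/collectd-extra.yaml
openstack overcloud deploy --templates ... -e ~/collectd-extra.yaml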
Three incredible articles by Lars Kellogg-Stedman aka oddbit – mostly about adjustments and such made due to COVID-19. I hope you’re keeping safe at home, RDO Stackers! Wash your hands and enjoy these three fascinating articles about keyboards, arduino and machines that go ping…
Some thoughts on Mechanical Keyboards by oddbit
Since we’re all stuck in the house and working from home these days, I’ve had to make some changes to my home office. One change in particular was requested by my wife, who now shares our rather small home office space with me: after a week or so of calls with me clattering away on my old Das Keyboard 3 Professional in the background, she asked if I could get something that was maybe a little bit quieter.
Read more at https://blog.oddbit.com/post/2020-04-15-some-thoughts-on-mechanical-ke/
Grove Beginner Kit for Arduino (part 1) by oddbit
The folks at Seeed Studio have just released the Grove Beginner Kit for Arduino, and they asked if I would be willing to take a look at it in exchange for a free kit. At first glance it reminds me of the Radio Shack (remember when they were cool?) electronics kit I had when I was a kid – but somewhat more advanced. I’m excited to take a closer look, but given shipping these days means it’s probably a month away at least.
Read more at https://blog.oddbit.com/post/2020-04-15-grove-beginner-kit-for-arduino/
I see you have the machine that goes ping… by oddbit
We’re all looking for ways to keep ourselves occupied these days, and for me that means leaping at the chance to turn a small problem into a slightly ridiculous electronics project. For reasons that I won’t go into here I wanted to generate an alert when a certain WiFi BSSID becomes visible. A simple solution to this problem would have been a few lines of shell script to send me an email…but this article isn’t about simple solutions!
Read more at https://blog.oddbit.com/post/2020-03-20-i-see-you-have-the-machine-tha/
While a lot of RDO contributors are remote, there are many more who are not and now find themselves in lock down or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.
I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.
Communicate with the family to work out a schedule or join the call without video so you can still participate.
Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.
Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.
This will be an ongoing conversation that evolves as projects and situations evolve.
Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.
Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.
Some people NEED to physically be in the office around other people. Some will be totally content to work from home.
Sure, some things aren’t optional, but work with what you can.
Figure out what works for you.
Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.
Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.
For that matter, don’t forget to reach out to your friends and family.
Even introverts need to maintain a certain level of connection.
There’s a ton of information about working remotely / distributed productivity and this is, by no means, an exhaustive list, but to get you started:
Now let’s hear from you!
What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.
And, as always, thank you for being a part of the RDO community!
Oddbit writes two incredible articles – one about configuring passwordless consoles for raspberry pi and another about configuring open vswitch with nmcli while Carlos Camacho publishes Emilien Macchi’s deep dive demo on containerized deployment sans Paunch.
A passwordless serial console for your Raspberry Pi by oddbit
legendre on #raspbian asked:
How can i config rasp lite to open a shell on the serial uart on boot? Params are 1200-8-N-1 Dont want login running, just straight to sh
In this article, we’ll walk through one way of implementing this configuration.
Read more at https://blog.oddbit.com/post/2020-02-24-a-passwordless-serial-console/
TripleO deep dive session #14 (Containerized deployments without paunch) by Carlos Camacho
This is the 14th release of the TripleO “Deep Dive” sessions. Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.
Read more at https://www.anstack.com/blog/2020/02/18/tripleo-deep-dive-session-14.html
Configuring Open vSwitch with nmcli by oddbit
I recently acquired a managed switch for my home office in order to segment a few devices off onto their own isolated vlan. As part of this, I want to expose these vlans on my desktop using Open vSwitch (OVS), and I wanted to implement the configuration using NetworkManager rather than either relying on the legacy /etc/sysconfig/network-scripts scripts or rolling my own set of services. These are my notes in case I ever have to do this again.
Read more at https://blog.oddbit.com/post/2020-02-15-configuring-open-vswitch-with/
This is the 14th release of the TripleO “Deep Dive” sessions.
Thanks to Emilien Macchi for this deep dive session about the status of the containerized deployment without Paunch.
You can access the presentation.
So please, check the full session content on the TripleO YouTube channel.
Please check the sessions index to have access to all available content.
We’re super chuffed to see another THREE posts from our illustrious community – Adam Young talks about api port failure and speed bumps while Lars explores literate programming.
Shift on Stack: api_port failure by Adam Young
I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?
Read more at https://adam.younglogic.com/2020/01/shift-on-stack-api_port-failure/
Self Service Speedbumps by Adam Young
The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are 16 GB RAM, 4 Virtual CPUs, and 25 GB Disk Space. This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item specifically is an artificial limitation as you can always create an additional disk and mount it, but the installer does not know to do that.
Read more at https://adam.younglogic.com/2020/01/self-service-speedbumps/
Snarl: A tool for literate blogging by Lars Kellogg-Stedman
Literate programming is a programming paradigm introduced by Donald Knuth in which a program is combined with its documentation to form a single document. Tools are then used to extract the documentation for viewing or typesetting or to extract the program code so it can be compiled and/or run. While I have never been very enthusiastic about literate programming as a development methodology, I was recently inspired to explore these ideas as they relate to the sort of technical writing I do for this blog.
Read more at https://blog.oddbit.com/post/2020-01-15-snarl-a-tool-for-literate-blog/
I finally got a right-sized flavor for an OpenShift deployment: 25 GB Disk, 4 VCPU, 16 GB Ram. With that, I tore down the old cluster and tried to redeploy. Right now, the deploy is failing at the stage of the controller nodes querying the API port. What is going on?
Here is the reported error on the console:
The IP address of 10.0.0.5 is attached to the following port:
$ openstack port list | grep "0.0.5"
| da4e74b5-7ab0-4961-a09f-8d3492c441d4 | demo-2tlt4-api-port | fa:16:3e:b6:ed:f8 | ip_address='10.0.0.5', subnet_id='50a5dc8e-bc79-421b-aa53-31ddcb5cf694' | DOWN |
That final “DOWN” is the port state. It is also showing as detached. It is on the internal network:
Looking at the installer code, the one place I can find a reference to the api_port is in the template data/data/openstack/topology/private-network.tf used to build the value openstack_networking_port_v2. This value is used quite heavily in the rest of the installer's Go code.
Looking in the terraform data built by the installer, I can find references to both the api_port and openstack_networking_port_v2. Specifically, there are several object of type openstack_networking_port_v2 with the names:
$ cat moc/terraform.tfstate | jq -jr '.resources[] | select( .type == "openstack_networking_port_v2") | .name, ", ", .module, "\n" '
api_port, module.topology
bootstrap_port, module.bootstrap
ingress_port, module.topology
masters, module.topology
On a baremetal install, we need an explicit A record for api-int.<cluster_name>.<base_domain>. That requirement does not exist for OpenStack, however, and I did not have one the last time I installed.
api-int is the internal access to the API server. Since the controllers are hanging trying to talk to it, I assume that we are still at the stage where we are building the control plane, and that it should be pointing at the bootstrap server. However, since the port above is detached, traffic cannot get there. There are a few hypotheses in my head right now:
I’m leaning toward 3 right now.
The install-config.yaml has the line:
octaviaSupport: "1"
But I don’t think any Octavia resources are being used.
$ openstack loadbalancer pool list
$ openstack loadbalancer list
$ openstack loadbalancer flavor list
Not Found (HTTP 404) (Request-ID: req-fcf2709a-c792-42f7-b711-826e8bfa1b11)
The OpenShift installer is fairly specific in what it requires, and will not install into a virtual machine that does not have sufficient resources. These limits are 16 GB RAM, 4 Virtual CPUs, and 25 GB of disk space.
This is fairly frustrating if your cloud provider does not give you a flavor that matches this. The last item specifically is an artificial limitation as you can always create an additional disk and mount it, but the installer does not know to do that.
In my case, there is a flavor that almost matches; it has 10 GB of Disk space instead of the required 25. But I cannot use it.
Instead, I have to use a larger flavor that has double the VCPUs, and thus eats up more of my VCPU quota….to the point that I cannot afford more than 4 Virtual machines of this size, and thus cannot create more than one compute node; OpenShift needs 3 nodes for the control plane.
I do not have permissions to create a flavor on this cloud. Thus, my only option is to open a ticket. Which has to be reviewed and acted upon by an administrator. Not a huge deal.
This is how self service breaks down. A non-security decision (linking disk size with the other characteristics of a flavor) plus access control rules that prevent end users from customizing. So the end user waits for a human to respond.
In my case, that means that I have to provide an alternative place to host my demonstration, just in case things don’t happen in time. Which costs my organization money.
This is not a ding on my cloud provider. They have the same OpenStack API as anyone else deploying OpenStack.
This is not a ding on Keystone; create flavor is not a project scoped operation, so I can’t even blame my favorite bug.
This is not a ding on the Nova API. It is reasonable to reserve the ability to create Flavors to system administrators, and, if instances have storage attached, to provide it in reasonably sized chunks.
My problem just falls at the junction of several different zones of responsibility. It is the overlap that causes the pain in this case. This is not unusual.
Would it be possible to have a more granular API, like “create customer flavor” that built a flavor out of pre-canned parts and sizes? Probably. That would solve my problem. I don’t know if this is a general problem, though.
This does seem like it is something that could be addressed by a GitOps type approach. In order to perform an operation like this, I should be able to issue a command that gets checked in to git, confirmed, and posted for code review. An administrator could then confirm or provide an alternative approach. This happens in the ticketing system. It is human-resource-intensive. If no one says “yes”, the default is no…and the thing just sits there.
What would be a better long term solution? I don’t know. I’m going to let this idea set for a while.
What do you think?
Welcome to the new DECADE! It was super awesome to run the blog script and see not one, not two, but THREE new articles by the amazing Adam Young who tinkered with Keystone, TripleO, and containers over the break. And while Lars only wrote one article, it’s the ultimate guide to the Open Virtual Network within OpenStack. Sit back, relax, and inhale four great articles from the RDO Community.
Running the TripleO Keystone Container in OpenShift by Adam Young
Now that I can run the TripleO version of Keystone via podman, I want to try running it in OpenShift.
Read more at https://adam.younglogic.com/2019/12/running-the-tripleo-keystone-container-in-openshift/
Official TripleO Keystone Images by Adam Young
My recent forays into running containerized Keystone images have been based on a Centos base image with RPMs installed on top of it. But TripleO does not run this way; it runs via containers. Some notes as I look into them.
Read more at https://adam.younglogic.com/2019/12/official-tripleo-keystone-images/
OVN and DHCP: A minimal example by Lars Kellogg-Stedman
Introduction A long time ago, I wrote an article all about OpenStack Neutron (which at that time was called Quantum). That served as an excellent reference for a number of years, but if you’ve deployed a recent version of OpenStack you may have noticed that the network architecture looks completely different. The network namespaces previously used to implement routers and dhcp servers are gone (along with iptables rules and other features), and have been replaced by OVN (“Open Virtual Network”).
Read more at https://blog.oddbit.com/post/2019-12-19-ovn-and-dhcp/
keystone-db-init in OpenShift by Adam Young
Before I can run Keystone in a container, I need to initialize the database. This is as true for running in Kubernetes as it was using podman. Here’s how I got keystone-db-init to work.
Read more at https://adam.younglogic.com/2019/12/keystone-db-init-in-openshift/
Now that I can run the TripleO version of Keystone via podman, I want to try running it in OpenShift.
Here is my first hack at a deployment yaml. Note that it looks really similar to the keystone-db-init I got to run the other day.
If I run it with:
oc create -f keystone-pod.yaml
I get a CrashLoopBackoff error, with the following from the logs:
$ oc logs pod/keystone-api
sudo -E kolla_set_configs
sudo: unable to send audit message: Operation not permitted
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
ERROR:__main__:Unexpected error:
Traceback (most recent call last):
File "/usr/local/bin/kolla_set_configs", line 412, in main
config = load_config()
File "/usr/local/bin/kolla_set_configs", line 294, in load_config
config = load_from_file()
File "/usr/local/bin/kolla_set_configs", line 282, in load_from_file
with open(config_file) as f:
IOError: [Errno 2] No such file or directory: '/var/lib/kolla/config_files/config.json'
I modified the config.json to remove steps that were messing me up. I think I can now remove even that last config file, but I left it for now.
{
  "command": "/usr/sbin/httpd",
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/src/*",
      "dest": "/",
      "merge": true,
      "preserve_properties": true
    }
  ],
  "permissions": [
    {
      "path": "/var/log/kolla/keystone",
      "owner": "keystone:keystone",
      "recurse": true
    }
  ]
}
I need to add the additional files to a config map and mount those inside the container. For example, I can create a config map with the config.json file, a secret for the Fernet key, and a config map for the apache files.
oc create configmap keystone-files --from-file=config.json=./config.json
kubectl create secret generic keystone-fernet-key --from-file=../kolla/src/etc/keystone/fernet-keys/0
oc create configmap keystone-httpd-files --from-file=wsgi-keystone.conf=../kolla/src/etc/httpd/conf.d/wsgi-keystone.conf
Here is my final pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: keystone-api
  labels:
    app: myapp
spec:
  containers:
  - image: docker.io/tripleomaster/centos-binary-keystone:current-tripleo
    imagePullPolicy: Always
    name: keystone
    env:
    - name: KOLLA_CONFIG_FILE
      value: "/var/lib/kolla/config_files/src/config.json"
    - name: KOLLA_CONFIG_STRATEGY
      value: "COPY_ONCE"
    volumeMounts:
    - name: keystone-conf
      mountPath: "/etc/keystone/"
    - name: httpd-config
      mountPath: "/etc/httpd/conf.d"
    - name: config-json
      mountPath: "/var/lib/kolla/config_files/src"
    - name: keystone-fernet-key
      mountPath: "/etc/keystone/fernet-keys/0"
  volumes:
  - name: keystone-conf
    secret:
      secretName: keystone-conf
      items:
      - key: keystone.conf
        path: keystone.conf
        mode: 511
  - name: keystone-fernet-key
    secret:
      secretName: keystone-fernet-key
      items:
      - key: "0"
        path: "0"
        mode: 511
  - name: config-json
    configMap:
      name: keystone-files
  - name: httpd-config
    configMap:
      name: keystone-httpd-files
And show that it works for basic stuff:
$ oc rsh keystone-api
sh-4.2# curl 10.131.1.98:5000
{"versions": {"values": [{"status": "stable", "updated": "2019-07-19T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.13", "links": [{"href": "http://10.131.1.98:5000/v3/", "rel": "self"}]}]}}curl (HTTP://10.131.1.98:5000/): response: 300, time: 3.314, size: 266
Next steps: expose a route, make sure we can get a token.
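As a rough idea of that next step (not something done in this post), exposing the pod might look like:
oc expose pod keystone-api --port=5000
oc expose service keystone-api
oc get route keystone-api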
My recent forays into running containerized Keystone images have been based on a Centos base image with RPMs installed on top of it. But TripleO does not run this way; it runs via containers. Some notes as I look into them.
The official containers for TripleO are currently hosted on docker.com. The Keystone page is here:
Don’t expect the docker pull command posted on that page to work. I tried a comparable one with podman and got:
$ podman pull tripleomaster/centos-binary-keystone
Trying to pull docker.io/tripleomaster/centos-binary-keystone...
manifest unknown: manifest unknown
Trying to pull registry.fedoraproject.org/tripleomaster/centos-binary-keystone...
And a few more lines of error output. Thanks to Emilien M, I was able to get the right command:
$ podman pull tripleomaster/centos-binary-keystone:current-tripleo
Trying to pull docker.io/tripleomaster/centos-binary-keystone:current-tripleo...
Getting image source signatures
...
Copying config 9e85172eba done
Writing manifest to image destination
Storing signatures
9e85172eba10a2648ae7235076ada77b095ed3da05484916381410135cc8884c
Since I did this as a normal account, and not as root, the image does not get stored under /var, but instead goes somewhere under $HOME/.local. If I type
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/tripleomaster/centos-binary-keystone current-tripleo 9e85172eba10 2 days ago 904 MB
I can see the short form of the hash starting with 9e85. I copy that to match the subdirectory under /home/ayoung/.local/share/containers/storage/overlay-images:
ls /home/ayoung/.local/share/containers/storage/overlay-images/9e85172eba10a2648ae7235076ada77b095ed3da05484916381410135cc8884c/
If I cat that file, I can see all of the layers that make up the image itself.
Trying a naive podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo, I get an error that shows just how kolla-centric this image is:
$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo
+ sudo -E kolla_set_configs
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
ERROR:__main__:Unexpected error:
Traceback (most recent call last):
File "/usr/local/bin/kolla_set_configs", line 412, in main
config = load_config()
File "/usr/local/bin/kolla_set_configs", line 294, in load_config
config = load_from_file()
File "/usr/local/bin/kolla_set_configs", line 282, in load_from_file
with open(config_file) as f:
IOError: [Errno 2] No such file or directory: '/var/lib/kolla/config_files/config.json'
So I read the docs. Trying to fake it with:
$ podman run -e KOLLA_CONFIG='{}' docker.io/tripleomaster/centos-binary-keystone:current-tripleo
+ sudo -E kolla_set_configs
INFO:__main__:Validating config file
ERROR:__main__:InvalidConfig: Config is missing required "command" key
When running with TripleO, the config files are generated from Heat Templates. The values for the config.json come from here.
This gets me slightly closer:
podman run -e KOLLA_CONFIG_STRATEGY=COPY_ONCE -e KOLLA_CONFIG='{"command": "/usr/sbin/httpd"}' docker.io/tripleomaster/centos-binary-keystone:current-tripleo
But I still get an error of “no listening sockets available, shutting down”, even if I try this as root. Below is the whole thing I tried to run.
$ podman run -v $PWD/fernet-keys:/var/lib/kolla/config_files/src/etc/keystone/fernet-keys -e KOLLA_CONFIG_STRATEGY=COPY_ONCE -e KOLLA_CONFIG='{ "command": "/usr/sbin/httpd", "config_files": [ { "source": "/var/lib/kolla/config_files/src/etc/keystone/fernet-keys", "dest": "/etc/keystone/fernet-keys", "owner":"keystone", "merge": false, "perm": "0600" } ], "permissions": [ { "path": "/var/log/kolla/keystone", "owner": "keystone:keystone", "recurse": true } ] }' docker.io/tripleomaster/centos-binary-keystone:current-tripleo
Let's go back to simple things. What is inside the container? We can peek using:
$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls
Basically, we can perform any command that will not last longer than the failed kolla initialization. No Bash prompts, but shorter single-line bash commands work. We can see that mysql is uninitialized:
podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/keystone/keystone.conf | grep "connection ="
#connection =
What about those config files that the initialization wants to copy?
podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls /var/lib/kolla/config_files/src/etc/httpd/conf.d
ls: cannot access /var/lib/kolla/config_files/src/etc/httpd/conf.d: No such file or directory
So all that comes from external to the container, and is mounted at run time.
$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/passwd | grep keystone
keystone:x:42425:42425::/var/lib/keystone:/usr/sbin/nologin
This user owns the config and the log files.
$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls -la /var/log/keystone
total 8
drwxr-x---. 2 keystone keystone 4096 Dec 17 08:28 .
drwxr-xr-x. 6 root root 4096 Dec 17 08:28 ..
-rw-rw----. 1 root keystone 0 Dec 17 08:28 keystone.log
$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls -la /etc/keystone
total 128
drwxr-x---. 2 root keystone 4096 Dec 17 08:28 .
drwxr-xr-x. 2 root root 4096 Dec 19 16:30 ..
-rw-r-----. 1 root keystone 2303 Nov 12 02:15 default_catalog.templates
-rw-r-----. 1 root keystone 104220 Dec 14 01:09 keystone.conf
-rw-r-----. 1 root keystone 1046 Nov 12 02:15 logging.conf
-rw-r-----. 1 root keystone 3 Dec 14 01:09 policy.json
-rw-r-----. 1 keystone keystone 665 Nov 12 02:15 sso_callback_template.html
$ podman run docker.io/tripleomaster/centos-binary-keystone:current-tripleo cat /etc/keystone/policy.json
{}
Yes, policy.json is empty.
Let's go back to the config file. I would rather not have to pass in all the config info as an environment variable each time. If I run as root, I can use the podman bind-mount option to relabel it:
podman run -e KOLLA_CONFIG_FILE=/config.json -e KOLLA_CONFIG_STRATEGY=COPY_ONCE -v $PWD/config.json:/config.json:z docker.io/tripleomaster/centos-binary-keystone:current-tripleo
This eventually fails with the error message “no listening sockets available, shutting down”, which seems to be due to the lack of the httpd.conf entries for Keystone:
# podman run -e KOLLA_CONFIG_FILE=/config.json -e KOLLA_CONFIG_STRATEGY=COPY_ONCE -v $PWD/config.json:/config.json:z docker.io/tripleomaster/centos-binary-keystone:current-tripleo ls /etc/httpd/conf.d
auth_mellon.conf
auth_openidc.conf
autoindex.conf
README
ssl.conf
userdir.conf
welcome.conf
The clue seems to be in the Heat Templates. There are a bunch of files that are expected to be in /var/lib/kolla/config_files/src inside the container. Here’s my version of the WSGI config file:
Listen 5000
Listen 35357

ServerSignature Off
ServerTokens Prod
TraceEnable off

ErrorLog "/var/log/kolla/keystone/apache-error.log"
CustomLog "/var/log/kolla/keystone/apache-access.log" common
LogLevel info

<Directory "/usr/bin">
    <FilesMatch "^keystone-wsgi-(public|admin)$">
        AllowOverride None
        Options None
        Require all granted
    </FilesMatch>
</Directory>

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} python-path=/usr/lib/python2.7/site-packages
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog "/var/log/kolla/keystone/keystone-apache-public-error.log"
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
    CustomLog "/var/log/kolla/keystone/keystone-apache-public-access.log" logformat
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} python-path=/usr/lib/python2.7/site-packages
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog "/var/log/kolla/keystone/keystone-apache-admin-error.log"
    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
    CustomLog "/var/log/kolla/keystone/keystone-apache-admin-access.log" logformat
</VirtualHost>
So with a directory structure like this:
[root@ayoungP40 kolla]# find src/ -print
src/
src/etc
src/etc/keystone
src/etc/keystone/fernet-keys
src/etc/keystone/fernet-keys/1
src/etc/keystone/fernet-keys/0
src/etc/httpd
src/etc/httpd/conf.d
src/etc/httpd/conf.d/wsgi-keystone.conf
And a Kolla config.json file like this:
{
  "command": "/usr/sbin/httpd",
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/src/etc/keystone/fernet-keys",
      "dest": "/etc/keystone/fernet-keys",
      "merge": false,
      "preserve_properties": true
    },{
      "source": "/var/lib/kolla/config_files/src/etc/httpd/conf.d",
      "dest": "/etc/httpd/conf.d",
      "merge": false,
      "preserve_properties": true
    },{
      "source": "/var/lib/kolla/config_files/src/*",
      "dest": "/",
      "merge": true,
      "preserve_properties": true
    }
  ],
  "permissions": [
    {
      "path": "/var/log/kolla/keystone",
      "owner": "keystone:keystone",
      "recurse": true
    }
  ]
}
I can run Keystone like this:
podman run -e KOLLA_CONFIG_FILE=/config.json -e KOLLA_CONFIG_STRATEGY=COPY_ONCE -v $PWD/config.json:/config.json:z -v $PWD/src:/var/lib/kolla/config_files/src:z docker.io/tripleomaster/centos-binary-keystone:current-tripleo
Before I can run Keystone in a container, I need to initialize the database. This is as true for running in Kubernetes as it was using podman. Here’s how I got keystone-db-init to work.
The general steps were:
oc delete deploymentconfig.apps.openshift.io/keystone-db-in
To upload the secret:
kubectl create secret generic keystone-conf --from-file=../keystone-db-init/keystone.conf
Here is the yaml definition for the pod:
apiVersion: v1
kind: Pod
metadata:
  name: keystone-db-init-pod
  labels:
    app: myapp
spec:
  containers:
  - image: image-registry.openshift-image-registry.svc:5000/keystone/keystone-db-init
    imagePullPolicy: Always
    name: keystone-db-init
    command: ['sh', '-c', 'cat /etc/keystone/keystone.conf']
    volumeMounts:
    - name: keystone-conf
      mountPath: "/etc/keystone/"
  volumes:
  - name: keystone-conf
    secret:
      secretName: keystone-conf
      items:
      - key: keystone.conf
        path: keystone.conf
        mode: 511
While this is running as the keystone unix account, I am not certain how that happened. I did use the patch command I talked about earlier on the deployment config, but you can see I am not using that in this pod. That is something I need to straighten out.
To test that the database was initialized:
$ oc get pods -l app=mariadb-keystone
NAME READY STATUS RESTARTS AGE
mariadb-keystone-1-rxgvs 1/1 Running 0 9d
$ oc rsh mariadb-keystone-1-rxgvs
sh-4.2$ mysql -h mariadb-keystone -u keystone -pkeystone keystone
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 908
Server version: 10.2.22-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [keystone]> show tables;
+------------------------------------+
| Tables_in_keystone |
+------------------------------------+
| access_rule |
| access_token |
....
+------------------------------------+
46 rows in set (0.00 sec)
I’ve fooled myself in the past thinking that things have worked when they have not. To make sure I am not doing that now, I dropped the keystone database and recreated it from inside the mysql monitor program. I then re-ran the pod, and was able to see all of the tables.
We’re super chuffed that there’s already another article to read in our weekly blog round up – as we said before, if you write it, we’ll help others see it! But if you don’t write it, well, there’s nothing to set sail. Let’s hear about your latest adventures on the Ussuri river and if you’re NOT in our database, you CAN be by creating a pull request to https://github.com/redhat-openstack/website/blob/master/planet.ini.
Reading keystone.conf in a container by Adam Young
Step 3 of the 12 Factor app is to store config in the environment. For Keystone, the set of configuration options is controlled by the keystone.conf file. In an earlier attempt at containerizing the scripts used to configure Keystone, I had passed an environment variable in to the script that would then be written to the configuration file. I realize now that I want the whole keystone.conf external to the application. This allow me to set any of the configuration options without changing the code in the container. More importantly, it allows me to make the configuration information immutable inside the container, so that the applications cannot be hacked to change their own configuration options.
Read more at https://adam.younglogic.com/2019/12/reading-keystone-conf-in-a-container/
Step 3 of the 12 Factor app is to store config in the environment. For Keystone, the set of configuration options is controlled by the keystone.conf file. In an earlier attempt at containerizing the scripts used to configure Keystone, I had passed an environment variable in to the script that would then be written to the configuration file. I realize now that I want the whole keystone.conf external to the application. This allows me to set any of the configuration options without changing the code in the container. More importantly, it allows me to make the configuration information immutable inside the container, so that the applications cannot be hacked to change their own configuration options.
I was running the pod and mounting the local copy I had of the keystone.conf file using this command line:
podman run --mount type=bind,source=/home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf,destination=/etc/keystone/keystone.conf:Z --add-host keystone-mariadb:10.89.0.47 --network maria-bridge -it localhost/keystone-db-init
It was returning with no output. To diagnose, I added on /bin/bash to the end of the command so I could poke around inside the running container before it exited.
podman run --mount /home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf:/etc/keystone/keystone.conf --add-host keystone-mariadb:10.89.0.47 --network maria-bridge -it localhost/keystone-db-init /bin/bash
Once inside, I was able to look at the keystone log file. A stack trace made me realize that I was not able to actually read the file /etc/keystone/keystone.conf. Using ls, it would show up like this:
-?????????? ? ? ? ? ? keystone.conf:
It took a lot of trial and error to rectify it, including:
The end command looked like this:
podman run -v /home/ayoung/devel/container-keystone/keystone-db-init/keystone.conf:/etc/keystone/keystone.conf:Z -u keystone --add-host keystone-mariadb:10.89.0.47 --network maria-bridge -it localhost/keystone-db-init
Once I had it correct, I could use the /bin/bash executable to again poke around inside the container. From the inside, I could run:
$ keystone-manage db_version
109
$ mysql -h keystone-mariadb -ukeystone -pkeystone keystone -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
+--------------------+
Next up is to try this with OpenShift.