
OpenStack – Take 2 – The Keystone Identity Service

Keystone is, more or less, the glue that ties OpenStack together.  It's required before any of the individual services can be installed and function as a whole.

Fortunately for us, keystone is basically just a REST API, so it’s very easy to make redundant and there isn’t a whole lot to it.

We’ll start by installing keystone and the python mysql client on all three controller nodes:

apt-get install keystone python-mysqldb
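
If you want to double-check the install on each node before moving on, dpkg can confirm both packages are present:

dpkg -l keystone python-mysqldb | grep ^ii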

Once that's done, we need a base configuration for keystone.  The config file ships with a lot of default options, but for now we only care about giving it an admin token and connecting it to our DB and message queue.  Also, because we're colocating our load balancers on the controller nodes (something which clearly wouldn't be done in production), we're shifting the ports keystone binds to so that the real ports are available to HAProxy.  (Each default port is incremented by 10000.)  Everything else is left at its default value.

/etc/keystone/keystone.conf: (Note – the commented-out default config is left in the file, but not shown here)


[DEFAULT]
admin_token=ADMIN
# The port number which the admin service listens on.
admin_port=45357
# The port number which the public service listens on.
public_port=15000
# RabbitMQ HA cluster host:port pairs. (list value)
rabbit_hosts=10.1.1.20:5672,10.1.1.21:5672,10.1.1.22:5672
[database]
connection = mysql://keystone:openstack@10.1.1.10/keystone

We'll then copy this configuration file to /etc/keystone/keystone.conf on each of the other controller nodes.  (There is no node-specific information in our configuration, but if any explicit IP binds or similar host-specific statements are made, those obviously need to change from node to node.)
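
One way to do that, assuming root SSH access between the controllers (true in our lab; adjust for your environment):

for node in 10.1.1.21 10.1.1.22; do
    scp /etc/keystone/keystone.conf root@${node}:/etc/keystone/keystone.conf
done
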
Now that we have the config files in place, we can create the DB and DB user, then start the keystone service and populate its DB tables.  (We'll be doing all of this from the first controller node.)

root@controller-0:~# mysql -u root -popenstack -h 10.1.1.10
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 122381
Server version: 5.5.38-MariaDB-1~trusty-wsrep-log mariadb.org binary distribution, wsrep_25.10.r3997

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL ON keystone.* to keystone@'%' IDENTIFIED BY 'openstack';
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

root@controller-0:~# service keystone restart
keystone stop/waiting
keystone start/running, process 21313
root@controller-0:~# keystone-manage db_sync
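
As a sanity check that db_sync actually built the schema, we can list the tables it created; expect to see entries like user, token, service, and endpoint:

mysql -u keystone -popenstack -h 10.1.1.10 keystone -e "SHOW TABLES;"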

Once the initial DB has been populated, we want to copy the SSL certificates from the first keystone node to the other two.  Copy the entire contents of /etc/keystone/ssl to the other two nodes, and make sure the directories and their files are chowned to keystone:keystone.
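
From the first node, something like the loop below handles both the copy and the ownership fix (again assuming root SSH between the controllers):

for node in 10.1.1.21 10.1.1.22; do
    rsync -a /etc/keystone/ssl/ root@${node}:/etc/keystone/ssl/
    ssh root@${node} chown -R keystone:keystone /etc/keystone/ssl
done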

We can then restart the keystone service on the 2nd and 3rd nodes with "service keystone restart", and we should have our keystone nodes listening on the custom ports and ready for HAProxy configuration.
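
A quick port check on each node (purely an optional sanity step) confirms keystone is bound where we expect before we put HAProxy in front of it:

netstat -tlnp | grep -E ':(15000|45357)'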

Because this API is accessible from both the public and management interfaces, we'll need HAProxy to listen on multiple networks this time:

/etc/haproxy/haproxy.cfg – (Note: We're adding this to the bottom of the file)


listen keystone_admin_private 10.1.1.10:35357
    balance source
    option tcpka
    option httpchk
    maxconn 10000
    server controller-0 10.1.1.20:45357 check inter 2000 rise 2 fall 5
    server controller-1 10.1.1.21:45357 check inter 2000 rise 2 fall 5
    server controller-2 10.1.1.22:45357 check inter 2000 rise 2 fall 5

listen keystone_api_private 10.1.1.10:5000
    balance source
    option tcpka
    option httpchk
    maxconn 10000
    server controller-0 10.1.1.20:15000 check inter 2000 rise 2 fall 5
    server controller-1 10.1.1.21:15000 check inter 2000 rise 2 fall 5
    server controller-2 10.1.1.22:15000 check inter 2000 rise 2 fall 5

listen keystone_admin_public 192.168.243.10:35357
    balance source
    option tcpka
    option httpchk
    maxconn 10000
    server controller-0 192.168.243.11:45357 check inter 2000 rise 2 fall 5
    server controller-1 192.168.243.12:45357 check inter 2000 rise 2 fall 5
    server controller-2 192.168.243.13:45357 check inter 2000 rise 2 fall 5

listen keystone_api_public 192.168.243.10:5000
    balance source
    option tcpka
    option httpchk
    maxconn 10000
    server controller-0 192.168.243.11:15000 check inter 2000 rise 2 fall 5
    server controller-1 192.168.243.12:15000 check inter 2000 rise 2 fall 5
    server controller-2 192.168.243.13:15000 check inter 2000 rise 2 fall 5

We then reload haproxy on all 3 nodes with "service haproxy reload" and check the HAProxy statistics page to confirm the new keystone services are up and detected by the load balancer.
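
If you'd rather verify from the shell than the stats page, hitting keystone through each VIP works too; keystone answers its root URL with a version document, so any JSON response here means HAProxy handed the request to a live backend:

curl http://10.1.1.10:5000
curl http://192.168.243.10:5000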

The last step for keystone is creating the users, services, and endpoints that tie everything together.  There are numerous keystone deployment scripts available online, so we picked one and modified it for our needs.  One thing of note is that we need to differentiate between the public and admin URLs and the internal URLs, which run on our management network.

We've left the object storage and networking (quantum/neutron) services out for now, as we'll be addressing those in a later article.  Since we know we're going to be using Glance and Cinder as the image and volume services, we created those now.

A copy of our keystone deployment script can be found here:  keystone_deploy.sh
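
The script authenticates with the admin token (via the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables) since no users exist yet.  As an illustration of the URL split, the sketch below shows roughly what the identity service and endpoint creation look like; the awk-based ID capture is just our convention, not anything keystone requires:

# Bootstrap auth: the client honors these when no user credentials exist yet
export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://192.168.243.10:35357/v2.0

# Create the identity service entry and capture its ID from the output table
SERVICE_ID=$(keystone service-create --name=keystone --type=identity \
    --description="OpenStack Identity" | awk '/ id / {print $4}')

# Public/admin URLs ride the public VIP; the internal URL stays on the management VIP
keystone endpoint-create --region regionOne --service-id=${SERVICE_ID} \
    --publicurl=http://192.168.243.10:5000/v2.0 \
    --internalurl=http://10.1.1.10:5000/v2.0 \
    --adminurl=http://192.168.243.10:35357/v2.0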

We also need to add keystone credentials to the servers we’ll be issuing keystone and other OpenStack commands from.  We’ll place this file on all three controllers for now:

~/.openstack_credentials


export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.243.10:35357/v2.0

We’ll load that into our environment now and on next login with the following commands:

source ~/.openstack_credentials
echo ". ~/.openstack_credentials" >> ~/.profile
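
With the credentials loaded, an easy end-to-end test is requesting a token through the load-balanced admin endpoint; if this prints a token table instead of an error, keystone, HAProxy, and the database are all cooperating:

keystone token-get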

Now we can confirm that our keystone users, services, and endpoints are in place and ready to go:


root@controller-0:~# keystone user-list
+----------------------------------+--------+---------+-------------------+
|                id                |  name  | enabled |       email       |
+----------------------------------+--------+---------+-------------------+
| c8c5f82c2368445398ef75bd209dded1 | admin  |   True  |  admin@domain.com |
| 9b6461349428440b9008cc17bdf9aaf5 | cinder |   True  | cinder@domain.com |
| e6793ca5c3c94918be70010b58653428 | glance |   True  | glance@domain.com |
| d2c0dbfba9ae405d8f803df878afb505 |  nova  |   True  |  nova@domain.com  |
| a0dd7577399a49008a1e5aa35be56065 |  test  |   True  |  test@domain.com  |
+----------------------------------+--------+---------+-------------------+
root@controller-0:~# keystone service-list
+----------------------------------+----------+----------+---------------------------+
|                id                |   name   |   type   |        description        |
+----------------------------------+----------+----------+---------------------------+
| 42a44c9b7b374302af2d2b998376665e |  cinder  |  volume  |  OpenStack Volume Service |
| 163f251efd474459aaf6edb0e766e53d |   ec2    |   ec2    |   OpenStack EC2 service   |
| d734fbd95ec04ade9b680010511d716a |  glance  |  image   |  OpenStack Image Service  |
| c9d70e0f77ed42b1a8b96c51eadb6d20 | keystone | identity |     OpenStack Identity    |
| 8cf8f2b113054a7cb29203e3c31a3ef4 |   nova   | compute  | OpenStack Compute Service |
+----------------------------------+----------+----------+---------------------------+
root@controller-0:~# keystone endpoint-list
+----------------------------------+-----------+---------------------------------------------+----------------------------------------+---------------------------------------------+----------------------------------+
|                id                |   region  |                  publicurl                  |              internalurl               |                   adminurl                  |            service_id            |
+----------------------------------+-----------+---------------------------------------------+----------------------------------------+---------------------------------------------+----------------------------------+
| 127fa3f046c142c5a83122c68ac9ae79 | regionOne |          http://192.168.243.10:9292         |         http://10.1.1.10:9292          |          http://192.168.243.10:9292         | d734fbd95ec04ade9b680010511d716a |
| 23c84da682614d4db00a8fccba5550b7 | regionOne | http://192.168.243.10:8774/v2/$(tenant_id)s | http://10.1.1.10:8774/v2/$(tenant_id)s | http://192.168.243.10:8774/v2/$(tenant_id)s | 8cf8f2b113054a7cb29203e3c31a3ef4 |
| 29ce6f0c712b499d9537e861d40846d5 | regionOne |  http://192.168.243.10:8773/services/Cloud  |  http://10.1.1.10:8773/services/Cloud  |  http://192.168.243.10:8773/services/Admin  | 163f251efd474459aaf6edb0e766e53d |
| a4a8e4d6fb9548b4b59ef335581c907b | regionOne |       http://192.168.243.10:5000/v2.0       |       http://10.1.1.10:5000/v2.0       |       http://192.168.243.10:35357/v2.0      | c9d70e0f77ed42b1a8b96c51eadb6d20 |
| f7e663f609a440a9985e30efc1a2c7cf | regionOne | http://192.168.243.10:8776/v1/$(tenant_id)s | http://10.1.1.10:8776/v1/$(tenant_id)s | http://192.168.243.10:8776/v1/$(tenant_id)s | 42a44c9b7b374302af2d2b998376665e |
+----------------------------------+-----------+---------------------------------------------+----------------------------------------+---------------------------------------------+----------------------------------+

With keystone up and running, we’ll take a little detour to talk about storage in the next article.