

How to deploy Airship in a Bottle: A quick and dirty guide

Nick Chase - October 07, 2019
Airship is designed to enable you to reliably deploy OpenStack on Kubernetes, but with both of those systems being fairly complicated to deploy, a system that combines them can be downright confusing. Fortunately the community has created Airship in a Bottle, a simple way to create a deployment so you can get an idea of what's going on.
In this guide, we will take a look at Airship in a Bottle and what it gives you.

Deploying Airship in a Bottle

Airship in a Bottle, or AIAB, is designed to evaluate your environment and determine most of the information it needs, so you don't have to worry about things like determining IP addresses and network interfaces.  All you need to do is download the repository and run the script. Let's get started.
  1. Get suitable "hardware".  The instructions will tell you that you need a fresh Ubuntu 16.04 VM with a minimum of 4 vCPU/20GB RAM/32GB disk, but this is somewhat outdated. To avoid problems, you will actually need a fresh Ubuntu 16.04 VM with 8 vCPU/32GB RAM.

    Note that there's a reason the instructions specify a VM: AIAB is designed to create a "disposable" environment. It makes changes to the entire system, so you will not be able to run the script reliably more than once. By creating a VM, you can simply dispose of it and start over if you want to try again. (One way to script the VM's creation is sketched just after these steps.)
  2. Log into the system and change to root:
    sudo -i
  3. Create the deployment directory and download the software from the Treasuremap repository:
    mkdir -p /root/deploy && cd "$_"
    git clone https://opendev.org/airship/treasuremap/
    This repo actually contains multiple options, but we're going to concentrate on AIAB for today.
  4. Run the creation script:
    cd /root/deploy/treasuremap/tools/deployment/aiab/
    ./airship-in-a-bottle.sh
  5. Answer the script's questions.  Unless you have a good reason, it's probably best to just accept the defaults.  (After all, that's what the script was designed for.)
    Welcome to Airship in a Bottle
    
     /--------------------\
    |                      \
    |        |---|          \----
    |        | x |               \
    |        |---|               |
    |          |                /
    |     \____|____/      /----
    |                      /
     \--------------------/
    A prototype example of deploying the Airship suite on a single VM.
    This example will run through:
      - Setup
      - Genesis of Airship (Kubernetes)
      - Basic deployment of Openstack (including Nova, Neutron, and Horizon using Openstack Helm)
      - VM creation automation using Heat
    The expected runtime of this script is greater than 1 hour
    The minimum recommended size of the Ubuntu 16.04 VM is 4 vCPUs, 20GB of RAM with 32GB disk space.
    Let's collect some information about your VM to get started.
    Is your HOST IFACE ens4? (Y/n)
    Is your LOCAL IP 10.128.0.39? (Y/n) Y
  6. Make some coffee, play with your kids, go get some fresh air ... this is going to take an hour or so.
  7. Eventually, the script will return information on the deployed installation:
    ...
    OpenStack Horizon dashboard is available on this host at the following URL:
     
     http://10.128.0.47:31309
    Credentials:
      Domain: default
      Username: admin
      Password: password123
    OpenStack CLI commands could be launched via `./openstack` script, e.g.:
      # cd /root/deploy/treasuremap/..//treasuremap/tools/
      # ./openstack stack list
      ...
    Airship itself does not have a dashboard.
    Other endpoints and credentials are listed in the following locations:
      /root/deploy/treasuremap/..//treasuremap/site/aiab/secrets/passphrases/
    Exposed ports of services can be listed with the following command:
      # kubectl get services --all-namespaces | grep -v ClusterIP
      ...
    + your_next_steps
    + set +x
    ---------------------------------------------------------------
    Airship has completed deployment of OpenStack (OpenStack-Helm).
    Explore Airship Treasuremap repository and documentation available at the following URLs:
     https://opendev.org/airship/treasuremap/
     https://airship-treasuremap.readthedocs.io/
    ---------------------------------------------------------------
    + clean
    + set +x
    To remove files generated during this script's execution, delete /root/deploy/treasuremap/../.
    This VM is disposable. Re-deployment in this same VM will lead to unpredictable results.
    Your values will vary, of course!
Note: At the time of this writing, the output also includes references to components that aren't actually included in AIAB, so if you try something not shown here and it doesn't work, it's not you.
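Because the environment is disposable, it's worth scripting the VM's creation so you can tear it down and start over cleanly. As a minimal sketch, assuming you're running on Google Compute Engine (the instance name and disk size here are arbitrary choices, not requirements), something like this creates a VM matching the specs above:

gcloud compute instances create aiab-throwaway \
    --custom-cpu=8 --custom-memory=32GB \
    --image-family=ubuntu-1604-lts --image-project=ubuntu-os-cloud \
    --boot-disk-size=100GB

If a run goes sideways, gcloud compute instances delete aiab-throwaway gets you back to a clean slate.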
Now let's go ahead and explore what we've got.

Exploring Airship in a Bottle

The purpose of Airship is to make it possible for you to reliably deploy OpenStack on Kubernetes, and that's what we have at this point.  Let's take a look at all that, starting at the bottom of the stack.
If you look at the Kubernetes cluster deployed by AIAB, you will see several namespaces:
root@bottle3:~# kubectl get namespaces
NAME          STATUS AGE
default       Active 12h
kube-public   Active 12h
kube-system   Active 12h
nfs           Active 12h
openstack     Active 11h
ucp           Active 12h
The one we're most interested in at this point is, of course, openstack:
root@bottle:~# kubectl get pods --field-selector=status.phase=Running -n openstack
NAME                                                            READY STATUS RESTARTS AGE
airship-openstack-memcached-memcached-5bd8dbff55-7mzsf          1/1 Running 0 11h
airship-openstack-rabbitmq-rabbitmq-0                           1/1 Running 0 11h
airship-openstack-rabbitmq-rabbitmq-exporter-7f4c799869-xwnhp   1/1 Running 0 11h
glance-api-674b664684-8z4fm                                     1/1 Running 0 11h
heat-api-6959d699d6-hb67p                                       1/1 Running 0 11h
heat-cfn-599b4b96cd-s6kw7                                       1/1 Running 0 11h
heat-engine-69d5c7f947-5p6t4                                    1/1 Running 0 11h
horizon-549bfbf97d-w5jdm                                        1/1 Running 0 11h
ingress-648c85cb85-fcjss                                        1/1 Running 0 11h
ingress-error-pages-78665fc8df-td6lp                            1/1 Running 0 11h
keystone-api-bb85bf7-49v5w                                      1/1 Running 0 11h
libvirt-libvirt-default-wp6vn                                   1/1 Running 0 11h
mariadb-ingress-744454b88d-7nlg9                                1/1 Running 0 11h
mariadb-ingress-error-pages-67d44dc8-bjdvp                      1/1 Running 0 11h
mariadb-server-0                                                1/1 Running 0 11h
neutron-dhcp-agent-default-ljj69                                1/1 Running 0 11h
neutron-l3-agent-default-bzdzl                                  1/1 Running 0 11h
neutron-metadata-agent-default-rpj2d                            1/1 Running 0 11h
neutron-ovs-agent-default-qxcdc                                 1/1 Running 0 11h
neutron-server-86ffb5bdd-64cr9                                  1/1 Running 0 11h
nova-api-metadata-785bb8cfd7-md99b                              1/1 Running 1 11h
nova-api-osapi-54d5479bb9-s7f2z                                 1/1 Running 0 11h
nova-compute-default-s6wcg                                      1/1 Running 0 11h
nova-conductor-5dbd64b475-v7hqb                                 1/1 Running 0 11h
nova-consoleauth-75777467ff-fjqkj                               1/1 Running 0 11h
nova-novncproxy-5fd78b47d7-kgflg                                1/1 Running 0 11h
nova-placement-api-98859484c-glhcq                              1/1 Running 0 11h
nova-scheduler-6694f657cf-wfj67                                 1/1 Running 0 11h
openvswitch-db-b7vhf                                            1/1 Running 0 11h
openvswitch-vswitchd-672qp                                      1/1 Running 0 11h
prometheus-mysql-exporter-67cbd476bb-pnmjd                      1/1 Running 0 11h
There are a LOT of containers here -- certainly more than you'd want to deploy manually!  As you can see, the OpenStack components you'd expect, such as Nova, Neutron, Keystone, and Horizon, are all here in containerized form, which makes sense: they were deployed by OpenStack-Helm.
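Incidentally, the Airship control plane components themselves run as pods too; they live in the ucp ("undercloud platform") namespace we saw earlier, and you can list them the same way:
kubectl get pods -n ucp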
Aside from being containerized, though, this is a normal OpenStack deployment.  For example, the output said that we could find our OpenStack Horizon dashboard at:
http://10.128.0.47:31309
That's the internal IP and a randomly assigned NodePort, so to reach the dashboard from outside the VM, we use the external IP (in this case, at least) with that same port:
http://35.202.211.150:31309
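One caveat: if your VM lives in a public cloud, the NodePort may well be blocked by default. As a hedged example for Google Cloud (the rule name here is arbitrary), you could open it with:
gcloud compute firewall-rules create allow-horizon-nodeport --allow=tcp:31309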
Note that if you don't have that output for some reason, you can always find the available ports by asking Kubernetes directly:
root@bottle:~# kubectl get services --all-namespaces | grep -v ClusterIP
NAMESPACE   NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                   AGE
openstack   horizon-dashboard   NodePort   10.96.138.222   <none>        80:31309/TCP                                              11h
ucp         drydock-api         NodePort   10.96.17.172    <none>        9000:30000/TCP                                            12h
ucp         maas-region         NodePort   10.96.97.191    <none>        80:30001/TCP,31800:31800/TCP,53:30839/UDP,514:32004/TCP   12h
As noted in the output, we can log into Horizon with 
Credentials:
  Domain: default
  Username: admin
  Password: password123
From there, we can see that we not only have a functional OpenStack cluster, but that it's been partially populated.
[Screenshot: Horizon dashboard, Projects tab, showing the partially populated cluster]
There's even a sample VM:
[Screenshot: Horizon dashboard, Instances tab, showing a running sample VM]
We can also use the OpenStack CLI directly, without worrying about the individual pods.  To do that, grab the credentials by choosing the OpenStack RC v3 file from the admin pulldown menu:
[Screenshot: the admin pulldown menu in the upper right of Horizon, with OpenStack RC File v3 highlighted]
From there, you can either source the file on the command line or copy and paste its contents, whichever is more convenient for you.  The important thing is that you're setting the appropriate environment variables:
export OS_PROJECT_ID=1746a87ab3e8409a9be419cc1c8703d1
export OS_PROJECT_NAME="admin"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_DOMAIN_ID="default"
export OS_USERNAME="admin"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
The script assumes you'll enter the Keystone password from the command line; as with the web interface, the default is
password123
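For example, assuming you saved the downloaded file as admin-openrc.sh (the actual filename will vary), you can source it, enter the password when prompted, and confirm the variables are set:
. ./admin-openrc.sh
env | grep OS_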
Before you can run any CLI commands, though, you'll need the OpenStack client, which isn't installed by default:
apt install python-openstackclient
From there, you can execute OpenStack commands.  For example, you can see the VM we were looking at in Horizon:
root@bottle:~# openstack server list
Password: 
+--------------------------------------+-----------------------------------+--------+----------------------------------------------------------------+
| ID                                   | Name                              | Status | Networks                                                       |
+--------------------------------------+-----------------------------------+--------+----------------------------------------------------------------+
| 7af72d02-4609-40aa-8333-6cd3a044d973 | test-stack-01-server-rjfkpintf3pq | ACTIVE | test-stack-01-private_net-wh3x4n3k5zkr=10.11.11.9, 172.24.8.11 |
+--------------------------------------+-----------------------------------+--------+----------------------------------------------------------------+
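The test-stack prefix on the server's name is a clue that this VM was created by Heat, as the deployment output mentioned.  You can confirm by listing the stacks; you should see a single stack named something like test-stack-01:
root@bottle:~# openstack stack list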
You can also list the other services available:
root@bottle:~# openstack catalog list
Password: 
+-----------+----------------+-----------+
| Name      | Type           | Endpoints |
+-----------+----------------+-----------+
| heat      | orchestration  | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
| nova      | compute        | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
| neutron   | network        | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
| heat-cfn  | cloudformation | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
| keystone  | identity       | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
| glance    | image          | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
| placement | placement      | RegionOne |
|           |                | RegionOne |
|           |                | RegionOne |
|           |                |           |
+-----------+----------------+-----------+
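Everything in this catalog behaves as it would in any OpenStack cloud.  For instance, to see what images Glance has registered:
root@bottle:~# openstack image list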
So as you can see, we've got a fully-functioning OpenStack cluster running on Kubernetes.
Next time we'll look at the structure of the manifests that define Airship installations, or "sites".
