Openstack Kolla

List of Hosts:

10.240.169.3: MAAS, docker, openstack-kolla and kolla are all installed here

Local Docker Registry

To install a multi-node Docker OpenStack we need a local registry service; Nexus3 is an easy-to-use registry server with a web GUI.
Install it via docker:
create ./nexus3/data/docker-compose.yml

=====================================
nexus:
  image: sonatype/nexus3:latest
  ports:
    - "8081:8081"
    - "5000:5000"
  volumes:
    - ./data:/nexus-data
=======================================

Then run "docker-compose up -d" to create the container (you may need to "pip install docker-compose" first).

Launch a web browser to port 8081 on 10.240.169.3 (the docker host), log in with the default account admin/admin123, then create a new repository of type docker (hosted), set its HTTP connector to port 5000 and enable the Docker V1 API.

Verify that the docker hosts can log in to this private registry: docker login -u admin -p admin123 10.240.169.3:5000
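
If the registry is served over plain HTTP, the login above will fail until the Docker daemon on each host trusts it as an insecure registry. A minimal sketch (assuming /etc/docker/daemon.json does not already contain other options):

cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["10.240.169.3:5000"]
}
EOF
systemctl restart docker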

To build images and push them into the local registry

on 10.240.169.3

pip install kolla
kolla-build --base ubuntu --type source --registry 10.240.169.3:5000 --push

This builds all the Kolla images from upstream sources and pushes them into the local registry.
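
To double-check that the images landed in the local registry, the Docker v2 catalog endpoint can be queried (a sketch; assumes Nexus exposes the standard catalog API on the connector port):

curl -u admin:admin123 http://10.240.169.3:5000/v2/_catalog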

Prepare hosts for ceph osd

Partition and label the disks on each host:
parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdc -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdd -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sde -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdf -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdg -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
parted /dev/sdh -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_1 1 -1
parted /dev/sdi -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_2 1 -1
parted /dev/sdj -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_1_J 1 -1
parted /dev/sdk -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_2_J 1 -1
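
The first six disks share the same label, so that part can be scripted (a sketch, assuming the same /dev/sdb through /dev/sdg device names):

for d in b c d e f g; do
  parted /dev/sd${d} -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
done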

Each host needs the following installed:

apt install python-pip -y
pip install -U docker-py

apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 lsof lvm2 ntp ntpdate openssh-server sudo tcpdump python-dev vlan -y

There is no need to install docker.io manually, as kolla-ansible has a bootstrap command for this: kolla-ansible -i multinode bootstrap-servers

If a deployment fails, copy /usr/local/share/kolla-ansible/tools/cleanup-containers to each host and run it to clean up the containers, then redo the deploy.
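
For reference, a typical end-to-end kolla-ansible run from the deploy host looks roughly like this (a sketch; the multinode inventory and /etc/kolla/globals.yml are assumed to be prepared already):

kolla-genpwd                                  # generate /etc/kolla/passwords.yml
kolla-ansible -i multinode bootstrap-servers  # installs docker etc. on all hosts
kolla-ansible -i multinode prechecks
kolla-ansible -i multinode deploy
kolla-ansible post-deploy                     # writes /etc/kolla/admin-openrc.sh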

"kolla-ansible -i multinode destroy" removes all deployed containers on all nodes, but the ceph partitions are kept. To erase the partitioned disks, run the following on each host:

umount /dev/sdb1
umount /dev/sdc1
umount /dev/sdd1
umount /dev/sde1
umount /dev/sdf1
umount /dev/sdg1
umount /dev/sdh1
umount /dev/sdi1
dd if=/dev/zero of=/dev/sdb bs=512 count=1
dd if=/dev/zero of=/dev/sdc bs=512 count=1
dd if=/dev/zero of=/dev/sdd bs=512 count=1
dd if=/dev/zero of=/dev/sde bs=512 count=1
dd if=/dev/zero of=/dev/sdf bs=512 count=1
dd if=/dev/zero of=/dev/sdg bs=512 count=1
dd if=/dev/zero of=/dev/sdh bs=512 count=1
dd if=/dev/zero of=/dev/sdi bs=512 count=1
dd if=/dev/zero of=/dev/sdj bs=512 count=1
dd if=/dev/zero of=/dev/sdk bs=512 count=1
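
The same teardown can be written as a loop (a sketch, assuming the same device letters as above):

for d in b c d e f g h i j k; do
  umount /dev/sd${d}1 2>/dev/null
  dd if=/dev/zero of=/dev/sd${d} bs=512 count=1
done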

Swift

Here's a guide to calculating what numbers should be used for "swift-ring-builder create". The first step is to determine the number of partitions that will be in the ring. We recommend a minimum of 100 partitions per drive to ensure even distribution across the drives. A good starting point is to figure out the maximum number of drives the cluster will contain, multiply by 100, and round up to the nearest power of two.

For example, imagine we are building a cluster that will have no more than 5,000 drives. That would mean that we would have a total number of 500,000 partitions, which is pretty close to 2^19, rounded up.

It is also a good idea to keep the number of partitions small (relatively). The more partitions there are, the more work that has to be done by the replicators and other backend jobs and the more memory the rings consume in process. The goal is to find a good balance between small rings and maximum cluster size.

The next step is to determine the number of replicas to store of the data. Currently it is recommended to use 3 (as this is the only value that has been tested). The higher the number, the more storage that is used but the less likely you are to lose data.

It is also important to determine how many zones the cluster should have. It is recommended to start with a minimum of 5 zones. You can start with fewer, but our testing has shown that having at least five zones is optimal when failures occur. We also recommend trying to configure the zones at as high a level as possible to create as much isolation as possible. Some example things to take into consideration can include physical location, power availability, and network connectivity. For example, in a small cluster you might decide to split the zones up by cabinet, with each cabinet having its own power and network connectivity. The zone concept is very abstract, so feel free to use it in whatever way best isolates your data from failure. Each zone exists in a region.

A region is also an abstract concept that may be used to distinguish between geographically separated areas, or areas within the same datacenter. Regions and zones are referenced by a positive integer.
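
As a worked example for the small 8-drive lab below: 8 drives x 100 partitions per drive = 800, rounded up to the next power of two gives 1024 = 2^10, so a part power of 10 would satisfy the guideline. The script below uses "create 8 3 1", i.e. 2^8 = 256 partitions, 3 replicas and a 1-hour minimum between moves of a given partition, which is fine for a small test cluster. The general form is:

swift-ring-builder <builder-file> create <part_power> <replicas> <min_part_hours>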

First, run the following script on any host to create the Swift ring files for kolla to use.

########################################################################################

export KOLLA_INTERNAL_ADDRESS=10.240.101.10 # not really needed for multinode
export KOLLA_SWIFT_BASE_IMAGE="10.240.100.4:5000/kolla/ubuntu-source-swift-base:4.0.1"

mkdir -p /etc/kolla/config/swift

# Object ring
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/object.builder create 8 3 1

for i in {1..8}; do
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/object.builder add r1z1-10.240.103.1${i}:6000/d0 1;
done

# Account ring
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/account.builder create 8 3 1

for i in {1..8}; do
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/account.builder add r1z1-10.240.103.1${i}:6001/d0 1;
done

# Container ring
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/container.builder create 8 3 1

for i in {1..8}; do
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/container.builder add r1z1-10.240.103.1${i}:6002/d0 1;
done

for ring in object account container; do
docker run \
--rm \
-v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
$KOLLA_SWIFT_BASE_IMAGE \
swift-ring-builder \
/etc/kolla/config/swift/${ring}.builder rebalance;
done

#######################################################

Then copy everything generated by this script into /etc/kolla/config/swift on the kolla deployer host.

Neutron

By default, kolla uses a flat network and only enables the VLAN provider network when ironic is enabled, so you'll see this in ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = qos,port_security,dns

[ml2_type_vlan]
network_vlan_ranges =

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group = 239.1.1.1

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[linux_bridge]
physical_interface_mappings = physnet1:br_vlan

[vxlan]
l2_population = true
local_ip = 10.240.102.14

Moving physnet1 from flat_networks to network_vlan_ranges enables the VLAN provider feature, as sketched below.
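
A sketch of the relevant ml2_conf.ini sections after that change (the VLAN range 100:200 is only an example and can be omitted):

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200

[ml2_type_flat]
flat_networks =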

Ironic

Ironic is the bare metal service of OpenStack. A few pieces need to be installed before it is deployed by kolla:

  1. apt-get install qemu-utils
  2. sudo pip install -U "diskimage-builder>=1.1.2"
  3. disk-image-create ironic-agent ubuntu -o ironic-agent (this cannot be done inside an lxc container)
  4. copy the generated ironic-agent.kernel and ironic-agent.initramfs to /etc/kolla/config/ironic on the kolla-ansible host

Enable OVS, then run kolla-ansible deploy.

When deploying ironic, iscsid is required, and it may fail with "iscsid container: mkdir /sys/kernel/config: operation not permitted"; the fix is to run "modprobe configfs" on each host.
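
To make that persist across reboots, the module can be loaded at boot (a sketch using the standard Ubuntu mechanism):

echo configfs >> /etc/modules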

iscsid may also fail to start; removing open-iscsi on all hosts fixes this.

Magnum

pip install python-magnumclient, version 2.6.0.

The user sourced to run Magnum commands needs to have a role in Heat (see the example below).
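
A sketch of granting such a role with the openstack CLI (the role name heat_stack_owner and the project/user names are assumptions; adjust to your deployment):

openstack role add --project demo --user demo heat_stack_owner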

Make sure magnum.conf has the following values, otherwise barbican will complain about not being able to create certs.

[certificates]
cert_manager_type = barbican
cert_manager_type = x509keypair

Current COEs and their supported distros; the pairing has to match this table, otherwise it will complain that the VM type is not supported.

COE         Distro
Kubernetes  Fedora Atomic
Kubernetes  CoreOS
Swarm       Fedora Atomic
Mesos       Ubuntu

Example to create docker swarm cluster:

wget https://fedorapeople.org/groups/magnum/fedora-atomic-newton.qcow2
openstack image create \
--disk-format=qcow2 \
--container-format=bare \
--file=fedora-atomic-newton.qcow2 \
--property os_distro='fedora-atomic' \
fedora-atomic-newton
magnum cluster-template-create swarm-cluster-template \
--image fedora-atomic-newton \
--keypair mykey \
--external-network public \
--dns-nameserver 8.8.8.8 \
--master-flavor m1.small \
--flavor m1.small \
--coe swarm
magnum cluster-create swarm-cluster \
--cluster-template swarm-cluster-template \
--master-count 1 \
--node-count 1

Collectd Influxdb and Grafana

This combination makes a really nice tool for monitoring OpenStack activity.

A few things need to be changed from the default kolla deployment config:

collectd:

FQDNLookup false
LoadPlugin network
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
Server "10.240.101.11" "25826"

influxdb:

[[collectd]]
enabled = true
bind-address = "10.240.101.11:25826"
database = "collectd"
typesdb = "/usr/share/collectd/types.db"

Be careful: influxdb needs the double brackets [[collectd]] for the collectd input. Also, types.db will not be created automatically! Even though you can see UDP traffic from collectd arriving at influxdb, nothing is stored until you manually copy types.db over from the collectd host/package folder. This is critical!!!

Grafana is then much simpler: just add influxdb as a datasource and build graphs in the dashboard. If you want to show interface traffic in bit/s, use derivative() on if_octets.

Rally with Tempest testing benchmark

Create the tempest verifier; this will automatically download tempest from its github repo.

rally verify create-verifier --type tempest --name tempest-verifier

Configure this tempest verifier for the current deployment with the overrides in options.conf:

rally verify configure-verifier --extend extra_options.conf

cat options.conf

[compute]
image_ref = acc51ecc-ee27-4b3a-ae2a-f0b1c1196918
image_ref_alt = acc51ecc-ee27-4b3a-ae2a-f0b1c1196918
flavor_ref = 7a8394f1-056b-41b3-b422-b5195d5a379f
flavor_ref_alt = 7a8394f1-056b-41b3-b422-b5195d5a379f
fixed_network_name = External

Then just run the tests: "rally verify start --pattern set=compute" runs the checks for a specific part of OpenStack. The results can be exported as sketched below.
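
Once a run finishes, the results can be listed and exported (a sketch; rally also supports json and junit report types):

rally verify list
rally verify report --type html --to ./tempest-report.html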

Mount cdrom along with disk drive

Sometimes we'd like to have a cdrom mounted along with a bootable disk to install an OS instead of booting from an image. In that case, we need to tell OpenStack that volume A will be a bootable cdrom and volume B will be secondary, kept as the vdb disk. After the OS is installed, we can kill the whole VM and recreate it with volume B only, assigned as the bootable vda; it will then work as a regular VM.

Here's how to create the VM with a cdrom:

nova boot --flavor m1.small --nic net-id=e3fa6e8f-5ae9-4da6-84ba-e52d85a272bb --block-device id=e513a39b-36a1-49df-a528-0ccdb0f8515b,source=volume,dest=volume,bus=ide,device=/dev/vdb,type=cdrom,bootindex=1 --block-device source=volume,id=87ae535a-984d-4ceb-87e9-e48fa109c81a,dest=volume,device=/dev/vda,bootindex=0 --key-name fuel fuel

Create PXE boot image

OpenStack doesn't support instance PXE boot. To make it work, we need to create our own PXE-bootable image.

Here's how (this only works on a non-container host):

https://kimizhang.wordpress.com/2013/08/26/create-pxe-boot-image-for-openstack/

1. Create a small empty disk file and make a DOS filesystem on it.
dd if=/dev/zero of=pxeboot.img bs=1M count=4
fdisk pxeboot.img (create a partition and flag it bootable)
mkdosfs pxeboot.img
2. Make it bootable with syslinux.
losetup /dev/loop0 pxeboot.img
mount /dev/loop0 /mnt
syslinux --install /dev/loop0
3. Install the iPXE kernel and create a syslinux.cfg that loads it at boot.
wget http://boot.ipxe.org/ipxe.iso
mount -o loop ipxe.iso /media
cp /media/ipxe.krn /mnt
cat > /mnt/syslinux.cfg <<EOF
DEFAULT ipxe
LABEL ipxe
  KERNEL ipxe.krn
EOF
umount /media/
umount /mnt
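
The resulting image can then be uploaded to glance like any other image (a sketch; the image name is arbitrary):

openstack image create --disk-format raw --container-format bare --file pxeboot.img pxeboot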

Then we need to figure out how to bypass neutron's anti-spoofing. There are two ways to do it: either create a flat network so it will not use neutron's DHCP, or use DHCP options to redirect dhcp/pxe traffic. To avoid a future mess, I'd use a vxlan network and keep neutron's default config.
1. Create a new VM using pxeboot.img on the PXE subnet with a fixed IP, 192.168.2.4.
2. Use "neutron port-list" to find the port, and "neutron port-update" to change the DHCP options so dhcp/pxe traffic is redirected from neutron's dhcp to the PXE server's dhcp:
neutron port-update 9dd25815-753b-4138-99ed-e2ba30048c3e --extra-dhcp-opt opt_value=pxelinux.0,opt_name=bootfile-name

neutron port-update f9a416cd-02b0-4397-b0cc-cac6fc2556e9 --extra-dhcp-opt opt_value=192.168.2.11,opt_name=tftp-server

neutron port-update f9a416cd-02b0-4397-b0cc-cac6fc2556e9 --extra-dhcp-opt opt_value=192.168.2.11,opt_name=server-ip-address
3. You may also need to add a static lease on the PXE server's DHCP so the new VM keeps the fixed IP 192.168.2.4. Because we didn't turn off anti-spoofing, if the VM gets a different IP than the one neutron's DHCP intends to give it, all traffic from the VM will be dropped.
To add a static mapping for dnsmasq, modify /etc/dnsmasq.d/default.conf: dhcp-host=AB:CD:EF:11:22:33,192.168.1.10,24h

To allow a whole subnet on a port and bypass anti-spoofing:
neutron port-update b7d1d8bd-6ca7-4c35-9855-ba0dc2573fdc --allowed-address-pairs list=true type=dict ip_address=10.101.11.0/24

Enable Root Access

Kolla images disable root login by default. To enable it, we need to manually add a sudoers entry inside the container.

Add the following entry to the config JSON under /etc/kolla/config/ceph/ (taking ceph as an example; the sudoers file must not have a "." in its name, otherwise sudo won't read it):
{
  "source": "{{ container_config_directory }}/cephsudo",
  "dest": "/etc/sudoers.d/cephsudo",
  "owner": "root",
  "perm": "0600"
}
and then create cephsudo under the same folder, containing:
ceph ALL=(ALL) NOPASSWD: ALL

Enable usb hot plug for kolla nova kvm

Normally we enable USB hot plug with "virsh attach-device", and if anything needs to change, like the controller settings for USB 2.0, we use "virsh edit". But in OpenStack, nova monitors and controls the whole lifecycle of the running KVM guest, which means it removes anything added after the guest was created by it, so we need a way to bypass its detection.

1. Install lsusb and list the USB devices seen on the nova compute host.

host# lsusb
Bus 002 Device 004: ID 0781:5530 SanDisk Corp. Cruzer
Bus 002 Device 005: ID 0781:5530 SanDisk Corp. Cruzer

2. Edit usb.xml to describe the hot-plug USB device, and run "virsh dumpxml instance-000000xx" to dump and save the existing instance definition.

<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0781'/>
    <product id='0x5530'/>
    <address bus='2' device='4'/>
  </source>
  <address type='usb' bus='1' port='2'/>
</hostdev>
 

address bus/device = the numbers from lsusb
address type='usb' bus = the USB controller index

3. Start the instance in OpenStack and then run "virsh destroy instance-000000xx" from the nova compute host.

4. Run "virsh undefine instance-000000xx" to remove it from the libvirt database, edit the dumped XML to add a USB 2.0 controller, then "virsh define" the edited XML file to recreate it.

<controller type='usb' index='1' model='ehci'>
</controller>

5. Finally, "virsh start instance-000000xx" to boot it. It should now have the new USB 2.0 controller attached, and nova won't remove it.

Then run "virsh attach-device instance-000000xx usb.xml" to hot-plug the USB device.


Opencart memo

To fix the country loading error under the Fastor theme 1.4, edit register.tpl under catalog/view/theme/fastor/template/account and change the last part from "account/account/country&country_id=" to "localisation/country&country_id=".

To enable the OpenCart mail service, install ssmtp on the server first, otherwise port 465 and the SMTP module won't work.

import .pem cert into windows

certutil -addstore -f "Root" <path-to-pem>

generate haproxy cert/key file.

openssl req -x509 -newkey rsa:4096 -keyout cert.crt -out cert.crt -days 365 -nodes
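
haproxy expects the private key and certificate concatenated into a single PEM file; if they are generated as separate files instead, a sketch of combining them (paths are assumptions):

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
cat cert.pem key.pem > /etc/haproxy/cert.pem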

haproxy redirect http to https

docker run -d -e FORCE_SSL=yes -e CERT_FOLDER='/cert/' --name webapp dockercloud/hello-world
docker run -d --link webapp:webapp -p 443:443 dockercloud/haproxy

Openstack Ansible

Same External/Internal IP

When deploying via ansible, if the external and internal VIPs use the same IP, the SSL feature needs to be disabled, otherwise pip installs will fail:

____________________________________________________________

————————————————————

FAILED - RETRYING: TASK: pip_install : Install pip packages (fall back mode) (2 retries left).
FAILED - RETRYING: TASK: pip_install : Install pip packages (fall back mode) (1 retries left).
fatal: [infra01_galera_container-ff9ac443]: FAILED! => {“attempts”: 5, “changed”: false, “cmd”: “/usr/local/bin/pip2 install -U –isolated –constraint http://10.240.169.102:8181/os-releases/master/requirements_absolute_requirements.txt “, “failed”: true, “msg”: “\n:stderr: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by ‘ProtocolError(‘Connection aborted.’, BadStatusLine(\””\”,))’: /os-releases/master/requirements_absolute_requirements.txt\nRetrying (Retry(total=3, connect=None, read=None, redirect=None)) after connection broken by ‘ProtocolError(‘Connection aborted.’, BadStatusLine(\””\”,))’: /os-releases/master/requirements_absolute_requirements.txt\nRetrying (Retry(total=2, ………. Max retries exceeded with url: /os-releases/master/requirements_absolute_requirements.txt (Caused by ProtocolError(‘Connection aborted.’, BadStatusLine(\””\”,)))\n”}

Resolution: add the following to /etc/openstack_deploy/user_variables.yml:

openstack_service_publicuri_proto: http
openstack_external_ssl: false
haproxy_ssl: false

Openstack-ansible playbook

To make openstack-ansible build OpenStack, generate random passwords for the components first, then run the playbooks (the standard sequence is sketched below):
cd /opt/openstack-ansible/scripts
python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
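
The playbooks are then run in the standard openstack-ansible order (a sketch):

cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml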

After openstack-ansible is fully deployed, lxc-attach to the utility container and source the /root/openrc file for the OpenStack environment variables.

After a reboot, or when all galera servers are down and can't be started (showing as a mysql service failure), run "openstack-ansible galera-install.yml --tags galera-bootstrap" to recover the cluster.

Openstack ceph-ansible

To make OpenStack use cinder with ceph, we need to manually install ceph with ceph-ansible from git first.

git clone https://github.com/ceph/ceph-ansible/
cd ceph-ansible/
cp site.yml.sample site.yml
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/mons.yml.sample group_vars/mons.yml
cp group_vars/osds.yml.sample group_vars/osds.yml

edit hosts file
[root@ansible ~]# vi inventory_hosts
[mons]
10.240.173.102
10.240.173.103
10.240.173.104

[osds]
10.240.173.102
10.240.173.103
10.240.173.104
10.240.173.105
10.240.173.106

[rgws]
10.240.173.102

verify they are reachable via ssh
ansible -m ping -i hosts all

Edit site.yml and remove anything not needed.
Edit group_vars/all.yml:
ceph_origin: upstream
ceph_stable: true
ceph_stable_release: jewel
monitor_interface: br-storage
journal_size: 1024
public_network: 10.240.173.0/24

Edit group_vars/osds.yml to indicate which disks are used for OSDs and journals.
Then run the playbook: ansible-playbook site.yml -i hosts

Make sure ceph health is OK; if it's stuck in an inactive state, check the MTU on the ethernet ports.

If "ceph -s" complains about "too few PGs per OSD", change the pool's PG number; for 10-50 OSDs, use 1024 PGs (the rule of thumb is sketched below):
# ceph osd pool set rbd pg_num 1024
# ceph osd pool set rbd pgp_num 1024
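
The rule of thumb behind these numbers: total PGs ~= (number of OSDs x 100) / replica count, rounded up to the next power of two. For example, with 30 OSDs and 3 replicas, 30 x 100 / 3 = 1000, which rounds up to 1024.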

After installation, generate the keyrings on all ceph-mon nodes, otherwise openstack-ansible will complain about missing keyrings:
ceph auth get-or-create client.cinder
ceph auth get-or-create client.glance
ceph auth get-or-create client.cinder-backup

On a mon node you can check the keyrings with "ceph auth list".

Also add permissions for each client, otherwise cinder-volume will fail (the other clients are sketched below):
ceph auth caps client.cinder mon 'allow *' osd 'allow *'
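
The same caps can be applied to the other clients created above (a sketch mirroring the cinder command; tighten the caps later if desired):

ceph auth caps client.glance mon 'allow *' osd 'allow *'
ceph auth caps client.cinder-backup mon 'allow *' osd 'allow *'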

Another option is to uncomment the corresponding settings inside ceph-ansible/group_vars/mons.yml.

Access rbd image from ceph mon

To directly access a ceph disk, which is also the actual mounted vm/image/volume disk, use "rbd map" to map the image into the mon's system and mount it on a folder. Use "rbd -p poolname ls" to list the ceph images inside a pool, and "rbd -p poolname info imagename" to see details.

On Ubuntu 16.04 with ceph Jewel, some new image features are enabled by default but not supported by the kernel client, so they need to be disabled per image before mapping: "rbd feature disable imagename deep-flatten fast-diff object-map exclusive-lock", after which "rbd map pool/image" will work. A /dev/rbdX device will be created, and if it's the right image it will have sub-partitions that can be mounted (see the sketch below).
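
A sketch of the full sequence on a mon node (the pool and image names here are placeholders):

rbd -p volumes ls
rbd -p volumes info volume-0001
rbd feature disable volumes/volume-0001 deep-flatten fast-diff object-map exclusive-lock
rbd map volumes/volume-0001
mount /dev/rbd0p1 /mnt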

Nova access cinder-volume

To make nova able to attach or mount a cinder volume, an rbd_secret_uuid needs to be added to both cinder.conf and nova.conf, otherwise it will complain with a "notype" error. The relevant settings are sketched below.
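
A sketch of the settings involved (the UUID is only an example value; it must match the libvirt secret defined for the cinder ceph key, and the backend section name depends on your deployment):

# cinder.conf, in the ceph/rbd backend section
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

# nova.conf, [libvirt] section
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337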

Horizon Issue

To fix the URL option missing under the image tab on the OpenStack dashboard, add this line to /etc/horizon/local_settings.py in the horizon container on all 3 controllers:
IMAGES_ALLOW_LOCATION = True
then restart apache2,
or add it to /etc/ansible/roles/os_horizon/templates/horizon_local_settings.py.j2; by default, newer versions of OpenStack omit it for security reasons.

To make the original location visible under "glance image-list", change /etc/glance/glance-api.conf inside each glance container:
#display URL address
show_image_direct_url = True
#display available multiple locations
show_multiple_locations = True
then restart glance-api service

"Cannot read property 'data' of undefined" while creating new images can have multiple causes. Check the available stores defined for your input field: if "File" is enabled on horizon, check HORIZON_IMAGES_UPLOAD_MODE in horizon's local_settings.py; if URL is enabled, check your glance store settings. If a URL link is used, every instance will download the image from that URL on first boot.

Glance Issue

Add an extra image location and path to an existing image.
To authenticate the API calls we need a token:

$ keystone token-get
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2015-05-06T14:22:16Z             |
| id        | 2602709084d64417b7f3480fccfa1785 |
| tenant_id | 486ab7509bfd46c386d4a8353b80a08d |
| user_id   | 0b78d6793b1c4305ad6e76fa232b5a74 |
+-----------+----------------------------------+

and then reuse this token to make the API call:

$ curl -i -X PATCH -H 'Content-Type: application/openstack-images-v2.1-json-patch' \
-H "X-Auth-Token: 2602709084d64417b7f3480fccfa1785" \
http://192.168.0.60:9292/v2/images/90674766-dbaa-4a6e-a344-2a4116af9fab \
-d '[{"op": "add", "path": "/locations/-", "value": {"url": "rbd://5de961fb-2368-4f77-8725-7b002732e214/images/7bb0484c-cb6b-4700-88bb-0a18b8f3a8f5/snap", "metadata": {}}}]'

HTTP/1.1 200 OK
Content-Length: 955
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-req-29faba33-657e-4959-b508-fcffe8081d8f
Date: Wed, 06 May 2015 14:21:21 GMT

{"status": "active", "virtual_size": null, "name": "CirrOS-0.3.3", "tags": [], "container_format": "bare", "created_at": "2015-05-06T09:29:40Z", "size": 13200896, "disk_format": "qcow2", "updated_at": "2015-05-06T14:21:20Z", "visibility": "private", "locations": [{"url": "rbd://5de961fb-2368-4f77-8725-7b002732e214/images/90674766-dbaa-4a6e-a344-2a4116af9fab/snap", "metadata": {}}, {"url": "rbd://5de961fb-2368-4f77-8725-7b002732e214/images/7bb0484c-cb6b-4700-88bb-0a18b8f3a8f5/snap", "metadata": {}}], "self": "/v2/images/90674766-dbaa-4a6e-a344-2a4116af9fab", "min_disk": 0, "protected": false, "id": "90674766-dbaa-4a6e-a344-2a4116af9fab", "file": "/v2/images/90674766-dbaa-4a6e-a344-2a4116af9fab/file", "checksum": "133eae9fb1c98f45894a4e60d8736619", "owner": "486ab7509bfd46c386d4a8353b80a08d", "direct_url": "rbd://5de961fb-2368-4f77-8725-7b002732e214/images/90674766-dbaa-4a6e-a344

Ceilometer issue

Ceilometer only works with mongodb, and openstack-ansible doesn't have a mongodb role, so we need to install it manually.

apt-get install mongodb-server mongodb-clients python-pymongo

Add smallfiles = true to /etc/mongodb.conf, restart the service, and add the ceilometer user:

mongo --host 127.0.0.1 --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'

Then add the following to user_variables.yml:

ceilometer_db_type: mongodb
ceilometer_db_ip: localhost
ceilometer_db_port: 27017

This way, each ceilometer uses its own local mongo database.

Nova boot process illustration

(Figures: nova-boot1.PNG and nova-boot2.PNG, diagrams of the nova boot call flow.)

Rabbitmq

To get a general view of what's going on with AMQP traffic, we can use the rabbitmq management GUI.

enable the mgmt GUI plugin

rabbitmq-plugins enable rabbitmq_management
rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
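
The management UI then listens on port 15672; a quick way to check it is up (a sketch, run on the rabbitmq host with the user created above):

curl -u test:test http://localhost:15672/api/overview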

SRIOV config

1. # change /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="nomdmonddf nomdmonisw intel_iommu=on"
update-grub

# add VFs
echo '7' > /sys/class/net/eth6/device/sriov_numvfs

2. # change /etc/nova/nova.conf on the compute nodes to enable VF passthrough
[DEFAULT]
pci_passthrough_whitelist = { "devname": "eth6", "physical_network": "sriov" }
service nova-compute restart

3.#change neutron server nodes to support sriov
/etc/neutron/plugins/ml2/ml2_conf.ini
mechanism_drivers = sriovnicswitch

#(optional)add /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
supported_pci_vendor_devs = 8086:10ed
service neutron-server restart

4.#add on each nova-scheduler node
[DEFAULT]
scheduler_default_filters = PciPassthroughFilter

service nova-scheduler restart

5.#each compute nodes
apt-get install neutron-plugin-sriov-agent
/etc/neutron/plugins/ml2/sriov_agent.ini
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
physical_device_mappings = sriov:eth6
exclude_devices =

# apply the new ini to the agent
neutron-sriov-nic-agent \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/sriov_agent.ini

# change neutron.conf to allow TLSv1.2, as the default TLSv1 is no longer supported by rabbitmq
kombu_ssl_version = SSLv23

service neutron-sriov-agent restart
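
With the agent running, a VM can be booted on a VF by creating a port with vnic_type direct (a sketch; the network, image and flavor names are placeholders):

neutron port-create sriov-net --name sriov-port1 --binding:vnic_type direct
nova boot --flavor m1.small --image ubuntu16.04 --nic port-id=<id-of-sriov-port1> sriov-vm1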

OVS traffic capture

OVS traffic flow: VM -> tap + "qbr" (linuxbridge) + qvb -> qvo + "br-int" + patch-br-ex -> patch-br-int + "br-ex" + port# -> external network

If DVR is not used, all traffic goes from the compute nodes to the neutron nodes and leaves through the neutron nodes' external port.

If DVR is used, every host has a qrouter (same MAC+IP). When the VM has no floating IP, traffic can go out directly from the compute node without passing through the neutron nodes. If there is a floating IP, it resides on the neutron node, so traffic goes from the VM to neutron first, gets NATed and is sent out; traffic initiated from outside first hits neutron's floating IP, then is filtered and NATed to the VM.

A regular tcpdump can be run on the host's ports, but that is only usable down to qvo. For patch-br-ex -> patch-br-int you need to do the following:

$ ip link add name snooper0 type dummy
$ ip link set dev snooper0 up

$ ovs-vsctl add-port br-int snooper0

$ ovs-vsctl -- set Bridge br-int mirrors=@m -- --id=@snooper0 \
  get Port snooper0 -- --id=@patch-tun get Port patch-tun \
  -- --id=@m create Mirror name=mymirror select-dst-port=@patch-tun \
  select-src-port=@patch-tun output-port=@snooper0 select_all=1

You can then run the tcpdump:

$ tcpdump -i snooper0

To clear it:

$ ovs-vsctl clear Bridge br-int mirrors

$ ovs-vsctl del-port br-int snooper0

$ ip link delete dev snooper0