virsh-based OpenStack hints


Install Ubuntu 16.04 LTS.

Create the / partition with 1 TB.

512 MB for EFI.

50000 MB for swap.

Leave about 1.8 TB of sdb unallocated for future PV use. Don't use up the entire sdb for / with LVM; it will cause issues when resizing LV space later, and the partition table will get messed up.

 

sudo fdisk -l

lists the partitioned disks.

sudo fdisk /dev/sdb

partitions the unallocated space into a new PV partition, say sdb4 with 1.8 TB.
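If you prefer a non-interactive tool, a rough parted equivalent is sketched below; the 200GB start offset is an assumption and must match where your existing partitions actually end:

sudo parted /dev/sdb mkpart primary 200GB 100%   # assumption: prior partitions end near 200GB
sudo partprobe /dev/sdb                          # re-read the partition table without rebooting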

 

sudo apt install lvm2

installs LVM for volume management. A reboot is then needed to fix the "unreachable" issue.

 

sudo pvcreate /dev/sdb4

makes the entire /dev/sdb4 usable as a PV.

sudo vgcreate Openstack /dev/sdb4

creates the volume group (VG) Openstack on the 1.8 TB sdb4.

sudo lvcreate --size 20G -n maas Openstack

creates a 20 GB logical volume called maas on the newly created VG Openstack.
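The standard LVM listing commands can be used to verify each step:

sudo pvs   # lists physical volumes
sudo vgs   # lists volume groups
sudo lvs   # lists logical volumes, including the new maas LV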

New LVs need a filesystem before they can be mounted as a drive (mkfs.ext4 /dev/Openstack/maas), but this is not required for virsh since the guest installer will format the disk itself.

*** After extending an LV with lvextend, run resize2fs /dev/Openstack/maas to grow the filesystem into the newly extended space. ***
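A minimal sketch of that resize flow; the +10G extension size is just an example:

sudo lvextend --size +10G /dev/Openstack/maas   # grow the LV by 10 GB
sudo resize2fs /dev/Openstack/maas              # grow the ext4 filesystem to match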

Open virt-manager and create the MAAS VM.

URL of the installer repo for the MAAS VM (xenial netboot):

http://us.archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/

After the MAAS server boots up, install MAAS: sudo apt-get install maas

Set up the region controller address and DHCP for the zones. Install libvirt-bin for QEMU PXE power control, and delete the default network that virsh generates: virsh net-destroy default, then virsh net-undefine default.
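As commands, that cleanup step looks like:

sudo apt-get install libvirt-bin
virsh net-destroy default    # stop the default NAT network
virsh net-undefine default   # remove its definition
virsh net-list --all         # confirm it is gone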

 

Create a new VM in virt-manager called ctl1, using the previously created LV ctl1.

Open the MAAS GUI and add a node for ctl1. For power control it uses qemu+ssh://chz8494@192.168.100.2/system, where 192.168.100.2 is the hypervisor address; set the power ID to the VM name in virt-manager, which is ctl1.
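To check that the power-control URL works before adding the node, list the hypervisor's VMs over the same connection:

virsh -c qemu+ssh://chz8494@192.168.100.2/system list --all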

Repeat the above steps for all components: juju, ctl, neutron, and the computes.

sudo virt-install --name juju --ram 2048 --vcpus=2 --disk /dev/mapper/Openstack-juju,bus=scsi --network bridge=PXE@vlan2,model=virtio --network bridge=br3,model=virtio --network bridge=br4,model=virtio --network bridge=br5,model=virtio --network bridge=br6,model=virtio --noautoconsole --vnc --pxe

 

Install juju 2.0:

sudo apt-get install juju
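To confirm which client version got installed:

juju version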

 

Create a clouds.yaml file to be used later by add-cloud:

clouds.yaml

clouds:
  maas:
    type: maas
    auth-types: [ oauth1 ]
    regions:
      home:
        endpoint: http://192.168.100.10/MAAS/

 

Add the new MAAS cloud to Juju's usable clouds:

juju add-cloud maas clouds.yaml
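Verify the cloud was registered:

juju clouds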

 

Add credentials for the maas cloud (the MAAS API key):

juju add-credential maas

All config will now be saved under ~/.local/share/juju.
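For reference, the resulting credentials.yaml under ~/.local/share/juju looks roughly like this; the credential name admin is an assumption, and the maas-oauth value is the API key from the MAAS GUI:

credentials:
  maas:
    admin:
      auth-type: oauth1
      maas-oauth: <MAAS API key>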

Bootstrap onto the machine tagged bootstrap, on the cloud maas, naming the controller juju:

juju bootstrap --constraints tags=bootstrap juju maas

This generates a model called "controller" (don't confuse it with the real OpenStack controller), and the bootstrap machine is installed into it.

You can also create new models for new deployment environments. This is a new feature in juju 2.0 that supports multiple environments under the same Juju host.

juju add-model openstack

Use "juju switch openstack" to switch from the controller model to openstack; "juju deploy" then deploys the yaml file to your current model only. The old method of redoing the whole deployment is no longer necessary, since "juju destroy-model openstack" erases only the openstack model and leaves the controller bootstrap untouched. You can still destroy the whole Juju installation: the previous "juju destroy-environment" command is now "juju kill-controller".

Juju may have many different models and controllers. Models did not exist in juju 1.25; a controller may contain multiple models. Where "juju switch maas" previously switched between environments, it now switches between models. After the bootstrap is installed, it shows up as machine 0 under juju status.
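The related listing commands in juju 2.0:

juju controllers   # list controllers
juju models        # list models on the current controller
juju status        # machine 0 is the bootstrap node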

Optionally, juju-gui can be installed on machine 0, the juju bootstrap VM.

juju deploy juju-gui --to=0

juju gui --show-credentials

checks the admin password for juju-gui login.

juju debug-log

shows current debug messages across the deployment.

In juju 2.0, there is no more "local:" prefix in bundle.yaml. Instead, local charms are referenced by path, e.g. "charm: ./xenial/ntp"; to use charm-store resources, use "charm: cs:ntp".
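A minimal bundle.yaml fragment showing both styles; the ntp entry is just an illustration:

services:
  ntp:
    charm: ./xenial/ntp   # local charm path, juju 2.0 style
    # charm: cs:ntp       # charm-store equivalent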

The OpenStack yaml installs its components into each controller's LXD containers on xenial. Instead of reading LXC config from /etc/lxc/default.conf, LXD uses the "lxc profile" command to change port bindings.

lxc profile device set default eth0 parent juju-br0

sets the default profile to bind each LXC container's eth0 port to the host's juju-br0 bridge.
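To inspect the resulting profile:

lxc profile show default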

If the bridged host port is set to a static IP, newly created LXC containers will boot with eth0 configured as manual. If you want DHCP instead, you need to change the interface configuration inside the container.
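Inside a xenial container that usually means the classic ifupdown config; the exact file (e.g. /etc/network/interfaces.d/eth0.cfg) is an assumption:

auto eth0
iface eth0 inet dhcp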
