Posts for: #Openstack

Quantum in Too Much Detail

I originally posted this article on the RDO website.

The players

This document describes the architecture that results from a particular OpenStack configuration, specifically:

  • Quantum networking using GRE tunnels;
  • A dedicated network controller;
  • A single instance running on a compute host.

Much of the document will be relevant to other configurations, but details will vary based on your choice of layer 2 connectivity, number of running instances, and so forth.

The examples in this document were generated on a system with Quantum networking, but they will generally match what you see under Neutron as well if you replace quantum with neutron in the names (for example, the quantum-server service becomes neutron-server). The OVS flow rules under Neutron are somewhat more complex, and I will cover those in another post.

[read more]

A random collection of OpenStack Tools

I’ve been working with OpenStack a lot recently, and I’ve ended up with a small collection of utilities that make my life easier. On the odd chance that they’ll make your life easier, too, I thought I’d highlight them here.

Crux

Crux is a tool for provisioning tenants, users, and roles in keystone. Instead of a sequence of keystone commands, you can provision new tenants, users, and roles with a single command.
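For comparison, the manual sequence that crux collapses into a single command might look roughly like this (a sketch only: the flags shown are from the legacy keystone CLI and may vary by client version, and the tenant, user, and role names are illustrative):

```sh
# Create the tenant and role, capturing the ids from the tabular output.
TENANT=$(keystone tenant-create --name demo | awk '/ id / {print $4}')
ROLE=$(keystone role-create --name member | awk '/ id / {print $4}')

# Create the user inside the tenant, then grant the role.
keystone user-create --name alice --pass secret --tenant-id "$TENANT"
USER=$(keystone user-get alice | awk '/ id / {print $4}')
keystone user-role-add --user-id "$USER" --role-id "$ROLE" --tenant-id "$TENANT"
```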

[read more]

Why does the Neutron documentation recommend three interfaces?

The documentation for configuring Neutron recommends that a network controller has three physical interfaces:

Before you start, set up a machine to be a dedicated network node. Dedicated network nodes should have the following NICs: the management NIC (called MGMT_INTERFACE), the data NIC (called DATA_INTERFACE), and the external NIC (called EXTERNAL_INTERFACE).

People occasionally ask, “Why three interfaces? What if I only have two?” I wanted to provide an extended answer that explains what each interface is for and what trade-offs are involved in using fewer of them.
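As a concrete sketch of the three roles (interface names and addresses are only examples, shown in Debian /etc/network/interfaces style):

```
# eth0: management -- carries API and AMQP traffic between OpenStack services
auto eth0
iface eth0 inet static
    address 10.0.0.10
    netmask 255.255.255.0

# eth1: data -- endpoint for the GRE tunnels carrying tenant traffic
auto eth1
iface eth1 inet static
    address 10.1.0.10
    netmask 255.255.255.0

# eth2: external -- attached to the external bridge; no IP address of its own
auto eth2
iface eth2 inet manual
    up ip link set dev eth2 up
```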

[read more]

Automatic configuration of Windows instances in OpenStack, part 1

This is the first of two articles in which I discuss my work in getting some Windows instances up and running in our OpenStack environment. This article is primarily about problems I encountered along the way.

Motivations

Like many organizations, we have a mix of Linux and Windows in our environment. Some folks in my group felt that it would be nice to let our Windows admins take advantage of OpenStack for prototyping and sandboxing in the same ways our Linux admins can use it.

[read more]

Chasing OpenStack idle connection timeouts

The original problem

I’ve recently spent some time working on an OpenStack deployment. I ran into a problem in which the compute service would frequently stop communicating with the AMQP message broker (qpidd).

In order to gather some data on the problem, I ran the following simple test:

  • Wait n minutes
  • Run nova boot ... to create an instance
  • Wait a minute and see if the new instance becomes ACTIVE
  • If it works, delete the instance, set n = 2n and repeat

This demonstrated that communication was failing after about an hour, which correlates rather nicely with the idle connection timeout on the firewall.

[read more]