A Step-by-Step Guide to Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System
====================================================================================================================

This guide is also available in the following formats:

-  PDF
-  EPUB
-  HTML

Cloud computing has proven to be a successful distributed computing
model, as demonstrated by its widespread industrial adoption. Apart from
public Clouds, such as Amazon EC2, private and hybrid Cloud deployments
are important for organizations of all scales. The availability of free
open source Cloud platforms is essential to further drive the
proliferation of private and hybrid Cloud computing environments.
OpenStack is free open source Cloud computing software originally
released by Rackspace and NASA, which strives to fill the gap left by
the lack of a comprehensive open source Cloud platform through a fast
pace of development and innovation, supported by both an active
community and large companies. In this work, we go through and discuss
the steps required to go from bare hardware to a fully operational
multi-node OpenStack installation. Every step discussed in this paper is
implemented as a separate shell script, making it easy to understand the
intricate details of the installation process. The full set of
installation scripts is released under the Apache 2.0 License and is
publicly available online.

Anton Beloglazov; Sareh Fotuhi Piraghaj; Mohammed Alrokayan; Rajkumar Buyya

14th of August 2012

Introduction
============

The Cloud computing model leverages virtualization to deliver computing
resources to users on-demand on a pay-per-use basis [1], [2]. It
provides the properties of self-service and elasticity enabling users to
dynamically and flexibly adjust their resource consumption according to
the current workload. These properties of the Cloud computing model
allow one to avoid high upfront investments in a computing
infrastructure, thus reducing the time to market and facilitating a
higher pace of innovation.

Cloud computing resources are delivered to users through three major
service models:

-  *Infrastructure as a Service (IaaS)*: computing resources are
   delivered in the form of Virtual Machines (VMs). A VM provides the
   user with a view of a dedicated server. The user is capable of managing
   the system within a VM and deploying the required software. Examples
   of IaaS are Amazon EC2 [1]_ and Google Compute Engine [2]_.
-  *Platform as a Service (PaaS)*: access to the resources is provided
   in the form of an Application Programming Interface (API) that is
   used for application development and deployment. In this model, the
   user does not have direct access to the system resources; rather,
   the resource allocation to applications is automatically managed by
   the platform. Examples of PaaS are Google App Engine [3]_ and
   Microsoft Azure [4]_.
-  *Software as a Service (SaaS)*: application-level software services
   are provided to the users on a subscription basis over the Internet.
   Examples of SaaS are Salesforce.com [5]_ and applications from the
   Amazon Web Services Marketplace [6]_.

In this work, we focus on the low level service model – IaaS. Apart from
the service models, Cloud computing services are distinguished according
to their deployment models. There are three basic deployment models:

-  *Public Cloud*: computing resources are provided publicly over the
   Internet based on a pay-per-use model.
-  *Private Cloud*: the Cloud infrastructure is owned by an
   organization, and hosted and operated internally.
-  *Hybrid Cloud*: computing resources are provided by a composition of
   a private and public Clouds.

Public Clouds, such as Amazon EC2, have initiated and driven the
industrial adoption of the Cloud computing model. However, the software
platforms used by public Cloud providers are usually proprietary, which
precludes their deployment on-premise. In other words, due to
closed-source software, it is not possible to deploy the same software
platform used, for example, by Amazon EC2 on a private computing
infrastructure. Fortunately, there exist several open source Cloud
platforms striving to address this issue, such as OpenStack, Eucalyptus,
OpenNebula, and CloudStack. These projects allow anyone not only to
deploy a private Cloud environment free of charge, but also to
contribute back to the development of the platform.

The aim of this work is to facilitate further development and adoption
of open source Cloud computing software by providing a step-by-step
guide to installing OpenStack on multiple compute nodes of a real-world
testbed using a set of shell scripts. The difference from the existing
tools for automated installation of OpenStack is that the purpose of
this work is not only obtaining a fully operational OpenStack Cloud
environment, but also learning the steps required to perform the
installation from the ground up and understanding the responsibilities
and interaction of the OpenStack components. This is achieved by
splitting the installation process into multiple logical steps, and
implementing each step as a separate shell script. In this paper, we go
through and discuss the complete sequence of steps required to
install OpenStack on top of CentOS 6.3 using the Kernel-based Virtual
Machine (KVM) as a hypervisor and GlusterFS as a distributed replicated
file system to enable live migration and provide fault tolerance. The
source code described in this paper is released under the Apache 2.0
License and is publicly available online [7]_.

In summary, this paper discusses and guides through the installation
process of the following software:

-  CentOS [8]_: a free Linux Operating System (OS) distribution derived
   from the Red Hat Enterprise Linux (RHEL) distribution.
-  GlusterFS [9]_: a distributed file system providing shared replicated
   storage across multiple servers over Ethernet or Infiniband. Having a
   storage system shared between the compute nodes is a requirement for
   enabling live migration of VM instances. However, a centralized
   shared storage service, such as NAS, limits the scalability and leads
   to a single point of failure. In contrast, the advantages of a
   distributed file system solution, such as GlusterFS, are: (1) no
   single point of failure, which means that even if a server fails, the
   storage and data remain available due to automatic replication over
   multiple servers; (2) higher scalability, as Input/Output (I/O)
   operations are distributed across multiple servers; and (3) due to
   the data replication over multiple servers, if a data replica is
   available on the host, VM instances access the data locally rather
   than remotely over the network, improving the I/O performance.
-  KVM [10]_: a hypervisor providing full virtualization for Linux by
   leveraging the hardware-assisted virtualization support of the Intel
   VT and AMD-V chipsets; a simple way to check for this support on a
   host is shown after this list. The kernel component of KVM has been
   included in the mainline Linux kernel since version 2.6.20.
-  OpenStack [11]_: free open source IaaS Cloud computing software
   originally released by Rackspace and NASA under the Apache 2.0
   License in July 2010. The OpenStack project is currently led and
   managed by the OpenStack Foundation, which is “an independent body
   providing shared resources to help achieve the OpenStack Mission by
   Protecting, Empowering, and Promoting OpenStack software and the
   community around it, including users, developers and the entire
   ecosystem”. [12]_
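
As a quick, generic check (not part of the installation scripts), the
availability of hardware-assisted virtualization and of the KVM kernel
modules can be verified on each host as follows:

::

    # A non-zero count of vmx (Intel VT) or svm (AMD-V) CPU flags
    # indicates hardware virtualization support.
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # After KVM is installed, the kernel modules should be loaded.
    lsmod | grep kvm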

In the next section we give an overview of the OpenStack software, its
features, main components, and their interaction. In Section 3, we
briefly compare 4 open source Cloud computing platforms, namely
OpenStack, Eucalyptus, CloudStack, and OpenNebula. In Section 4, we
discuss the existing tools for automated installation of OpenStack and
the differences from our approach. In Section 5 we provide a detailed
description and discussion of the steps required to install OpenStack on
top of CentOS using KVM and GlusterFS. In Section 6, we conclude the
paper with a summary and discussion of future directions.

Overview of the OpenStack Cloud Platform
========================================

.. figure:: /beloglazov/openstack-centos-kvm-glusterfs/raw/master/doc/src/openstack-software-diagram.png
   :align: center
   :alt: A high level view of the OpenStack service interaction [3]

   A high level view of the OpenStack service interaction [3]

OpenStack is a free open source IaaS Cloud platform originally released
by Rackspace and NASA under the Apache 2.0 License in July 2010.
OpenStack controls and manages compute, storage, and network resources
aggregated from multiple servers in a data center. The system provides
the administrators and users with a web interface (dashboard) and APIs
compatible with Amazon EC2 that allow flexible on-demand provisioning of
resources. OpenStack also supports the Open Cloud Computing Interface
(OCCI) [13]_, an emerging standard defining IaaS APIs, which is
delivered through the Open Grid Forum (OGF) [14]_.

In April 2012, the project lead and management functions were
transferred to the newly formed OpenStack Foundation. The goals of the
foundation are to support an open development process and community
building, drive awareness and adoption, and encourage and maintain an
ecosystem of companies powered by the OpenStack software. The OpenStack
project is currently supported by more than 150 companies including AMD,
Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM and Yahoo!.

The OpenStack software is divided into several services shown in Figure
1 that through their interaction provide the overall system management
capabilities. The main services include the following:

-  *OpenStack Compute (Nova)*: manages the life cycle of VM instances
   from scheduling and resource provisioning to live migration and
   security rules. By leveraging the virtualization API provided by
   Libvirt [15]_, OpenStack Compute supports multiple hypervisors, such
   as KVM and Xen.
-  *OpenStack Storage*: provides block and object storage for use by VM
   instances. The block storage system allows the users to create block
   storage devices and dynamically attach and detach them from VM
   instances using the dashboard or API. In addition to block storage,
   OpenStack provides a scalable distributed object storage called
   Swift, which is also accessible through an API.
-  *OpenStack Networking*: provides API-driven network and IP address
   management capabilities. The system allows the users to create their
   own networks and assign static, floating, or dynamic IP addresses to
   VM instances.
-  *OpenStack Dashboard (Horizon)*: provides a web interface for the
   administrators and users to the system management capabilities, such
   as VM image management, VM instance life cycle management, and
   storage management.
-  *OpenStack Identity (Keystone)*: a centralized user account
   management service acting as an authentication and access control
   system. In addition, the service provides the access to a registry of
   the OpenStack services deployed in the data center and their
   communication endpoints.
-  *OpenStack Image (Glance)*: provides various VM image management
   capabilities, such as registration, delivery, and snapshotting. The
   service supports multiple VM image formats including Raw, AMI, VHD,
   VDI, qcow2, VMDK, and OVF.
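
For illustration, once such an environment is operational and the
administrative credentials are exported into the shell (e.g., by
sourcing the ``configrc`` file described later in this guide), the
services can be queried from the command line. The exact sub-commands
depend on the client versions installed; the following are typical for
the OpenStack releases of this period:

::

    keystone user-list    # user accounts known to Keystone
    nova image-list       # VM images registered with Glance
    nova list             # VM instances of the current tenant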

The OpenStack software is architected with the aim of decoupling the
services constituting the system. The services interact with each other
through the public APIs they provide, using Keystone as a registry for
obtaining the information about the communication endpoints. The
OpenStack Compute service, also referred to as Nova, is built on a
shared-nothing, messaging-based architecture, which allows running the
services on multiple servers. The services that compose Nova
communicate via the Advanced Message Queuing Protocol (AMQP) using
asynchronous calls to avoid blocking. More detailed information on the
installation and administration of OpenStack is given in the official
manuals [4], [5]. In the next section, we compare OpenStack
with the other major open source Cloud platforms.

Comparison of Open Source Cloud Platforms
=========================================

In this section, we briefly discuss and compare OpenStack with three
other major open source Cloud platforms, namely Eucalyptus, OpenNebula,
and CloudStack.

Eucalyptus [16]_ is an open source IaaS Cloud platform developed by
Eucalyptus Systems and released in March 2008 under the GPL v3 license.
Eucalyptus is an acronym for “Elastic Utility Computing Architecture for
Linking Your Programs To Useful Systems”. Prior to version 3.1,
Eucalyptus had two editions: open source, and enterprise, which included
extra features and commercial support. As of version 3.1, both the
editions have been merged into a single open source project. In March
2012, Eucalyptus and Amazon Web Services (AWS) announced a partnership
aimed at bringing and maintaining additional API compatibility between
the Eucalyptus platform and AWS, which will enable simpler workload
migration and deployment of hybrid Cloud environments [17]_. The
Eucalyptus platform is composed of the following 5 high-level
components, each of which is implemented as a standalone web service:

-  *Cloud Controller*: manages the underlying virtualized resources
   (servers, network, and storage) and provides a web interface and API
   compatible with Amazon EC2.
-  *Cluster Controller*: controls VMs running on multiple physical nodes
   and manages the virtual networking between VMs, and between VMs and
   external users.
-  *Walrus*: implements object storage accessible through an API
   compatible with Amazon S3.
-  *Storage Controller*: provides block storage that can be dynamically
   attached to VMs, which is managed via an API compatible with Amazon
   Elastic Block Storage (EBS).
-  *Node Controller*: controls the life cycle of VMs within a physical
   node using the functionality provided by the hypervisor.

OpenNebula [18]_ is an open source IaaS Cloud platform originally
established as a research project back in 2005 by Ignacio M. Llorente
and Rubén S. Montero. The software was first publicly released in March
2008 under the Apache 2.0 license. In March 2010, the authors of
OpenNebula founded C12G Labs, an organization aiming to provide
commercial support and services for the OpenNebula software. Currently,
the OpenNebula project is managed by C12G Labs. OpenNebula supports
several standard APIs, such as EC2 Query, OGF OCCI, and vCloud.
OpenNebula provides the following features and components:

-  *Users and Groups*: OpenNebula supports multiple user accounts and
   groups, various authentication and authorization mechanisms, as well
   as Access Control Lists (ACLs) allowing fine-grained permission
   management.
-  *Virtualization Subsystem*: communicates with the hypervisor
   installed on a physical host enabling the management and monitoring
   of the life cycle of VMs.
-  *Network Subsystem*: manages virtual networking provided to
   interconnect VMs, supports VLANs and Open vSwitch.
-  *Storage Subsystem*: supports several types of data stores for
   storing VM images.
-  *Clusters*: are pools of hosts that share data stores and virtual
   networks, they can be used for load balancing, high availability, and
   high performance computing.

CloudStack [19]_ is an open source IaaS Cloud platform originally
developed by Cloud.com. In May 2010, most of the software was released
under the GPL v3 license, while 5% of the code was kept proprietary. In
July 2011, Citrix purchased Cloud.com and in August 2011 released the
remaining code of CloudStack under the GPL v3 license. In April 2012,
Citrix donated CloudStack to the Apache Software Foundation, while
changing the license to Apache 2.0. CloudStack implements the Amazon EC2
and S3 APIs, as well as the vCloud API, in addition to its own API.
CloudStack has a hierarchical structure, which enables management of
multiple physical hosts from a single interface. The structure includes
the following components:

-  *Availability Zones*: represent geographical locations, which are
   used in the allocation of VM instances and data storage. An
   Availability Zone consists of at least one Pod, and Secondary
   Storage, which is shared by all Pods in the Zone.
-  *Pods*: are collections of hardware configured to form Clusters. A
   Pod can contain one or more Clusters, and a Layer 2 switch
   architecture, which is shared by all Clusters in that Pod.
-  *Clusters*: are groups of identical physical hosts running the same
   hypervisor. A Cluster has a dedicated Primary Storage device, where
   the VM instances are hosted.
-  *Primary Storage*: is unique to each Cluster and is used to host VM
   instances.
-  *Secondary Storage*: is used to store VM images and snapshots.

A comparison of the discussed Cloud platforms is summarized in Table 1.

+----------------+--------------+-------------+-------------+-------------+
|                | OpenStack    | Eucalyptus  | OpenNebula  | CloudStack  |
+================+==============+=============+=============+=============+
| Managed By     | OpenStack    | Eucalyptus  | C12G Labs   | Apache      |
|                | Foundation   | Systems     |             | Software    |
|                |              |             |             | Foundation  |
+----------------+--------------+-------------+-------------+-------------+
| License        | Apache 2.0   | GPL v3      | Apache 2.0  | Apache 2.0  |
+----------------+--------------+-------------+-------------+-------------+
| Initial        | October 2010 | May 2010    | March 2008  | May 2010    |
| Release        |              |             |             |             |
+----------------+--------------+-------------+-------------+-------------+
| OCCI           | Yes          | No          | Yes         | No          |
| Compatibility  |              |             |             |             |
+----------------+--------------+-------------+-------------+-------------+
| AWS            | Yes          | Yes         | Yes         | Yes         |
| Compatibility  |              |             |             |             |
+----------------+--------------+-------------+-------------+-------------+
| Hypervisors    | Xen, KVM,    | Xen, KVM,   | Xen, KVM,   | Xen, KVM,   |
|                | VMware       | VMware      | VMware      | VMware,     |
|                |              |             |             | Oracle VM   |
+----------------+--------------+-------------+-------------+-------------+
| Programming    | Python       | Java, C     | C, C++,     | Java        |
| Language       |              |             | Ruby, Java  |             |
+----------------+--------------+-------------+-------------+-------------+

Table: Comparison of OpenStack, Eucalyptus, OpenNebula, and CloudStack

Existing OpenStack Installation Tools
=====================================

There are several official OpenStack installation and administration
guides [5]. These are invaluable sources of information about OpenStack;
however, the official guides mainly focus on configuration under
Ubuntu, while the documentation for other Linux distributions, such as
CentOS, is incomplete or missing. In this work, we aim to close this gap
by providing a step-by-step guide to installing OpenStack on CentOS.
Another difference of the current guide from the official documentation
is that rather than describing a general installation procedure, we
focus on concrete and tested steps required to obtain an operational
OpenStack installation for our testbed. In other words, this guide can
be considered to be an example of how OpenStack can be deployed on a
real-world multi-node testbed.

One of the existing tools for automated installation of OpenStack is
DevStack [20]_. DevStack is distributed in the form of a single shell
script, which installs a complete OpenStack development environment. The
officially supported Linux distributions are Ubuntu 12.04 (Precise) and
Fedora 16. DevStack also comes with guides to installing OpenStack in a
VM, and on real hardware. The guides to installing OpenStack on hardware
include both single node and multi-node installations. One of the
drawbacks of the approach taken by DevStack is that in case of an error
during the installation process, the installation has to be restarted
from the beginning instead of just fixing the current step.

Another tool for automated installation of OpenStack is
dodai-deploy [21]_, which is described in the OpenStack Compute
Administration Manual [4]. dodai-deploy is a Puppet [22]_ service
running on all the nodes and providing a web interface for automated
installation of OpenStack. The service is developed and maintained to be
run on Ubuntu. Several steps are required to install and configure the
dodai-deploy service on the nodes. Once the service is started on the
head and compute nodes, it is possible to install and configure
OpenStack using the provided web interface or REST API.

The difference of our approach from both DevStack and dodai-deploy is
that instead of adding an abstraction layer and minimizing the number of
steps required to be followed by the user to obtain an operational
OpenStack installation, we aim to explicitly describe and perform every
installation step in the form of a separate shell script. This allows
the user to proceed slowly and customize individual steps when
necessary. The purpose of our approach is not just obtaining an up and
running OpenStack installation, but also learning the steps required to
perform the installation from the ground up and understanding the
responsibilities and interaction of the OpenStack components. Our
installation scripts have been developed and tested on CentOS, which is
a widely used server Linux distribution. Another difference of our
approach from both DevStack and dodai-deploy is that we also set up
GlusterFS to provide a distributed shared storage, which enables fault
tolerance and efficient live migration of VM instances.

Red Hat, a platinum member of the OpenStack Foundation, has announced
its commercial offering of OpenStack starting from the Folsom release
with availability in 2013 [23]_. From the announcement it appears
that the product will be delivered through the official repositories for
Red Hat Enterprise Linux 6.3 or higher, and will contain Red Hat’s
proprietary code providing integration with other Red Hat products, such
as Red Hat Enterprise Virtualization for managing virtualized data
centers and Red Hat Enterprise Linux. This announcement is a solid step
toward the adoption of OpenStack by enterprises requiring
commercial services and support.

Step-by-Step OpenStack Deployment
=================================

As mentioned earlier, the aim of this work is to detail the steps
required to perform a complete installation of OpenStack on multiple
nodes. We split the installation process into multiple subsequent
logical steps and provide a shell script for each of the steps. In this
section, we explain and discuss every step needed to be followed to
obtain a fully operational OpenStack installation on our testbed
consisting of 1 controller and 4 compute nodes. The source code of the
shell scripts described in this paper is released under the Apache 2.0
License and is publicly available online [24]_.

Hardware Setup
--------------

The testbed used for testing the installation scripts consists of the
following hardware:

-  1 x Dell Optiplex 745

   -  Intel(R) Core(TM) 2 CPU (2 cores, 2 threads) 6600 @ 2.40GHz
   -  2GB DDR2-667
   -  Seagate Barracuda 80GB, 7200 RPM SATA II (ST3808110AS)
   -  Broadcom 5751 NetXtreme Gigabit Controller

-  4 x IBM System x3200 M3

   -  Intel(R) Xeon(R) CPU (4 cores, 8 threads), X3460 @ 2.80GHz
   -  4GB DDR3-1333
   -  Western Digital 250 GB, 7200 RPM SATA II (WD2502ABYS-23B7A)
   -  Dual Gigabit Ethernet (2 x Intel 82574L Ethernet Controller)

-  1 x Netgear ProSafe 16-Port 10/100 Desktop Switch FS116

The Dell Optiplex 745 machine has been chosen to serve as a management
host running all the major OpenStack services. The management host is
referred to as the *controller* further in the text. The 4 IBM System
x3200 M3 servers are used as *compute hosts*, i.e. for hosting VM
instances.

Due to the specifics of our setup, the only machine connected to the
public network and the Internet is one of the IBM System x3200 M3
servers. This server is referred to as the *gateway*. The gateway is
connected to the public network via the ``eth0`` network interface.

All the machines form a local network connected via the Netgear FS116
network switch. The compute hosts are connected to the local network
through their ``eth1`` network interfaces. The controller is connected
to the local network through its ``eth0`` interface. To provide the
access to the public network and the Internet, the gateway performs
Network Address Translation (NAT) for the hosts from the local network.

Organization of the Installation Package
----------------------------------------

The project contains a number of directories, whose organization is
explained in this section. The ``config`` directory includes
configuration files, which are used by the installation scripts and
should be modified prior to the installation. The ``lib`` directory
contains utility scripts that are shared by the other installation
scripts. The ``doc`` directory comprises the source and compiled
versions of the documentation.

The remaining directories directly include the step-by-step installation
scripts. The names of these directories have a specific format. The
prefix (before the first dash) is the number denoting the order of
execution. For example, the scripts from the directory with the prefix
*01* must be executed first, followed by the scripts from the directory
with the prefix *02*, etc. The middle part of a directory name denotes
the purpose of the scripts in this directory. The suffix (after the last
dash) specifies the host, on which the scripts from this directory
should be executed. There are 4 possible values of the target host
suffix:

-  *all* – execute the scripts on all the hosts;
-  *compute* – execute the scripts on all the compute hosts;
-  *controller* – execute the scripts on the controller;
-  *gateway* – execute the scripts on the gateway.

For example, the first directory is named ``01-network-gateway``, which
means that (1) the scripts from this directory must be executed in the
first place; (2) the scripts are supposed to do a network set up; and
(3) the scripts must be executed only on the gateway. The name
``02-glusterfs-all`` means: (1) the scripts from this directory must be
executed after the scripts from ``01-network-gateway``; (2) the scripts
set up GlusterFS; and (3) the scripts must be executed on all the hosts.
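
To give a flavour of what the GlusterFS step involves, creating a
replicated volume across the compute hosts conceptually amounts to
commands like the following. This is only a sketch: the volume name,
replica count, and mount point are illustrative assumptions, and the
actual scripts in ``02-glusterfs-all`` may differ in these details.

::

    # On every host: start the GlusterFS daemon (CentOS 6, SysV init).
    service glusterd start

    # On one host: add the other hosts to the trusted storage pool.
    gluster peer probe compute2
    gluster peer probe compute3
    gluster peer probe compute4

    # Create a distributed replicated volume backed by the XFS bricks
    # (a replica count of 2 across 4 bricks is an illustrative choice).
    gluster volume create vm-storage replica 2 \
        compute1:/export/gluster compute2:/export/gluster \
        compute3:/export/gluster compute4:/export/gluster
    gluster volume start vm-storage

    # On each compute host: mount the volume, for example at the default
    # directory used by OpenStack Nova to store instances.
    mount -t glusterfs localhost:/vm-storage /var/lib/nova/instances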

The names of the installation scripts themselves follow a similar
convention. The prefix denotes the order, in which the scripts should be
run, while the remaining part of the name describes the purpose of the
script.

Configuration Files
-------------------

The ``config`` directory contains configuration files used by the
installation scripts. These configuration files should be modified prior
to running the installation scripts. The configuration files are
described below.

``configrc:``
    This file contains a number of environmental variables defining
    various aspects of OpenStack’s configuration, such as administration
    and service account credentials, as well as access points. The file
    must be “sourced” to export the variables into the current shell
    session. The file can be sourced directly by running:
    ``. configrc``, or using the scripts described later. A simple test
    to check whether the variables have been correctly exported is to
    ``echo`` any of the variables. For example, ``echo $OS_USERNAME``
    must output ``admin`` for the default configuration.
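
    For example, from the root directory of the installation package,
    the variables can be exported and checked as follows (assuming
    ``configrc`` resides in the ``config`` directory):

    ::

        . config/configrc      # export the OpenStack environment variables
        echo $OS_USERNAME      # prints "admin" for the default configuration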

``hosts:``
    This file contains a mapping between the IP addresses of the hosts
    in the local network and their host names. We apply the following
    host name convention: the compute hosts are named *computeX*, where
    *X* is replaced by the number of the host. According the described
    hardware setup, the default configuration defines 4 compute hosts:
    ``compute1`` (192.168.0.1), ``compute2`` (192.168.0.2), ``compute3``
    (192.168.0.3), ``compute4`` (192.168.0.4); and 1 ``controller``
    (192.168.0.5). As mentioned above, in our setup one of the compute
    hosts is connected to the public network and acts as a gateway. We
    assign to this host the host name ``compute1``, and also alias it as
    ``gateway``.
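
    With the default configuration described above, the contents of the
    ``hosts`` file are expected to look similar to the following:

    ::

        192.168.0.1    compute1 gateway
        192.168.0.2    compute2
        192.168.0.3    compute3
        192.168.0.4    compute4
        192.168.0.5    controller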

``ntp.conf:``
    This file contains a list of Network Time Protocol (NTP) servers to
    be used by all the hosts. It is important to set accessible servers,
    since time synchronization is important for OpenStack services to
    interact correctly. By default, this file defines servers used
    within the University of Melbourne. It is advised to replace the
    default configuration with a list of preferred servers.
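
    For instance, pointing the hosts to public NTP pool servers (the
    specific pool hosts below are just an example) could look as
    follows:

    ::

        server 0.pool.ntp.org
        server 1.pool.ntp.org
        server 2.pool.ntp.org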

It is important to replace the default configuration defined in the
described configuration files, since the default configuration is
tailored to the specific setup of our testbed.

Installation Procedure
----------------------

CentOS
~~~~~~

The installation scripts have been tested with CentOS 6.3, which has
been installed on all the hosts. The CentOS installation mainly follows
the standard process described in detail in the Red Hat Enterprise Linux
6 Installation Guide [6]. The minimal configuration option is
sufficient, since all the required packages can be installed later when
needed. The steps of the installation process that differ from the
default are discussed in this section.

Network Configuration.
^^^^^^^^^^^^^^^^^^^^^^

The simplest way to configure the network is during the OS installation
process. As mentioned above, in our setup, the gateway is connected to
two networks: to the public network through the ``eth0`` interface; and
to the local network through the ``eth1`` interface. Since in our setup
the public network configuration can be obtained from a DHCP server, in
the configuration of the ``eth0`` interface it is only required to
enable the automatic connection by enabling the “Connect Automatically”
option. We use static configuration for the local network; therefore,
``eth1`` has to be configured manually. Apart from enabling the “Connect
Automatically” option, it is necessary to configure IPv4 by adding an IP
address and netmask. According to the configuration defined in the
``hosts`` file described above, we assign 192.168.0.1/24 to the gateway.

One of the differences in the network configuration of the other compute
hosts (``compute2``, ``compute3``, and ``compute4``) from the gateway is
that ``eth0`` should be kept disabled, as it is unused. The ``eth1``
interface should be enabled by turning on the “Connect Automatically”
option. The IP address and netmask for ``eth1`` should be set to
192.168.0.\ *X*/24, where *X* is replaced by the compute host number.
The gateway for the compute hosts should be set to 192.168.0.1, which is
the IP address of the gateway. The controller is configured similarly to
the compute hosts with the only difference that the configuration should
be done for ``eth0`` instead of ``eth1``, since the controller has only
one network interface.
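
If the network is configured after the OS installation rather than
through the installer, the equivalent static configuration for, e.g.,
``compute2`` can be expressed in
``/etc/sysconfig/network-scripts/ifcfg-eth1`` (a minimal sketch
following the conventions above):

::

    DEVICE=eth1
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.0.2
    NETMASK=255.255.255.0
    GATEWAY=192.168.0.1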

Hard Drive Partitioning.
^^^^^^^^^^^^^^^^^^^^^^^^

The hard drive partitioning scheme is the same for all the compute
hosts, but differs for the controller. Table 2 shows the partitioning
scheme for the compute hosts. ``vg_base`` is a volume group comprising
the standard OS partitions: ``lv_root``, ``lv_home`` and ``lv_swap``.
``vg_gluster`` is a special volume group containing a single
``lv_gluster`` partition, which is dedicated to serve as a GlusterFS
brick. The ``lv_gluster`` logical volume is formatted using the
XFS [25]_ file system, as recommended for GlusterFS bricks.

+------------------------+-------------+-----------------------+------------+
| Device                 | Size (MB)   | Mount Point / Volume  | Type       |
+========================+=============+=======================+============+
| *LVM Volume Groups*    |             |                       |            |
+------------------------+-------------+-----------------------+------------+
|   vg\_base             | 20996       |                       |            |
+------------------------+-------------+-----------------------+------------+
|     lv\_root           | 10000       | /                     | ext4       |
+------------------------+-------------+-----------------------+------------+
|     lv\_swap           | 6000        |                       | swap       |
+------------------------+-------------+-----------------------+------------+
|     lv\_home           | 4996        | /home                 | ext4       |
+------------------------+-------------+-----------------------+------------+
|   vg\_gluster          | 216972      |                       |            |
+------------------------+-------------+-----------------------+------------+
|     lv\_gluster        | 216972      | /export/gluster       | xfs        |
+------------------------+-------------+-----------------------+------------+
| *Hard Drives*          |             |                       |            |
+------------------------+-------------+-----------------------+------------+
|   sda                  |             |                       |            |
+------------------------+-------------+-----------------------+------------+
|     sda1               | 500         | /boot                 | ext4       |
+------------------------+-------------+-----------------------+------------+
|     sda2               | 21000       | vg\_base              | PV (LVM)   |
+------------------------+-------------+-----------------------+------------+
|     sda3               | 216974      | vg\_gluster           | PV (LVM)   |
+------------------------+-------------+-----------------------+------------+

Table: The partitioning scheme for the compute hosts
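
When the partitioning is performed manually rather than through the
installer, the GlusterFS brick volume from Table 2 can be created and
formatted roughly as follows (a sketch; the device name ``/dev/sda3``
and the use of all remaining space follow the table above):

::

    pvcreate /dev/sda3
    vgcreate vg_gluster /dev/sda3
    lvcreate -l 100%FREE -n lv_gluster vg_gluster
    mkfs.xfs /dev/vg_gluster/lv_gluster
    mkdir -p /export/gluster
    echo "/dev/vg_gluster/lv_gluster /export/gluster xfs defaults 0 0" >> /etc/fstab
    mount /export/gluster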

Table 3 shows the partitioning scheme for the controller. It does not
include a ``vg_gluster`` volume group since the controller is not going
to be a part of the GlusterFS volume. However, the scheme includes two
new important volume groups: ``nova-volumes`` and ``vg_images``. The
``nova-volumes`` volume group is used by OpenStack Nova to allocate
volumes for VM instances. This volume group is managed by Nova;
therefore, there is no need to create logical volumes manually. The
``vg_images`` volume group and its ``lv_images`` logical volume are
devoted to storing VM images by OpenStack Glance. The mount point for
``lv_images`` is ``/var/lib/glance/images``, which is the default
directory used by Glance to store VM image files.

+----------------------+-------------+------------------------+------------+
| Device               | Size (MB)   | Mount Point / Volume   | Type       |
+======================+=============+========================+============+
| *LVM Volume Groups*  |             |                        |            |
+----------------------+-------------+------------------------+------------+
|   nova-volumes       | 29996       |                        |            |
+----------------------+-------------+------------------------+------------+
|     Free             | 29996       |                        |            |
+----------------------+-------------+------------------------+------------+
|   vg\_base           | 16996       |                        |            |
+----------------------+-------------+------------------------+------------+
|     lv\_root         | 10000       | /                      | ext4       |
+----------------------+-------------+------------------------+------------+
|     lv\_swap         | 2000        |                        | swap       |
+----------------------+-------------+------------------------+------------+
|     lv\_home         | 4996        | /home                  | ext4       |
+----------------------+-------------+------------------------+------------+
|   vg\_images         | 28788       |                        |            |
+----------------------+-------------+------------------------+------------+
|     lv\_images       | 28788       | /var/lib/glance/images | ext4       |
+----------------------+-------------+------------------------+------------+
| *Hard Drives*        |             |                        |            |
+----------------------+-------------+------------------------+------------+
|   sda                |             |                        |            |
+----------------------+-------------+------------------------+------------+
|     sda1             | 500         | /boot                  | ext4       |
+----------------------+-------------+------------------------+------------+
|     sda2             | 17000       | vg\_base               | PV (LVM)   |
+----------------------+-------------+------------------------+------------+
|     sda3             | 30000       | nova-volumes           | PV (LVM)   |
+----------------------+-------------+------------------------+------------+
|     sda4             | 28792       |                        | Extended   |
+----------------------+-------------+------------------------+------------+
|       sda5           | 28788       | vg\_images             | PV (LVM)   |
+----------------------+-------------+------------------------+------------+

Table: The partitioning scheme for the controller

Network Gateway
~~~~~~~~~~~~~~~

Once CentOS is installed on all the machines, the next step is to
configure NAT on the gateway to enable the Internet access on all the
hosts. First, it is necessary to check whether the Internet is available
on the gateway itself. If the Internet is not available, the problem
might be in the configuration of ``eth0``, the network interface
connected to the public network in our setup.
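
For reference, enabling NAT for the local network on the gateway
conceptually amounts to IP forwarding and masquerading rules similar to
the following. This is a sketch of the general approach; the actual
``01-network-gateway`` scripts may implement and persist the
configuration differently.

::

    # Enable IPv4 forwarding for the current session.
    sysctl -w net.ipv4.ip_forward=1

    # Masquerade traffic from the local network (eth1) leaving via the
    # public interface (eth0), and allow forwarding between them.
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state \
        --state RELATED,ESTABLISHED -j ACCEPT

    # Persist the firewall rules across reboots (CentOS 6).
    service iptables save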

In all the following steps, it is assumed that the user logged in is
``root``. If the Internet is available on the gateway, it is necessary
to install the Git [26]_ version control client to be able to clone the
repository containing the installation scripts. This can be done using
``yum``, the default package manager in CentOS, as follows:

::

    yum install -y git

Next, the repository can be cloned using the following command:

::

    git clone \
       https://github.com/beloglazov/openstack-centos-kvm-glusterfs.git

Now, we can continue the installation using the scripts contained in the
cloned Git repository. As described above, the starting point is the
directory called ``01-network-gateway``.

::

    cd openstack-centos-kvm-glusterfs/01-network-gateway

All the scripts described below can be run by executing
``./<script name>`` from the corresponding directory.