The Pharos specification defines the OPNFV test environment, in which the OPNFV platform can be deployed and tested.
- Provides a secure, scalable, standard and HA environment
- Supports full deployment lifecycle (this requires a bare metal environment)
- Supports functional and performance testing
- Provides common tooling and test scenarios (including test cases and workloads) available to the community
- Provides mechanisms and procedures for secure remote access to the test environment
Virtualized environments are useful but do not provide a fully featured deployment/test capability.
The high-level architecture may be summarized as follows:

.. image:: images/pharos-archi1.jpg
Constraints of a Pharos compliant OPNFV test-bed environment
------------------------------------------------------------
- One CentOS 7 Jump Server on which the virtualized OpenStack/OPNFV installer runs
- Desired installer - may be Fuel, Foreman, Juju, etc.
- 2 - 5 compute / controller nodes (`BGS <https://wiki.opnfv.org/get_started/get_started_work_environment>`_ requires 5 nodes)
- Network topology allowing for LOM, Admin, Public, Private, and Storage Networks
- Target system state includes default software components, network configuration, and storage requirements: https://wiki.opnfv.org/get_started/get_started_system_state
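As a concrete illustration only, a minimal POD inventory matching these constraints might be captured as below; all host names, addresses, and field names are hypothetical and not part of the Pharos spec.

.. code-block:: python

    # Hypothetical inventory for a minimal Pharos-compliant POD; names and
    # addresses are illustrative only.
    POD_INVENTORY = {
        "jump_server": {
            "os": "CentOS 7",
            "installer_vm": "fuel",  # or foreman, juju, ...
            "admin_ip": "10.20.0.2",
        },
        # BGS requires 5 bare metal controller/compute nodes
        "nodes": [{"name": f"node{i}", "roles": []} for i in range(1, 6)],
        "networks": ["lom", "admin", "public", "private", "storage"],
    }

    assert len(POD_INVENTORY["nodes"]) >= 5, "BGS deployments need 5 nodes"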
The Release 1 specification is modeled on Arno:
* First draft of environment for BGS: https://wiki.opnfv.org/get_started/get_started_work_environment
* Fuel environment: https://wiki.opnfv.org/get_started/networkingblueprint
* Foreman environment: https://wiki.opnfv.org/get_started_experiment1#topology
* Intel Xeon E5-2600v2 Series (Ivy Bridge and newer, or similar)
Local Storage Configuration:

The following describes the minimum for the Pharos spec, which is designed to provide enough capacity for a reasonably functional environment. Additional and/or faster disks are nice to have and may produce better results.
* Disks: 2 x 1TB + 1 x 100GB SSD
* The first 1TB HDD should be used for OS and additional software/tool installation
* The second 1TB HDD should be configured for CEPH object storage
* The 100GB SSD should be used as the CEPH journal
* Performance testing requires a mix of compute nodes with CEPH (Swift + Cinder) and without CEPH storage
* Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
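A quick sanity check of a node against this minimum layout might look like the following sketch; the device names (sda/sdb/sdc) are assumptions, and sizes are read from Linux's /sys/block, which reports 512-byte sectors.

.. code-block:: python

    # Sketch: verify a node meets the minimum Pharos disk layout
    # (2 x 1TB HDD + 1 x 100GB SSD). Device names are examples.
    import pathlib

    def disk_size_gb(dev: str) -> float:
        sectors = int(pathlib.Path(f"/sys/block/{dev}/size").read_text())
        return sectors * 512 / 1e9  # /sys/block sizes are 512-byte sectors

    minimums = {
        "sda": 1000,  # OS + software/tools (1TB HDD)
        "sdb": 1000,  # CEPH object storage (1TB HDD)
        "sdc": 100,   # CEPH journal (100GB SSD)
    }

    for dev, min_gb in minimums.items():
        actual = disk_size_gb(dev)
        ok = "OK" if actual >= min_gb else "TOO SMALL"
        print(f"{dev}: {actual:.0f} GB (minimum {min_gb} GB) -> {ok}")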
* A single power supply is acceptable (redundant power is not required but nice to have)
Jump Server Installation

* Installer (Foreman, Fuel, ...) in a VM

See `Jump Server Installation <https://wiki.opnfv.org/jump_server_installation_guide>`_ for details.
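For illustration, bringing up the installer VM on the jump server with virt-install could look like the sketch below; the ISO path, VM sizing, and bridge name are assumptions, and the wiki guide above remains the authoritative procedure.

.. code-block:: python

    # Sketch: create the installer VM (Fuel, Foreman, ...) on the CentOS 7
    # jump server. Paths, sizes, and bridge names are examples only.
    import subprocess

    subprocess.run([
        "virt-install",
        "--name", "opnfv-installer",
        "--memory", "8192",                    # MB
        "--vcpus", "4",
        "--disk", "size=100",                  # GB, on the first 1TB HDD
        "--cdrom", "/opt/isos/installer.iso",  # e.g. a Fuel ISO
        "--network", "bridge=br-admin",        # bridged to the Admin/PXE network
        "--os-variant", "centos7.0",
    ], check=True)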
Test Tools - see `functest <http://artifacts.opnfv.org/functest/docs/functest.html>`_
Controller nodes - these are bare metal servers

Compute nodes - these are bare metal servers
**Infrastructure naming conventions / recommendations**

The Pharos specification provides recommendations for default logins and naming conventions.

See `Infrastructure naming conventions <https://wiki.opnfv.org/pharos/pharos_naming>`_
- Remote access is required for:

  1. Developers to access deploy/test environments (credentials to be issued per POD / user)
  2. Connection of each environment to the Jenkins master hosted by the Linux Foundation for automated deployment and test

- VPN is optional and dependent on company security rules (out of Pharos scope)
- POD access rules / restrictions:

  - Refer to the individual test-bed documentation, as each company may have different access rules and procedures

- The basic requirement is for SSH sessions to be established (initially on the jump server)
- The majority of packages installed on a system (tools or applications) will be pulled from an external storage solution, so this type of access should be solved in a general way for the projects
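As an illustration of the basic SSH requirement, a developer session to a POD's jump server might be established as below; the host name, user, and key path are placeholders, and paramiko is simply one common SSH library, not something mandated by Pharos.

.. code-block:: python

    # Sketch: open an SSH session to a POD jump server with per-user
    # credentials. Host, user, and key file are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
    client.connect("pod1-jump.example.com", username="opnfv",
                   key_filename="/home/dev/.ssh/id_rsa")

    _, stdout, _ = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()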
Lights-out Management:

- Out-of-band management for power on/off/reset and bare-metal provisioning
- Access to the server is through a lights-out-management tool and/or a serial console
- Intel lights-out ⇒ RMM http://www.intel.com/content/www/us/en/server-management/intel-remote-management-module.html
- HP lights-out ⇒ iLO http://www8.hp.com/us/en/products/servers/ilo/index.html
- Cisco lights-out ⇒ UCS https://developer.cisco.com/site/ucs-dev-center/index.gsp
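Whatever the vendor tool, day-to-day power and PXE-boot control is commonly scripted over IPMI; the sketch below uses ipmitool, with the BMC address and credentials as placeholders.

.. code-block:: python

    # Sketch: out-of-band power control and PXE boot selection via ipmitool.
    # The BMC address and credentials are placeholders.
    import subprocess

    def ipmi(bmc: str, *args: str) -> None:
        subprocess.run(["ipmitool", "-I", "lanplus", "-H", bmc,
                        "-U", "admin", "-P", "secret", *args], check=True)

    bmc = "10.20.0.101"                     # a node's lights-out interface
    ipmi(bmc, "chassis", "bootdev", "pxe")  # next boot from PXE (provisioning)
    ipmi(bmc, "power", "cycle")             # power-cycle the node
    ipmi(bmc, "power", "status")            # confirm the power state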
Linux Foundation - VPN service for accessing Lights-Out Management (LOM) infrastructure for the UCS-M hardware

- People with admin access to LF infrastructure:
3. daniel.smith@ericsson.com
5. fatih.degirmenci@ericsson.com
6. fbrockne@cisco.com
7. jonas.bjurel@ericsson.com
8. jose.lausuch@ericsson.com
9. joseph.gasparakis@intel.com
10. morgan.richomme@orange.com
11. pbandzi@cisco.com
12. phladky@cisco.com
13. stefan.k.berg@ericsson.com
14. szilard.cserey@ericsson.com
15. trozet@redhat.com
- The people who require VPN access must have a valid PGP key bearing a valid signature from one of these three people. When issuing OpenVPN credentials, LF will send TLS certificates and 2-factor authentication tokens, encrypted to each recipient's PGP key.
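For illustration, encrypting a credentials bundle to a recipient's PGP key with GnuPG could look like the sketch below; the key ID and file names are placeholders.

.. code-block:: python

    # Sketch: encrypt VPN credentials to a recipient's PGP key with GnuPG.
    # The key ID and file names are placeholders.
    import subprocess

    recipient = "0xDEADBEEF"  # recipient's PGP key ID (must bear a valid signature)
    subprocess.run([
        "gpg", "--encrypt",
        "--recipient", recipient,
        "--output", "openvpn-credentials.tar.gz.gpg",
        "openvpn-credentials.tar.gz",
    ], check=True)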
* 24 or 48 Port TOR Switch
* NICs - 1GE, 10GE - per server, can be on-board or PCI-e
* Connectivity for each data/control network is through a separate NIC. This simplifies switch management but requires more NICs on the server and more switch ports
* The lights-out network can be shared with the Admin/Management network
* Option I: 4x1G Control, 2x40G Data, 48 Port Switch

  * 1 x 1G for ILMI (Lights-out Management)
  * 1 x 1G for Admin/PXE boot
  * 1 x 1G for control plane connectivity
  * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, high bandwidth testing)

* Option II: 1x1G Control, 2x40G (or 10G) Data, 24 Port Switch

  * Connectivity to all networks is through VLANs on the Control NIC; the Data NIC carries VNF and storage traffic, segmented through VLANs (see the sketch after this list)

* Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch

  * The Data NIC carries VNF traffic; the Storage NIC carries control plane and storage traffic, segmented through VLANs (separating host traffic from VNF traffic)
  * 1 x 1G for Admin/PXE boot
  * 2 x 10G for control plane connectivity/storage
  * 2 x 40G (or 10G) for data network
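To make the VLAN trunking in Options II and III concrete, the sketch below creates Linux VLAN subinterfaces on a single control NIC; the interface name and VLAN IDs are examples, not Pharos-mandated values.

.. code-block:: python

    # Sketch: trunk several Pharos networks over one NIC with Linux VLAN
    # subinterfaces. The NIC name and VLAN IDs are examples only.
    import subprocess

    def sh(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    nic = "eno1"
    vlans = {"admin": 110, "private": 120, "storage": 130}

    for name, vid in vlans.items():
        dev = f"{nic}.{vid}"  # e.g. eno1.110 carries the admin network
        sh("ip", "link", "add", "link", nic, "name", dev,
           "type", "vlan", "id", str(vid))
        sh("ip", "link", "set", dev, "up")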
- Needs specification
- Subnets and VLANs (we want to standardize these, but they may be constrained by existing lab setups or rules)
- Types of networks - lights-out, public, private, admin, storage
- There may be special network requirements for performance-related projects
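One way to express a candidate standard addressing plan is with Python's ipaddress module; the 10.20.0.0/16 supernet and the per-network /24 allocation below are assumptions for illustration, not a Pharos mandate.

.. code-block:: python

    # Sketch: carve per-network /24 subnets for one POD out of a lab supernet.
    # The supernet and the network ordering are illustrative only.
    import ipaddress

    supernet = ipaddress.ip_network("10.20.0.0/16")
    names = ["lights-out", "admin", "public", "private", "storage"]

    plan = dict(zip(names, supernet.subnets(new_prefix=24)))
    for name, net in plan.items():
        print(f"{name:>10}: {net}")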
.. image:: images/bridge1.png

Controller node bridge topology overview

.. image:: images/bridge2.png

Compute node bridge topology overview
**Network Diagram**

The Pharos architecture may be described as follows:

.. image:: images/opnfv-pharos-diagram-v01.jpg

Figure 1: Standard Deployment Environment
Sample Network Drawings
-----------------------
Files for documenting lab network layout were contributed in Visio VSDX format, compressed as a ZIP file. Below is a sample of what the Visio drawing looks like.

Download the Visio ZIP file here: `opnfv-example-lab-diagram.vsdx.zip <https://wiki.opnfv.org/_media/opnfv-example-lab-diagram.vsdx.zip>`_
.. image:: images/opnfv-example-lab-diagram.png
FYI: `Here <http://www.opendaylight.org/community/community-labs>`_ is what the OpenDaylight lab wiki pages look like.