createrapid.py is now using heat templates yaml 77/70177/2
authorLuc Provoost <luc.provoost@intel.com>
Mon, 11 May 2020 09:06:17 +0000 (05:06 -0400)
committerLuc Provoost <luc.provoost@intel.com>
Fri, 15 May 2020 09:55:09 +0000 (11:55 +0200)
Example yaml files have been added to the repo. Please check the
README, which explains the output section requirements for these yaml
files. There is also a new file (config_file) that specifies which yaml
files to use. Multiple dataplane interfaces per VM can now be specified
and will appear in the <STACK>.env file. An error in setting the packet
size has been fixed (see set_udp_packet_size for packet size setting details)

Change-Id: Ie89a4940521dac7dd3652acca477739abb9f5497
Signed-off-by: Luc Provoost <luc.provoost@intel.com>
17 files changed:
VNFs/DPPD-PROX/helper-scripts/rapid/README
VNFs/DPPD-PROX/helper-scripts/rapid/check_prox_system_setup.sh
VNFs/DPPD-PROX/helper-scripts/rapid/config_file [new file with mode: 0644]
VNFs/DPPD-PROX/helper-scripts/rapid/createrapid.py
VNFs/DPPD-PROX/helper-scripts/rapid/deploycentostools.sh
VNFs/DPPD-PROX/helper-scripts/rapid/irq.test
VNFs/DPPD-PROX/helper-scripts/rapid/openstack-rapid.yaml [new file with mode: 0644]
VNFs/DPPD-PROX/helper-scripts/rapid/params_rapid.yaml [new file with mode: 0644]
VNFs/DPPD-PROX/helper-scripts/rapid/prox_ctrl.py
VNFs/DPPD-PROX/helper-scripts/rapid/rapid-openstack-server.yaml [new file with mode: 0644]
VNFs/DPPD-PROX/helper-scripts/rapid/rapid_flowsizetest.py
VNFs/DPPD-PROX/helper-scripts/rapid/rapid_generator_machine.py
VNFs/DPPD-PROX/helper-scripts/rapid/rapid_log.py
VNFs/DPPD-PROX/helper-scripts/rapid/rapid_parser.py
VNFs/DPPD-PROX/helper-scripts/rapid/runrapid.py
VNFs/DPPD-PROX/helper-scripts/rapid/sharkproxlog.sh
VNFs/DPPD-PROX/helper-scripts/rapid/stackdeployment.py [new file with mode: 0755]

index b88bd7e..ab6e44f 100644 (file)
@@ -76,20 +76,36 @@ https://www.packer.io/docs/builders/openstack.html
 Note that this procedure is not only installing the necessary tools to run PROX,
 but also does some system optimizations (tuned). Check deploycentostools.sh for more details.
 
-Now you can run the createrapid.py file. Use help for more info on the usage:
-  # ./createrapid.py --help
-
-createrapid.py will use the OpenStack CLI to create the flavor, key-pair, network, image,
-servers, ...
-It will create a <STACK>.env file containing all info that will be used by runrapid.py
-to actually run the tests. Logging can be found in the CREATE<STACK>.log file
-You can use floating IP addresses by specifying the floating IP network
---floating_network NETWORK
-or directly connect through the INTERNAL_NETWORK by using the following parameter:
---floating_network NO
-/etc/resolv.conf will contain DNS info from the "best" interface. Since we are
-deploying VMs with multiple interface on different networks, this info might be
-taken from the "wrong" network (e.g. the dataplane network).
+Now you need to create a stack that will deploy the PROX VMs using the PROX
+image built in the previous step. The stack needs to have an output section
+with the following outputs:
+outputs:
+  number_of_servers:
+    value: 
+      - <NUMBER_OF_SERVERS>   # A list of <NUMBER_OF_SERVERS>
+  server_name:
+    value: 
+      - - <SERVER_NAME>       # A list containing a list of <SERVER_NAME>
+  data_plane_ips:
+    value: 
+      - - <DATA_PLANE_IPS>    # A list containing a list of <DATA_PLANE_IPS>
+  data_plane_macs:
+    value: 
+      - - <DATA_PLANE_MACS>   # A list containing a list of <DATA_PLANE_MACS>
+  mngmt_ips:
+    value: 
+      - - <MNGMT_IP>          # A list containing a list of <MNGMT_IP>
+where
+    * <NUMBER_OF_SERVERS> is an int
+    * <SERVER_NAME> is a string
+    * <DATA_PLANE_IPS> is a list of strings
+    * <DATA_PLANE_MACS> is a list of strings
+    * <MNGMT_IP> is a string
+createrapid.py takes its input from config_file and creates an ssh keypair
+and a stack (if they do not already exist). The tool uses the yaml files
+specified in the config_file and creates a <STACK>.env file containing the
+input used by runrapid.py.
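As a concrete illustration of the required output section, here is a hypothetical filled-in example for a single server group of two VMs with one dataplane interface each. The values are borrowed from the sample env file further below; the nesting follows the spec above (per-server dataplane entries are themselves lists, since multiple dataplane interfaces per VM are supported):

```yaml
outputs:
  number_of_servers:
    value:
      - 2
  server_name:
    value:
      - - rapid-VM1
        - rapid-VM2
  data_plane_ips:
    value:
      - - [ "10.10.10.4" ]
        - [ "10.10.10.7" ]
  data_plane_macs:
    value:
      - - [ "fa:16:3e:25:be:25" ]
        - [ "fa:16:3e:72:bf:e8" ]
  mngmt_ips:
    value:
      - - 10.25.1.109
        - 10.25.1.110
```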
 
 Now you can run the runrapid.py file. Use help for more info on the usage:
   # ./runrapid.py --help
@@ -114,8 +130,6 @@ openstack subnet create --network  fast-network  --subnet-range 20.20.20.0/24 --
 openstack port create --network fast-network --vnic-type direct --fixed-ip subnet=fast-subnet Port1
 openstack port create --network fast-network --vnic-type direct --fixed-ip subnet=fast-subnet Port2
 openstack port create --network fast-network --vnic-type direct --fixed-ip subnet=fast-subnet Port3
-Make sure to use the network and subnet in the createrapid parameters list. Port1, Port2 and Port3
-are being used in the *.env file.
 
 Note when doing tests using the gateway functionality on OVS:
 When a GW VM is sending packets on behalf of another VM (e.g. the generator), we need to make sure the OVS
@@ -130,9 +144,9 @@ neutron port-update xxxxxx --port_security_enabled=False
 
 An example of the env file generated by createrapid.py can be found below.
 Note that this file can be created manually in case the stack is created in a
-different way (not using the createrapid.py). This can be useful in case you are
-not using OpenStack as a VIM or when using special configurations that cannot be
-achieved using createrapid.py. Fields needed for runrapid are:
+different way than what is described in this text. This can be useful in case
+you are not using OpenStack as a VIM or when using special configurations that
+cannot be achieved using createrapid.py. Fields needed for runrapid are:
 * all info in the [Mx] sections
 * the key information in the [ssh] section
 * the total_number_of_vms information in the [rapid] section
@@ -145,20 +159,20 @@ total_number_of_machines = 3
 [M1]
 name = rapid-VM1
 admin_ip = 10.25.1.109
-dp_ip = 10.10.10.4
-dp_mac = fa:16:3e:25:be:25
+dp_ip1 = 10.10.10.4
+dp_mac1 = fa:16:3e:25:be:25
 
 [M2]
 name = rapid-VM2
 admin_ip = 10.25.1.110
-dp_ip = 10.10.10.7
-dp_mac = fa:16:3e:72:bf:e8
+dp_ip1 = 10.10.10.7
+dp_mac1 = fa:16:3e:72:bf:e8
 
 [M3]
 name = rapid-VM3
 admin_ip = 10.25.1.125
-dp_ip = 10.10.10.15
-dp_mac = fa:16:3e:69:f3:e7
+dp_ip1 = 10.10.10.15
+dp_mac1 = fa:16:3e:69:f3:e7
 
 [ssh]
 key = prox.pem
@@ -167,11 +181,3 @@ user = centos
 [Varia]
 vim = OpenStack
 stack = rapid
-vms = rapid.vms
-image = rapidVM
-image_file = rapidVM.qcow2
-dataplane_network = dataplane-network
-subnet = dpdk-subnet
-subnet_cidr = 10.10.10.0/24
-internal_network = admin_internal_net
-floating_network = admin_floating_net
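With the new per-interface numbering shown above (dp_ip1, dp_mac1, ...), multiple dataplane interfaces per VM can be read back generically. A minimal sketch, not part of the repo, assuming only that the env file stays plain INI as in the example above:

```python
# Hedged sketch: read the numbered dataplane interfaces back out of a
# <STACK>.env file. dp_ip1/dp_mac1, dp_ip2/dp_mac2, ... are collected
# until the next index is missing.
import configparser

ENV_TEXT = """\
[rapid]
total_number_of_machines = 3

[M1]
name = rapid-VM1
admin_ip = 10.25.1.109
dp_ip1 = 10.10.10.4
dp_mac1 = fa:16:3e:25:be:25
"""

config = configparser.RawConfigParser()
config.read_string(ENV_TEXT)

def dataplane_interfaces(section):
    """Return [(ip, mac), ...] for all numbered dataplane interfaces."""
    interfaces = []
    index = 1
    while config.has_option(section, 'dp_ip%d' % index):
        interfaces.append((config.get(section, 'dp_ip%d' % index),
                           config.get(section, 'dp_mac%d' % index)))
        index += 1
    return interfaces

print(dataplane_interfaces('M1'))  # [('10.10.10.4', 'fa:16:3e:25:be:25')]
```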
index ad297b4..8621d1f 100755 (executable)
@@ -26,6 +26,8 @@ then
                 case $line in
                         isolated_cores=1-$MAXCOREID*)
                                 echo "Isolated CPU(s) OK, no reboot: $line">>$logfile
+                                sed -i 's/PubkeyAuthentication no/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
+                                service sshd restart
                                 modprobe uio
                                 insmod /home/centos/dpdk/build/kmod/igb_uio.ko
                                 exit 0
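The two added lines re-enable key-based ssh once the system setup check passes; they are the counterpart of the sshd_config change made in deploycentostools.sh below. The toggle itself is just an in-place sed, sketched here against a scratch copy instead of the real /etc/ssh/sshd_config:

```shell
# Sketch: the same PubkeyAuthentication toggle, applied to a scratch copy
# of sshd_config rather than the real /etc/ssh/sshd_config.
conf_copy=$(mktemp)
printf 'Match User centos\n    PubkeyAuthentication no\n' > "$conf_copy"
sed -i 's/PubkeyAuthentication no/PubkeyAuthentication yes/g' "$conf_copy"
toggled=$(grep -c 'PubkeyAuthentication yes' "$conf_copy")
rm -f "$conf_copy"
echo "$toggled"   # 1
```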
diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/config_file b/VNFs/DPPD-PROX/helper-scripts/rapid/config_file
new file mode 100644 (file)
index 0000000..5e77e31
--- /dev/null
@@ -0,0 +1,8 @@
+[OpenStack]
+cloud_name = openstackL6
+stack_name = rapid
+heat_template = openstack-rapid.yaml
+heat_param = params_rapid.yaml
+keypair_name = prox_key
+user = centos
+push_gateway = None
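One subtlety in this config_file: configparser (used by the new createrapid.py below) returns every option as a string, so push_gateway = None arrives as the string 'None', not Python None, and consuming code must handle that. A sketch, using the config_file contents above verbatim:

```python
# Sketch: read the [OpenStack] section the way a configparser-based tool
# would. Option values are always strings; "None" does not become Python
# None automatically.
import configparser

sample = """\
[OpenStack]
cloud_name = openstackL6
stack_name = rapid
heat_template = openstack-rapid.yaml
heat_param = params_rapid.yaml
keypair_name = prox_key
user = centos
push_gateway = None
"""

config = configparser.RawConfigParser()
config.read_string(sample)

params = {opt: config.get('OpenStack', opt)
          for opt in config.options('OpenStack')}

print(params['stack_name'])          # rapid
print(repr(params['push_gateway']))  # 'None' -- a string, not None
```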
index a7b1ec6..efdf5e1 100755 (executable)
@@ -1,7 +1,7 @@
 #!/usr/bin/python
 
 ##
-## Copyright (c) 2010-2019 Intel Corporation
+## Copyright (c) 2010-2020 Intel Corporation
 ##
 ## Licensed under the Apache License, Version 2.0 (the "License");
 ## you may not use this file except in compliance with the License.
 ## See the License for the specific language governing permissions and
 ## limitations under the License.
 ##
-
-from __future__ import print_function
-
-import os
-import stat
-import sys
-import time
-import subprocess
-import getopt
-import re
-import logging
-from logging.handlers import RotatingFileHandler
-from logging import handlers
-import ConfigParser
-
-version="19.11.21"
-stack = "rapid" #Default string for stack. This is not an OpenStack Heat stack, just a group of VMs
-vms = "rapid.vms" #Default string for vms file
-key = "prox" # default name for key
-image = "rapidVM" # default name for the image
-image_file = "rapidVM.qcow2"
-dataplane_network = "dataplane-network" # default name for the dataplane network
-subnet = "dpdk-subnet" #subnet for dataplane
-subnet_cidr="10.10.10.0/24" # cidr for dataplane
-internal_network="admin_internal_net"
-floating_network="admin_floating_net"
-loglevel="DEBUG" # sets log level for writing to file
-
-def usage():
-       print("usage: createrapid [--version] [-v]")
-       print("                   [--stack STACK_NAME]")
-       print("                   [--vms VMS_FILE]")
-       print("                   [--key KEY_NAME]")
-       print("                   [--image IMAGE_NAME]")
-       print("                   [--image_file IMAGE_FILE]")
-       print("                   [--dataplane_network DP_NETWORK]")
-       print("                   [--subnet DP_SUBNET]")
-       print("                   [--subnet_cidr SUBNET_CIDR]")
-       print("                   [--internal_network ADMIN_NETWORK]")
-       print("                   [--floating_network FLOATING_NETWORK]")
-       print("                   [--log DEBUG|INFO|WARNING|ERROR|CRITICAL]")
-       print("                   [-h] [--help]")
-       print("")
-       print("Command-line interface to createrapid")
-       print("")
-       print("optional arguments:")
-       print("  -v,  --version                 Show program's version number and exit")
-       print("  --stack STACK_NAME             Specify a name for the stack. Default is %s."%stack)
-       print("  --vms VMS_FILE                 Specify the vms file to be used. Default is %s."%vms)
-       print("  --key KEY_NAME                 Specify the key to be used. Default is %s."%key)
-       print("  --image IMAGE_NAME             Specify the image to be used. Default is %s."%image)
-       print("  --image_file IMAGE_FILE        Specify the image qcow2 file to be used. Default is %s."%image_file)
-       print("  --dataplane_network NETWORK    Specify the network name to be used for the dataplane. Default is %s."%dataplane_network)
-       print("  --subnet DP_SUBNET             Specify the subnet name to be used for the dataplane. Default is %s."%subnet)
-       print("  --subnet_cidr SUBNET_CIDR      Specify the subnet CIDR to be used for the dataplane. Default is %s."%subnet_cidr)
-       print("  --internal_network NETWORK     Specify the network name to be used for the control plane. Default is %s."%internal_network)
-       print("  --floating_network NETWORK     Specify the external floating ip network name. Default is %s. NO if no floating ip used."%floating_network)
-       print("  --log                          Specify logging level for log file output, screen output level is hard coded")
-       print("  -h, --help                     Show help message and exit.")
-       print("")
-
+from rapid_log import RapidLog
+from stackdeployment import StackDeployment
 try:
-       opts, args = getopt.getopt(sys.argv[1:], "vh", ["version","help", "vms=","stack=","key=","image=","image_file=","dataplane_network=","subnet=","subnet_cidr=","internal_network=","floating_network=","log="])
-except getopt.GetoptError as err:
-       print("===========================================")
-       print(str(err))
-       print("===========================================")
-       usage()
-       sys.exit(2)
-if args:
-       usage()
-       sys.exit(2)
-for opt, arg in opts:
-       if opt in ["-h", "--help"]:
-               usage()
-               sys.exit()
-       if opt in ["-v", "--version"]:
-               print("Rapid Automated Performance Indication for Dataplane "+version)
-               sys.exit()
-       if opt in ["--stack"]:
-               stack = arg
-               print ("Using '"+stack+"' as name for the stack")
-       elif opt in ["--vms"]:
-               vms = arg
-               print ("Using Virtual Machines Description: "+vms)
-       elif opt in ["--key"]:
-               key = arg
-               print ("Using key: "+key)
-       elif opt in ["--image"]:
-               image = arg
-               print ("Using image: "+image)
-       elif opt in ["--image_file"]:
-               image_file = arg
-               print ("Using qcow2 file: "+image_file)
-       elif opt in ["--dataplane_network"]:
-               dataplane_network = arg
-               print ("Using dataplane network: "+ dataplane_network)
-       elif opt in ["--subnet"]:
-               subnet = arg
-               print ("Using dataplane subnet: "+ subnet)
-       elif opt in ["--subnet_cidr"]:
-               subnet_cidr = arg
-               print ("Using dataplane subnet: "+ subnet_cidr)
-       elif opt in ["--internal_network"]:
-               internal_network = arg
-               print ("Using control plane network: "+ internal_network)
-       elif opt in ["--floating_network"]:
-               floating_network = arg
-               print ("Using floating ip network: "+ floating_network)
-       elif opt in ["--log"]:
-               loglevel = arg
-               print ("Log level: "+ loglevel)
-
-
-# create formatters
-screen_formatter = logging.Formatter("%(message)s")
-file_formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
-
-# get a top-level logger,
-# set its log level,
-# BUT PREVENT IT from propagating messages to the root logger
-#
-log = logging.getLogger()
-numeric_level = getattr(logging, loglevel.upper(), None)
-if not isinstance(numeric_level, int):
-       raise ValueError('Invalid log level: %s' % loglevel)
-log.setLevel(numeric_level)
-log.propagate = 0
-
-# create a console handler
-# and set its log level to the command-line option 
-# 
-console_handler = logging.StreamHandler(sys.stdout)
-console_handler.setLevel(logging.INFO)
-console_handler.setFormatter(screen_formatter)
-
-# create a file handler
-# and set its log level to DEBUG
-#
-log_file = 'CREATE' +stack +'.log'
-file_handler = logging.handlers.RotatingFileHandler(log_file, backupCount=10)
-#file_handler = log.handlers.TimedRotatingFileHandler(log_file, 'D', 1, 5)
-file_handler.setLevel(numeric_level)
-file_handler.setFormatter(file_formatter)
-
-# add handlers to the logger
-#
-log.addHandler(file_handler)
-log.addHandler(console_handler)
-
-# Check if log exists and should therefore be rolled
-needRoll = os.path.isfile(log_file)
-
-
-# This is a stale log, so roll it
-if needRoll:    
-       # Add timestamp
-       log.debug('\n---------\nLog closed on %s.\n---------\n' % time.asctime())
-       # Roll over on application start
-       log.handlers[0].doRollover()
-
-# Add timestamp
-log.debug('\n---------\nLog started on %s.\n---------\n' % time.asctime())
-
-log.debug("createrapid.py version: "+version)
-# Checking if the control network already exists, if not, stop the script
-log.debug("Checking control plane network: " + internal_network)
-cmd = 'openstack network list -f value -c Name'
-log.debug (cmd)
-Networks = subprocess.check_output(cmd , shell=True).decode().strip()
-if internal_network in Networks:
-       log.info("Control plane network (" + internal_network+")  already active")
-else:
-       log.exception("Control plane network " + internal_network + " not existing")
-       raise Exception("Control plane network " + internal_network + " not existing")
-
-# Checking if the floating ip network should be used. If yes, check if it exists and stop the script if it doesn't
-if floating_network !='NO':
-       log.debug("Checking floating ip network: " + floating_network)
-       if floating_network in Networks:
-               log.info("Floating ip network (" + floating_network + ")  already active")
-       else:
-               log.exception("Floating ip network " + floating_network + " not existing")
-               raise Exception("Floating ip network " + floating_network + " not existing")
-
-# Checking if the dataplane network already exists, if not create it
-log.debug("Checking dataplane network: " + dataplane_network)
-if dataplane_network in Networks:
-       # If the dataplane already exists, we are assuming that this network is already created before with the proper configuration, hence we do not check if the subnet is created etc...
-       log.info("Dataplane network (" + dataplane_network + ") already active")
-       subnet = "n/a: was already existing"
-       subnet_cidr = "n/a, was already existing"
-else:
-       log.info('Creating dataplane network ...')
-       cmd = 'openstack network create '+dataplane_network+' -f value -c status'
-       log.debug(cmd)
-       NetworkExist = subprocess.check_output(cmd , shell=True).decode().strip()
-       if 'ACTIVE' in NetworkExist:
-               log.info("Dataplane network created")
-               # Checking if the dataplane subnet already exists, if not create it
-               log.debug("Checking subnet: "+subnet)
-               cmd = 'openstack subnet list -f value -c Name'
-               log.debug (cmd)
-               Subnets = subprocess.check_output(cmd , shell=True).decode().strip()
-               if subnet in  Subnets:
-                       log.info("Subnet (" +subnet+ ") already exists")
-                       subnet = "n/a, was already existing"
-                       subnet_cidr = "n/a, was already existing"
-               else:
-                       log.info('Creating subnet ...')
-                       cmd = 'openstack subnet create --network ' + dataplane_network + ' --subnet-range ' + subnet_cidr +' --gateway none ' + subnet+' -f value -c name'
-                       log.debug(cmd)
-                       Subnets = subprocess.check_output(cmd , shell=True).decode().strip()
-                       if subnet in Subnets:
-                               log.info("Subnet created")
-                       else :
-                               log.exception("Failed to create subnet: " + subnet)
-                               raise Exception("Failed to create subnet: " + subnet)
-       else :
-               log.exception("Failed to create dataplane network: " + dataplane_network)
-               raise Exception("Failed to create dataplane network: " + dataplane_network)
-
-# Checking if the image already exists, if not create it
-log.debug("Checking image: " + image)
-cmd = 'openstack image list -f value -c Name'
-log.debug(cmd)
-Images = subprocess.check_output(cmd , shell=True).decode().strip()
-if image in Images:
-       log.info("Image (" + image + ") already available")
-       image_file="Don't know, was already existing"
-else:
-       log.info('Creating image ...')
-       cmd = 'openstack image create  -f value -c status --disk-format qcow2 --container-format bare --public --file ./'+image_file+ ' ' +image
-       log.debug(cmd)
-       ImageExist = subprocess.check_output(cmd , shell=True).decode().strip()
-       if 'active' in ImageExist:
-               log.info('Image created and active')
-#              cmd = 'openstack image set --property hw_vif_multiqueue_enabled="true" ' +image
-#              subprocess.check_call(cmd , shell=True)
-       else :
-               log.exception("Failed to create image")
-               raise Exception("Failed to create image")
-
-# Checking if the key already exists, if not create it
-log.debug("Checking key: "+key)
-cmd = 'openstack keypair list -f value -c Name'
-log.debug (cmd)
-KeyExist = subprocess.check_output(cmd , shell=True).decode().strip()
-if key in KeyExist:
-       log.info("Key (" + key + ") already installed")
-else:
-       log.info('Creating key ...')
-       cmd = 'openstack keypair create ' + key + '>' + key + '.pem'
-       log.debug(cmd)
-       subprocess.check_call(cmd , shell=True)
-       cmd = 'chmod 600 ' + key + '.pem'
-       subprocess.check_call(cmd, shell=True)
-       cmd = 'openstack keypair list -f value -c Name'
-       log.debug(cmd)
-       KeyExist = subprocess.check_output(cmd , shell=True).decode().strip()
-       if key in KeyExist:
-               log.info("Key created")
-       else :
-               log.exception("Failed to create key: " + key)
-               raise Exception("Failed to create key: " + key)
-
-ServerToBeCreated=[]
-ServerName=[]
-config = ConfigParser.RawConfigParser()
-vmconfig = ConfigParser.RawConfigParser()
-vmname = os.path.dirname(os.path.realpath(__file__))+'/' + vms
-#vmconfig.read_file(open(vmname))
-vmconfig.readfp(open(vmname))
-total_number_of_VMs = vmconfig.get('DEFAULT', 'total_number_of_vms')
-cmd = 'openstack server list -f value -c Name'
-log.debug (cmd)
-Servers = subprocess.check_output(cmd , shell=True).decode().strip()
-cmd = 'openstack flavor list -f value -c Name'
-log.debug (cmd)
-Flavors = subprocess.check_output(cmd , shell=True).decode().strip()
-for vm in range(1, int(total_number_of_VMs)+1):
-       flavor_info = vmconfig.get('VM%d'%vm, 'flavor_info')
-       flavor_meta_data = vmconfig.get('VM%d'%vm, 'flavor_meta_data')
-       boot_info = vmconfig.get('VM%d'%vm, 'boot_info')
-       SRIOV_port = vmconfig.get('VM%d'%vm, 'SRIOV_port')
-       SRIOV_mgmt_port = vmconfig.get('VM%d'%vm, 'SRIOV_mgmt_port')
-       ServerName.append('%s-VM%d'%(stack,vm))
-       flavor_name = '%s-VM%d-flavor'%(stack,vm)
-       log.debug("Checking server: " + ServerName[-1])
-       if ServerName[-1] in Servers:
-               log.info("Server (" + ServerName[-1] + ") already active")
-               ServerToBeCreated.append("no")
-       else:
-               ServerToBeCreated.append("yes")
-               # Checking if the flavor already exists, if not create it
-               log.debug("Checking flavor: " + flavor_name)
-               if flavor_name in Flavors:
-                       log.info("Flavor (" + flavor_name+") already installed")
-               else:
-                       log.info('Creating flavor ...')
-                       cmd = 'openstack flavor create %s %s -f value -c name'%(flavor_name,flavor_info)
-                       log.debug(cmd)
-                       NewFlavor = subprocess.check_output(cmd , shell=True).decode().strip()
-                       if flavor_name in NewFlavor:
-                               cmd = 'openstack flavor set %s %s'%(flavor_name, flavor_meta_data)
-                               log.debug(cmd)
-                               subprocess.check_call(cmd , shell=True)
-                               log.info("Flavor created")
-                       else :
-                               log.exception("Failed to create flavor: " + flavor_name)
-                               raise Exception("Failed to create flavor: " + flavor_name)
-               if SRIOV_mgmt_port == 'NO':
-                       nic_info = '--nic net-id=%s'%(internal_network)
-               else:
-                       nic_info = '--nic port-id=%s'%(SRIOV_mgmt_port)
-               if SRIOV_port == 'NO':
-                       nic_info = nic_info + ' --nic net-id=%s'%(dataplane_network)
-               else:
-                       for port in SRIOV_port.split(','):
-                               nic_info = nic_info + ' --nic port-id=%s'%(port)
-               if vm==int(total_number_of_VMs):
-                       # For the last server, we want to wait for the server creation to complete, so the next operations will succeeed (e.g. IP allocation)
-                       # Note that this waiting is not bullet proof. Imagine, we loop through all the VMs, and the last VM was already running, while the previous
-                       # VMs still needed to be created. Or the previous server creations take much longer than the last one.
-                       # In that case, we might be too fast when we query for the IP & MAC addresses.
-                       wait = '--wait'
-               else:
-                       wait = ''
-               log.info("Creating server...")
-               cmd = 'openstack server create --flavor %s --key-name %s --image %s %s %s %s %s'%(flavor_name,key,image,nic_info,boot_info,wait,ServerName[-1])
-               log.debug(cmd)
-               output = subprocess.check_output(cmd , shell=True).decode().strip()
-if floating_network != 'NO':
-       for vm in range(0, int(total_number_of_VMs)):
-               if ServerToBeCreated[vm] =="yes":
-                       log.info('Creating & Associating floating IP for ('+ServerName[vm]+')...')
-                       cmd = 'openstack server show %s -c addresses -f value |grep -Eo "%s=[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" | cut -d"=" -f2'%(ServerName[vm],internal_network)
-                       log.debug(cmd)
-                       vmportIP = subprocess.check_output(cmd , shell=True).decode().strip()
-                       cmd = 'openstack port list -c ID -c "Fixed IP Addresses" | grep %s  | cut -d" " -f 2 ' %(vmportIP)
-                       log.debug(cmd)
-                       vmportID = subprocess.check_output(cmd , shell=True).decode().strip()
-                       cmd = 'openstack floating ip create --port %s %s'%(vmportID,floating_network)
-                       log.debug(cmd)
-                       output = subprocess.check_output(cmd , shell=True).decode().strip()
-
-config.add_section('rapid')
-config.set('rapid', 'loglevel', loglevel)
-config.set('rapid', 'version', version)
-config.set('rapid', 'total_number_of_machines', total_number_of_VMs)
-for vm in range(1, int(total_number_of_VMs)+1):
-       cmd = 'openstack server show %s'%(ServerName[vm-1])
-       log.debug(cmd)
-       output = subprocess.check_output(cmd , shell=True).decode().strip()
-       searchString = '.*%s=([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*)' %(dataplane_network)
-       matchObj = re.search(searchString, output, re.DOTALL)
-       vmDPIP = matchObj.group(1)
-       searchString = '.*%s=([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+),*\s*([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)*' %(internal_network)
-       matchObj = re.search(searchString, output, re.DOTALL)
-       vmAdminIP = matchObj.group(2)
-       if vmAdminIP == None:
-               vmAdminIP = matchObj.group(1)
-       cmd = 'openstack port list |egrep  "\\b%s\\b" | tr -s " " | cut -d"|" -f 4'%(vmDPIP)
-       log.debug(cmd)
-       vmDPmac = subprocess.check_output(cmd , shell=True).decode().strip()
-       config.add_section('M%d'%vm)
-       config.set('M%d'%vm, 'name', ServerName[vm-1])
-       config.set('M%d'%vm, 'admin_ip', vmAdminIP)
-       config.set('M%d'%vm, 'dp_ip', vmDPIP)
-       config.set('M%d'%vm, 'dp_mac', vmDPmac)
-       log.info('%s: (admin IP: %s), (dataplane IP: %s), (dataplane MAC: %s)' % (ServerName[vm-1],vmAdminIP,vmDPIP,vmDPmac))
-
-config.add_section('ssh')
-config.set('ssh', 'key', key + '.pem')
-config.set('ssh', 'user', 'centos')
-config.add_section('Varia')
-config.set('Varia', 'VIM', 'OpenStack')
-config.set('Varia', 'stack', stack)
-config.set('Varia', 'VMs', vms)
-config.set('Varia', 'image', image)
-config.set('Varia', 'image_file', image_file)
-config.set('Varia', 'dataplane_network', dataplane_network)
-config.set('Varia', 'subnet', subnet)
-config.set('Varia', 'subnet_cidr', subnet_cidr)
-config.set('Varia', 'internal_network', internal_network)
-config.set('Varia', 'floating_network', floating_network)
-# Writing the environment file
-with open(stack+'.env', 'wb') as envfile:
-       config.write(envfile)
+    import configparser
+except ImportError:
+    # Python 2.x fallback
+    import ConfigParser as configparser
+
+class RapidStackManager(object):
+    @staticmethod
+    def parse_config(rapid_stack_params):
+        config = configparser.RawConfigParser()
+        config.read('config_file')
+        section = 'OpenStack'
+        options = config.options(section)
+        for option in options:
+            rapid_stack_params[option] = config.get(section, option)
+        return (rapid_stack_params)
+
+    @staticmethod
+    def deploy_stack(rapid_stack_params):
+        cloud_name = rapid_stack_params['cloud_name']
+        stack_name = rapid_stack_params['stack_name']
+        heat_template = rapid_stack_params['heat_template']
+        heat_param = rapid_stack_params['heat_param']
+        keypair_name = rapid_stack_params['keypair_name']
+        user = rapid_stack_params['user']
+        push_gateway = rapid_stack_params['push_gateway']
+        deployment = StackDeployment(cloud_name)
+        deployment.deploy(stack_name, keypair_name, heat_template, heat_param)
+        deployment.generate_env_file(user, push_gateway)
+
+def main():
+    rapid_stack_params = {}
+    RapidStackManager.parse_config(rapid_stack_params)
+    log_file = 'CREATE{}.log'.format(rapid_stack_params['stack_name'])
+    RapidLog.log_init(log_file, 'DEBUG', 'INFO', '2020.05.05')
+    #cloud_name = 'openstackL6'
+    #stack_name = 'rapid'
+    #heat_template = 'openstack-rapid.yaml'
+    #heat_param = 'params_rapid.yaml'
+    #keypair_name = 'prox_key'
+    #user = 'centos'
+    #push_gateway = None
+    RapidStackManager.deploy_stack(rapid_stack_params)
+
+if __name__ == "__main__":
+    main()
index 5e2cf3d..2f2e6fe 100644 (file)
@@ -90,7 +90,19 @@ function os_cfg()
        ${SUDO} cp -r ${WORK_DIR}/check-prox-system-setup.service /etc/systemd/system/
        ${SUDO} systemctl daemon-reload
        ${SUDO} systemctl enable check-prox-system-setup.service
-
+    # Following lines are added to fix the following issue: when the VM gets
+    # instantiated, the rapid scripts will try to ssh into the VM to start
+    # the testing. Once the script connects with ssh, it starts downloading
+    # config files and then starts prox, etc... The problem is that when the
+    # VM boots, check_prox_system_setup.sh will check for some things and
+    # potentially reboot, resulting in losing the ssh connection again.
+    # To fix this issue, the following lines disable ssh access for the
+    # centos user. The script will not be able to connect to the VM until ssh
+    # access is restored after a reboot. Restoring ssh is done by
+    # check-prox-system-setup.service
+       printf "\nMatch User centos\n" | ${SUDO} tee -a /etc/ssh/sshd_config
+       printf "%sPubkeyAuthentication no\n" "    " | ${SUDO} tee -a /etc/ssh/sshd_config
+       printf "%sPasswordAuthentication no\n" "    " | ${SUDO} tee -a /etc/ssh/sshd_config
        popd > /dev/null 2>&1
 }
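For reference, the fragment appended to /etc/ssh/sshd_config by the printf lines above ends up looking like this:

```
Match User centos
    PubkeyAuthentication no
    PasswordAuthentication no
```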
 
index 2cf72f9..6ab5b45 100644 (file)
@@ -23,11 +23,13 @@ total_number_of_test_machines = 2
 name = InterruptTestMachine1
 config_file = irq.cfg
 cores = [1,2,3]
+monitor = False
 
 [TestM2]
 name = InterruptTestMachine2
 config_file = irq.cfg
 cores = [1,2,3]
+monitor = False
 
 [test1]
 test=irqtest
diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/openstack-rapid.yaml b/VNFs/DPPD-PROX/helper-scripts/rapid/openstack-rapid.yaml
new file mode 100644 (file)
index 0000000..2dec717
--- /dev/null
@@ -0,0 +1,117 @@
+heat_template_version: 2015-10-15
+
+description: >
+  Template for deploying n PROX instances. The template allows for deploying
+  multiple groups of PROX VMs. You can create a first group with certain
+  flavors, availability groups, etc... Another group can be created with
+  different characteristics.
+
+parameters:
+  public_net_name: {description: Public network to allocate (floating) IPs to VMs, type: string, default: admin_floating_net}
+  mgmt_net_name: {description: Name of PROX mgmt network to be created, type: string, default: admin_internal_net}
+  PROX_image: {description: Image name to use for PROX, type: string, default: rapidVM}
+  PROX_key: {description: DO NOT CHANGE THIS DEFAULT KEY NAME, type: string, default: rapid_key}
+  my_availability_zone: {description: availability_zone for Hosting VMs, type: string, default: nova}
+  security_group: {description: Security Group to use, type: string, default: prox_security_group}
+  PROXVM_count: {description: Total number of testVMs to create, type: number, default: 2}
+  PROX2VM_count: {description: Total number of type 2 testVMs to create, type: number, default: 1}
+
+# The following parameters are not used, but are here in case you want to also
+# create the management and dataplane networks in this template
+  mgmt_net_cidr: {description: PROX mgmt network CIDR, type: string, default: 20.20.1.0/24}
+  mgmt_net_gw: {description: PROX mgmt network gateway address, type: string, default: 20.20.1.1}
+  mgmt_net_pool_start: {description: Start of mgmt network IP address allocation pool, type: string, default: 20.20.1.100}
+  mgmt_net_pool_end: {description: End of mgmt network IP address allocation pool, type: string, default: 20.20.1.200}
+  data_net_name: {description: Name of PROX private network to be created, type: string, default: dataplane-network}
+  data_net_cidr: {description: PROX private network CIDR,type: string, default: 30.30.1.0/24}
+  data_net_pool_start: {description: Start of private network IP address allocation pool, type: string, default: 30.30.1.100}
+  data_net_pool_end: {description: End of private network IP address allocation pool, type: string, default: 30.30.1.200}
+  dns:
+    type: comma_delimited_list
+    label: DNS nameservers
+    description: Comma separated list of DNS nameservers for the management network.
+    default: '8.8.8.8'
+
+resources:
+  PROXVMs:
+    type: OS::Heat::ResourceGroup
+    description: Group of PROX VMs according to specs described in this section
+    properties:
+      count: { get_param: PROXVM_count }
+      resource_def:
+        type: rapid-openstack-server.yaml
+        properties:
+          PROX_availability_zone : {get_param: my_availability_zone}
+          PROX_security_group : {get_param: security_group}
+          PROX_image: {get_param: PROX_image}
+          PROX_key: {get_param: PROX_key}
+          PROX_server_name: rapidVM-%index%
+          PROX_public_net: {get_param: public_net_name}
+          PROX_mgmt_net_id: {get_param: mgmt_net_name}
+          PROX_data_net_id: {get_param: data_net_name}
+          PROX_config: {get_resource: MyConfig}
+    depends_on: MyConfig
+  
+  PROX2VMs:
+    type: OS::Heat::ResourceGroup
+    description: Group of PROX VMs according to specs described in this section
+    properties:
+      count: { get_param: PROX2VM_count }
+      resource_def:
+        type: rapid-openstack-server.yaml
+        properties:
+          PROX_availability_zone : {get_param: my_availability_zone}
+          PROX_security_group : {get_param: security_group}
+          PROX_image: {get_param: PROX_image}
+          PROX_key: {get_param: PROX_key}
+          PROX_server_name: rapidType2VM-%index%
+          PROX_public_net: {get_param: public_net_name}
+          PROX_mgmt_net_id: {get_param: mgmt_net_name}
+          PROX_data_net_id: {get_param: data_net_name}
+          PROX_config: {get_resource: MyConfig}
+    depends_on: MyConfig
+  
+  MyConfig:
+    type: OS::Heat::CloudConfig
+    properties:
+      cloud_config:
+        users:
+        - default
+        - name: rapid
+          groups: "users,root"
+          lock-passwd: false
+          passwd: 'test'
+          shell: "/bin/bash"
+          sudo: "ALL=(ALL) NOPASSWD:ALL"
+        ssh_pwauth: true
+        chpasswd:
+          list:  |
+              rapid:rapid
+          expire: False
+
+outputs:
+  number_of_servers:
+    description: List of the number of PROX instances per group
+    value: 
+      - {get_param: PROXVM_count}
+      - {get_param: PROX2VM_count}
+  server_name:
+    description: List of list of names of the PROX instances
+    value: 
+      - {get_attr: [PROXVMs, name]}
+      - {get_attr: [PROX2VMs, name]}
+  mngmt_ips:
+    description: List of list of Management IPs of the VMs
+    value: 
+      - {get_attr: [PROXVMs, mngmt_ip]}
+      - {get_attr: [PROX2VMs, mngmt_ip]}
+  data_plane_ips:
+    description: List of list of list of DataPlane IPs of the VMs
+    value: 
+      - {get_attr: [PROXVMs, data_plane_ip]}
+      - {get_attr: [PROX2VMs, data_plane_ip]}
+  data_plane_macs:
+    description: List of list of list of DataPlane MACs of the VMs
+    value: 
+      - {get_attr: [PROXVMs, data_plane_mac]}
+      - {get_attr: [PROX2VMs, data_plane_mac]}
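The commit message points to the README for the output-section requirements of this yaml file; the scripts consume exactly the output keys defined above. As a minimal sketch (a hypothetical checker, not part of this commit), a parsed template dict can be validated for those keys before deployment:

```python
# Output keys the rapid scripts expect from the heat stack, taken from the
# outputs section of openstack-rapid.yaml above. Hypothetical helper.
REQUIRED_OUTPUT_KEYS = {'number_of_servers', 'server_name', 'mngmt_ips',
                        'data_plane_ips', 'data_plane_macs'}

def missing_template_outputs(template):
    """Return the expected output keys missing from a parsed template dict."""
    return sorted(REQUIRED_OUTPUT_KEYS - set(template.get('outputs', {})))
```

Running this against a template lacking the dataplane outputs would flag them before the stack is ever created.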
diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/params_rapid.yaml b/VNFs/DPPD-PROX/helper-scripts/rapid/params_rapid.yaml
new file mode 100644 (file)
index 0000000..3640905
--- /dev/null
@@ -0,0 +1,5 @@
+parameters:
+  public_net_name: admin_floating_net
+  PROX_image: rapidVM
+  my_availability_zone: nova
+  security_group: prox_security_group
index 82faa78..6e25e7f 100644 (file)
@@ -47,6 +47,7 @@ class prox_ctrl(object):
         retrying, and raise RuntimeError exception otherwise.
         """
         return self.run_cmd('true', True)
+
     def connect(self):
         attempts = 1
         RapidLog.debug("Trying to connect to VM which was just launched on %s, attempt: %d" % (self._ip, attempts))
diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/rapid-openstack-server.yaml b/VNFs/DPPD-PROX/helper-scripts/rapid/rapid-openstack-server.yaml
new file mode 100644 (file)
index 0000000..f1b5274
--- /dev/null
@@ -0,0 +1,82 @@
+heat_template_version: 2014-10-16
+
+description: single server resource used by resource groups.
+
+parameters:
+  PROX_public_net:
+    type: string
+  PROX_mgmt_net_id:
+    type: string
+  PROX_data_net_id:
+    type: string
+  PROX_server_name:
+    type: string
+  PROX_availability_zone:
+    type: string
+  PROX_security_group:
+    type: string
+  PROX_image:
+    type: string
+  PROX_key:
+    type: string
+  PROX_config:
+    type: string
+
+resources:
+  PROX_instance:
+    type: OS::Nova::Server
+    properties:
+      name: { get_param: PROX_server_name }
+      availability_zone : {get_param: PROX_availability_zone}
+      flavor: {get_resource: PROX_flavor}
+      image: {get_param: PROX_image}
+      key_name: {get_param: PROX_key}
+      networks:
+        - port: {get_resource: mgmt_port }
+        - port: {get_resource: data_port }
+      user_data: {get_param: PROX_config}    
+      user_data_format: RAW
+
+  PROX_flavor:
+    type: OS::Nova::Flavor
+    properties:
+      ram: 4096
+      vcpus: 4
+      disk: 80
+      extra_specs: {"hw:mem_page_size": "large","hw:cpu_policy": "dedicated","hw:cpu_thread_policy":"isolate"}
+
+  mgmt_port:
+    type: OS::Neutron::Port
+    properties:
+      network_id: { get_param: PROX_mgmt_net_id }
+      security_groups:
+        - {get_param: PROX_security_group}
+
+  floating_ip:
+    type: OS::Neutron::FloatingIP
+    properties:
+      floating_network: {get_param: PROX_public_net}
+      port_id: {get_resource: mgmt_port}
+
+  data_port:
+    type: OS::Neutron::Port
+    properties:
+      network_id: { get_param: PROX_data_net_id }
+      security_groups:
+        - {get_param: PROX_security_group}
+
+outputs:
+  name:
+    description: Name of the PROX instance
+    value: {get_attr: [PROX_instance, name]}
+  mngmt_ip:
+    description: Management IP of the VM
+    value: {get_attr: [floating_ip, floating_ip_address ]}
+  data_plane_ip:
+    description: List of DataPlane IPs of the VM
+    value:
+        - {get_attr: [data_port, fixed_ips, 0, ip_address]}
+  data_plane_mac:
+    description: List of DataPlane MACs of the VM
+    value:
+        - {get_attr: [data_port, mac_address]}
index b8071b4..d70fd50 100644 (file)
@@ -197,7 +197,7 @@ class FlowSizeTest(RapidTest):
                         endabs_tx = abs_tx
                         endabs_rx = abs_rx
                         if lat_warning or gen_warning or retry_warning:
-                            endwarning = '|        | {:177.177} |'.format(retry_warning + lat_warning + gen_warning)
+                            endwarning = '|        | {:186.186} |'.format(retry_warning + lat_warning + gen_warning)
                         success = True
                         success_message=' SUCCESS'
                         speed_prefix = lat_avg_prefix = lat_perc_prefix = lat_max_prefix = abs_drop_rate_prefix = drop_rate_prefix = bcolors.ENDC
index 553907b..2a5b51c 100644 (file)
@@ -78,15 +78,15 @@ class RapidGeneratorMachine(RapidMachine):
         speed_per_gen_core = speed / len(self.machine_params['gencores']) 
         self.socket.speed(speed_per_gen_core, self.machine_params['gencores'])
 
-    def set_udp_packet_size(self, size):
+    def set_udp_packet_size(self, frame_size):
         # We should check the gen.cfg to make sure we only send UDP packets
-        # Frame size is PROX pkt size + 4 bytes CRC
-        # PROX "pkt_size" i.e.  14-bytes L2 frame header + VLAN 4 bytes header + IP packet size
-        self.socket.set_size(self.machine_params['gencores'], 0, size)
+        # Frame size = PROX pkt size + 4 bytes CRC
+        # The set_size function takes the PROX packet size as a parameter
+        self.socket.set_size(self.machine_params['gencores'], 0, frame_size - 4)
         # 18 is the difference between the frame size and IP size = size of (MAC addresses, ethertype and FCS)
-        self.socket.set_value(self.machine_params['gencores'], 0, 16, size-18, 2)
+        self.socket.set_value(self.machine_params['gencores'], 0, 16, frame_size-18, 2)
         # 38 is the difference between the frame size and UDP size = 18 + size of IP header (=20)
-        self.socket.set_value(self.machine_params['gencores'], 0, 38, size-38, 2)
+        self.socket.set_value(self.machine_params['gencores'], 0, 38, frame_size-38, 2)
 
     def set_flows(self, number_of_flows):
         source_port,destination_port = RandomPortBits.get_bitmap(number_of_flows)
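The packet-size fix above hinges on the byte arithmetic spelled out in the new comments of set_udp_packet_size. That arithmetic can be sketched standalone (a hypothetical helper; the constants are taken from the comments, not from PROX itself):

```python
def udp_size_fields(frame_size):
    """Given an Ethernet frame size including the 4-byte CRC, return the
    three values the generator writes: the PROX pkt_size (frame minus CRC),
    the IP total length at offset 16 (frame minus 14-byte L2 header and CRC)
    and the UDP length at offset 38 (IP length minus the 20-byte IP header)."""
    prox_pkt_size = frame_size - 4
    ip_total_length = frame_size - 18   # 14 (L2 header) + 4 (CRC)
    udp_length = frame_size - 38        # 18 + 20 (IP header)
    return prox_pkt_size, ip_total_length, udp_length
```

For a standard 1518-byte frame this gives a 1514-byte PROX pkt_size, a 1500-byte IP packet and a 1480-byte UDP datagram.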
index bd25845..3901ae9 100644 (file)
@@ -41,7 +41,7 @@ class RapidLog(object):
     log = None
 
     @staticmethod
-    def log_init(test_params):
+    def log_init(log_file, loglevel, screenloglevel, version):
         # create formatters
         screen_formatter = logging.Formatter("%(message)s")
         file_formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
@@ -51,7 +51,7 @@ class RapidLog(object):
         # BUT PREVENT IT from propagating messages to the root logger
         #
         log = logging.getLogger()
-        numeric_level = getattr(logging, test_params['loglevel'].upper(), None)
+        numeric_level = getattr(logging, loglevel.upper(), None)
         if not isinstance(numeric_level, int):
             raise ValueError('Invalid log level: %s' % loglevel)
         log.setLevel(numeric_level)
@@ -62,7 +62,7 @@ class RapidLog(object):
         # 
         console_handler = logging.StreamHandler(sys.stdout)
         #console_handler.setLevel(logging.INFO)
-        numeric_screenlevel = getattr(logging, test_params['screenloglevel'].upper(), None)
+        numeric_screenlevel = getattr(logging, screenloglevel.upper(), None)
         if not isinstance(numeric_screenlevel, int):
             raise ValueError('Invalid screenlog level: %s' % screenloglevel)
         console_handler.setLevel(numeric_screenlevel)
@@ -71,7 +71,6 @@ class RapidLog(object):
         # create a file handler
         # and set its log level
         #
-        log_file = 'RUN{}.{}.log'.format(test_params['environment_file'],test_params['test_file'])
         file_handler = logging.handlers.RotatingFileHandler(log_file, backupCount=10)
         #file_handler = log.handlers.TimedRotatingFileHandler(log_file, 'D', 1, 5)
         file_handler.setLevel(numeric_level)
@@ -97,7 +96,7 @@ class RapidLog(object):
         # Add timestamp
         log.debug('\n---------\nLog started on %s.\n---------\n' % time.asctime())
 
-        log.debug("runrapid.py version: " + test_params['version'])
+        log.debug("runrapid.py version: " + version)
         RapidLog.log = log
 
     @staticmethod
index 864f84b..bebc748 100644 (file)
@@ -87,7 +87,7 @@ class RapidConfigParser(object):
                 section = 'TestM%d'%test_machine
                 options = testconfig.options(section)
                 for option in options:
-                    if option in ['prox_socket','prox_launch_exit']:
+                    if option in ['prox_socket','prox_launch_exit','monitor']:
                         machine[option] = testconfig.getboolean(section, option)
                     elif option in ['cores', 'gencores','latcores']:
                         machine[option] = ast.literal_eval(testconfig.get(section, option))
@@ -96,41 +96,41 @@ class RapidConfigParser(object):
                     for key in ['prox_socket','prox_launch_exit']:
                        if key not in machine.keys():
                            machine[key] = True
+                if 'monitor' not in machine.keys():
+                    machine['monitor'] = True
                 index = int(machine_map.get('TestM%d'%test_machine, 'machine_index'))
                 section = 'M%d'%index
                 options = config.options(section)
                 for option in options:
                     machine[option] = config.get(section, option)
-                if 'monitor' not in machine.keys():
-                    machine['monitor'] = True
-                else:    
-                    machine['monitor'] = config.getboolean(section, option)
                 machines.append(dict(machine))
         for machine in machines:
             dp_ports = []
             if 'dest_vm' in machine.keys():
                 index = 1
-                dp_ip_key = 'dp_ip{}'.format(index)
-                dp_mac_key = 'dp_mac{}'.format(index)
-                if dp_ip_key in machines[int(machine['dest_vm'])-1].keys() and \
-                        dp_mac_key in machines[int(machine['dest_vm'])-1].keys():
-                    dp_port = {'ip': machines[int(machine['dest_vm'])-1][dp_ip_key],
-                            'mac' : machines[int(machine['dest_vm'])-1][dp_mac_key]}
-                    dp_ports.append(dict(dp_port))
-                    index += 1
-                else:
-                    break
-                machine['dest_ports'] = list(dp_ports)
+                while True:
+                    dp_ip_key = 'dp_ip{}'.format(index)
+                    dp_mac_key = 'dp_mac{}'.format(index)
+                    if dp_ip_key in machines[int(machine['dest_vm'])-1].keys() and \
+                            dp_mac_key in machines[int(machine['dest_vm'])-1].keys():
+                        dp_port = {'ip': machines[int(machine['dest_vm'])-1][dp_ip_key],
+                                'mac' : machines[int(machine['dest_vm'])-1][dp_mac_key]}
+                        dp_ports.append(dict(dp_port))
+                        index += 1
+                    else:
+                        break
+                machine['dest_ports'] = list(dp_ports)
             gw_ips = []
             if 'gw_vm' in machine.keys():
                 index = 1
-                gw_ip_key = 'dp_ip{}'.format(index)
-                if gw_ip_key in machines[int(machine['gw_vm'])-1].keys():
-                    gw_ip = machines[int(machine['dest_vm'])-1][gw_ip_key]
-                    gw_ips.append(gw_ip)
-                    index += 1
-                else:
-                    break
-                machine['gw_ips'] = list(gw_ips)
+                while True:
+                    gw_ip_key = 'dp_ip{}'.format(index)
+                    if gw_ip_key in machines[int(machine['gw_vm'])-1].keys():
+                        gw_ip = machines[int(machine['gw_vm'])-1][gw_ip_key]
+                        gw_ips.append(gw_ip)
+                        index += 1
+                    else:
+                        break
+                machine['gw_ips'] = list(gw_ips)
         test_params['machines'] = machines
         return (test_params)
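The new while loops replace an if/break that could only ever record a single interface. The intent — collect dp_ip&lt;n&gt;/dp_mac&lt;n&gt; pairs from the destination machine until one of the keys is missing — can be sketched as a standalone helper (hypothetical, operating on a plain config dict):

```python
def collect_dest_ports(dest_machine):
    """Gather dp_ip<n>/dp_mac<n> pairs from a machine's config dict,
    stopping at the first index where either key is absent."""
    dp_ports = []
    index = 1
    while ('dp_ip{}'.format(index) in dest_machine and
           'dp_mac{}'.format(index) in dest_machine):
        dp_ports.append({'ip': dest_machine['dp_ip{}'.format(index)],
                         'mac': dest_machine['dp_mac{}'.format(index)]})
        index += 1
    return dp_ports
```

With multiple dataplane interfaces per VM now allowed in the &lt;STACK&gt;.env file, this yields one port dict per interface instead of at most one.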
index 1e6818e..5c10b27 100755 (executable)
@@ -46,7 +46,6 @@ class RapidTestManager(object):
 
     @staticmethod
     def run_tests(test_params):
-        RapidLog.log_init(test_params)
         test_params = RapidConfigParser.parse_config(test_params)
         RapidLog.debug(test_params)
         monitor_gen = monitor_sut = False
@@ -55,7 +54,9 @@ class RapidTestManager(object):
         machines = []
         for machine_params in test_params['machines']:
             if 'gencores' in machine_params.keys():
-                machine = RapidGeneratorMachine(test_params['key'], test_params['user'], test_params['vim_type'], test_params['rundir'], machine_params)
+                machine = RapidGeneratorMachine(test_params['key'],
+                        test_params['user'], test_params['vim_type'],
+                        test_params['rundir'], machine_params)
                 if machine_params['monitor']:
                     if monitor_gen:
                         RapidLog.exception("Can only monitor 1 generator")
@@ -66,7 +67,9 @@ class RapidTestManager(object):
                 else:
                     background_machines.append(machine)
             else:
-                machine = RapidMachine(test_params['key'], test_params['user'], test_params['vim_type'], test_params['rundir'], machine_params)
+                machine = RapidMachine(test_params['key'], test_params['user'],
+                        test_params['vim_type'], test_params['rundir'],
+                        machine_params)
                 if machine_params['monitor']:
                     if monitor_sut:
                         RapidLog.exception("Can only monitor 1 sut")
@@ -82,7 +85,8 @@ class RapidTestManager(object):
         result = True
         for test_param in test_params['tests']:
             RapidLog.info(test_param['test'])
-            if test_param['test'] in ['flowsizetest', 'TST009test', 'fixed_rate']:
+            if test_param['test'] in ['flowsizetest', 'TST009test',
+                    'fixed_rate']:
                 test = FlowSizeTest(test_param, test_params['lat_percentile'],
                         test_params['runtime'], test_params['pushgateway'],
                         test_params['environment_file'], gen_machine,
@@ -115,10 +119,14 @@ class RapidTestManager(object):
 def main():
     """Main function.
     """
-    test_params = RapidDefaults.test_params
+    test_params = RapidTestManager.get_defaults()
     # When no cli is used, the process_cli can be replaced by code modifying
     # test_params
     test_params = RapidCli.process_cli(test_params)
+    log_file = 'RUN{}.{}.log'.format(test_params['environment_file'],
+            test_params['test_file'])
+    RapidLog.log_init(log_file, test_params['loglevel'],
+            test_params['screenloglevel'] , test_params['version']  )
     test_result = RapidTestManager.run_tests(test_params)
     RapidLog.info('Test result is : {}'.format(test_result))
 
index 3c1a90e..18fdaa0 100755 (executable)
 ## This code will help in using tshark to decode packets that were dumped
 ## in the prox.log file as a result of dump, dump_tx or dump_rx commands
 
-egrep  '^[0-9]{4}|^[0-9]+\.' prox.log | text2pcap -q - - | tshark -r -
+#egrep  '^[0-9]{4}|^[0-9]+\.' prox.log | text2pcap -q - - | tshark -r -
+while read -r line ; do
+    if [[ $line =~ (^[0-9]{4}\s.*) ]] ;
+    then
+        echo "$line" >> tempshark.log
+    fi
+    if [[ $line =~ (^[0-9]+\.[0-9]+)(.*) ]] ;
+    then
+        date -d@"${BASH_REMATCH[1]}" -u +%H:%M:%S.%N >> tempshark.log
+    fi
+done < <(cat prox.log)
+text2pcap -t "%H:%M:%S." -q tempshark.log - | tshark -r -
+rm tempshark.log
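The shell loop above splits prox.log into hex-dump lines and epoch-timestamp lines before feeding them to text2pcap. The same per-line transformation, sketched in Python for clarity (a hypothetical equivalent, using microsecond %f precision where the shell uses %N nanoseconds):

```python
import re
from datetime import datetime, timezone

def convert_prox_log_line(line):
    """Mirror the shell loop: rewrite epoch-timestamp lines as UTC
    %H:%M:%S.%f for text2pcap, keep hex-dump lines (4-digit offset at
    the start of the line), and drop everything else (returns None)."""
    ts_match = re.match(r'^([0-9]+\.[0-9]+)', line)
    if ts_match:
        ts = datetime.fromtimestamp(float(ts_match.group(1)), tz=timezone.utc)
        return ts.strftime('%H:%M:%S.%f')
    if re.match(r'^[0-9]{4}\s', line):
        return line
    return None
```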
diff --git a/VNFs/DPPD-PROX/helper-scripts/rapid/stackdeployment.py b/VNFs/DPPD-PROX/helper-scripts/rapid/stackdeployment.py
new file mode 100755 (executable)
index 0000000..25b9895
--- /dev/null
@@ -0,0 +1,145 @@
+#!/usr/bin/python
+import os_client_config
+import heatclient
+from heatclient.client import Client as Heat_Client
+from keystoneclient.v3 import Client as Keystone_Client
+from heatclient.common import template_utils
+from novaclient import client as NovaClient
+import yaml
+import os
+import time
+import sys
+from collections import OrderedDict
+from rapid_log import RapidLog
+
+class StackDeployment(object):
+    """Deployment class to create VMs for test execution in OpenStack
+    environment.
+    """
+    def __init__(self, cloud_name):
+#        RapidLog.log_init('CREATEStack.log', 'DEBUG', 'INFO', '2020.05.05')
+        self.dp_ips = []
+        self.dp_macs = []
+        self.mngmt_ips = []
+        self.names = []
+        self.number_of_servers = 0
+        self.cloud_name = cloud_name
+        self.heat_template = 'L6_heat_template.yaml'
+        self.heat_param = 'params_rapid.yaml'
+        self.cloud_config = os_client_config.OpenStackConfig().get_all_clouds()
+        ks_client = None
+        for cloud in self.cloud_config:
+            if cloud.name == self.cloud_name:
+                ks_client = Keystone_Client(**cloud.config['auth'])
+                break
+        if ks_client is None:
+            sys.exit()
+        heat_endpoint = ks_client.service_catalog.url_for(
+                service_type='orchestration', endpoint_type='publicURL')
+        self.heatclient = Heat_Client('1', heat_endpoint, token=ks_client.auth_token)
+        self.nova_client = NovaClient.Client(2, **cloud.config['auth']) 
+
+    def generate_paramDict(self):
+        for output in self.stack.output_list()['outputs']:
+            output_value = self.stack.output_show(output['output_key'])['output']['output_value']
+            for server_group_output in output_value:
+                if (output['output_key'] == 'number_of_servers'):
+                    self.number_of_servers += int(server_group_output)
+                elif (output['output_key'] == 'mngmt_ips'):
+                    for ip in server_group_output:
+                        self.mngmt_ips.append(ip)
+                elif (output['output_key'] == 'data_plane_ips'):
+                    for dps in server_group_output:
+                        self.dp_ips.append(dps)
+                elif (output['output_key'] == 'data_plane_macs'):
+                    for mac in server_group_output:
+                        self.dp_macs.append(mac)
+                elif (output['output_key'] == 'server_name'):
+                    for name in server_group_output:
+                        self.names.append(name)
+
+    def print_paramDict(self, user, push_gateway):
+        if not(len(self.dp_ips) == len(self.dp_macs) == len(self.mngmt_ips)):
+            sys.exit()
+        _ENV_FILE_DIR = os.path.dirname(os.path.realpath(__file__))
+        env_file = os.path.join(_ENV_FILE_DIR, self.stack.stack_name)+ '.env'
+        with open(env_file, 'w') as env_file:
+            env_file.write('[rapid]\n')
+            env_file.write('total_number_of_machines = {}\n'.format(str(self.number_of_servers)))
+            env_file.write('\n')
+            for count in range(self.number_of_servers):
+                env_file.write('[M' + str(count+1) + ']\n')
+                env_file.write('name = {}\n'.format(str(self.names[count])))
+                env_file.write('admin_ip = {}\n'.format(str(self.mngmt_ips[count])))
+                if type(self.dp_ips[count]) == list:
+                    for i, dp_ip in enumerate(self.dp_ips[count], start = 1):
+                        env_file.write('dp_ip{} = {}\n'.format(i, str(dp_ip)))
+                else:
+                    env_file.write('dp_ip1 = {}\n'.format(str(self.dp_ips[count])))
+                if type(self.dp_macs[count]) == list:
+                    for i, dp_mac in enumerate(self.dp_macs[count], start = 1):
+                        env_file.write('dp_mac{} = {}\n'.format(i, str(dp_mac)))
+                else:
+                    env_file.write('dp_mac1 = {}\n'.format(str(self.dp_macs[count])))
+                env_file.write('\n')
+            env_file.write('[ssh]\n')
+            env_file.write('key = {}\n'.format(self.private_key_filename))
+            env_file.write('user = {}\n'.format(user))
+            env_file.write('\n')
+            env_file.write('[Varia]\n')
+            env_file.write('vim = OpenStack\n')
+            env_file.write('stack = {}\n'.format(self.stack.stack_name))
+            env_file.write('pushgateway = {}\n'.format(push_gateway))
+
+    def create_stack(self, stack_name, stack_file_path, param_file):
+        files, template = template_utils.process_template_path(stack_file_path)
+        heat_parameters = open(param_file)
+        temp_params = yaml.load(heat_parameters,Loader=yaml.BaseLoader)
+        heat_parameters.close()
+        stack_created = self.heatclient.stacks.create(stack_name=stack_name,
+                template=template, parameters=temp_params["parameters"], files=files)
+        stack = self.heatclient.stacks.get(stack_created['stack']['id'], resolve_outputs=True)
+        # Poll at 5 second intervals, until the status is no longer 'BUILD'
+        while stack.stack_status == 'CREATE_IN_PROGRESS':
+            print('waiting..')
+            time.sleep(5)
+            stack = self.heatclient.stacks.get(stack_created['stack']['id'], resolve_outputs=True)
+        if stack.stack_status == 'CREATE_COMPLETE':    
+            return stack
+        else:
+            RapidLog.exception('Error in stack deployment')
+
+    def create_key(self):
+        keypair = self.nova_client.keypairs.create(name=self.key_name)
+        # Create a file for writing that can only be read and written by owner
+        fp = os.open(self.private_key_filename, os.O_WRONLY | os.O_CREAT, 0o600)
+        with os.fdopen(fp, 'w') as f:
+            f.write(keypair.private_key)
+        RapidLog.info('Keypair {} created'.format(self.key_name))
+
+    def IsDeployed(self, stack_name):
+        for stack in self.heatclient.stacks.list():
+            if stack.stack_name == stack_name:
+                RapidLog.info('Stack already existing: {}'.format(stack_name))
+                self.stack = stack
+                return True
+        return False
+
+    def IsKey(self):
+        keypairs = self.nova_client.keypairs.list()
+        if next((x for x in keypairs if x.name == self.key_name), None):
+            RapidLog.info('Keypair {} already exists'.format(self.key_name))
+            return True
+        return False
+
+    def deploy(self, stack_name, keypair_name, heat_template, heat_param):
+        self.key_name = keypair_name
+        self.private_key_filename = '{}.pem'.format(keypair_name)
+        if not self.IsDeployed(stack_name):
+            if not self.IsKey():
+                self.create_key()
+            self.stack = self.create_stack(stack_name, heat_template, heat_param)
+
+    def generate_env_file(self, user = 'centos', push_gateway = None):
+        self.generate_paramDict()
+        self.print_paramDict(user, push_gateway)
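generate_paramDict walks the stack outputs, where each output_value is a list with one entry per ResourceGroup (see the outputs section of openstack-rapid.yaml). The flattening it performs can be sketched on plain dicts (a hypothetical helper mirroring that logic, without the heat client):

```python
def flatten_stack_outputs(output_values):
    """output_values maps output_key -> per-group lists, as exposed by
    the outputs section of openstack-rapid.yaml. Collapse the per-group
    nesting into flat lists and a total server count."""
    flat = {'number_of_servers': 0, 'names': [], 'mngmt_ips': []}
    for count in output_values.get('number_of_servers', []):
        flat['number_of_servers'] += int(count)
    for group in output_values.get('server_name', []):
        flat['names'].extend(group)
    for group in output_values.get('mngmt_ips', []):
        flat['mngmt_ips'].extend(group)
    return flat
```

With the two groups in the example template (PROXVM_count=2, PROX2VM_count=1), this produces one flat list of three names and three management IPs, which is what print_paramDict then writes to the &lt;STACK&gt;.env file.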