========================
TOSCA to HOT Translation
========================
Basic version information:
-1. tosca-paser is based on the stable version of 0.6 in openstack community;
-2. heat-translator is based on the stable version of 0.5 in openstack community;
+1. tosca-parser is based on the stable 0.7 version from the OpenStack community;
+2. heat-translator is based on the stable 0.7 version from the OpenStack community;
3. refer to diff_file_list.rst for the detailed differences between the parser and the upstream project.
+========================
+Team and repository tags
+========================
+
+.. image:: http://governance.openstack.org/badges/heat-translator.svg
+ :target: http://governance.openstack.org/reference/tags/index.html
+
+.. Change things from this point on
+
===============
Heat-Translator
===============
Three main directories related to the heat-translator are:
-1. hot: It is the generator, that has logic of converting TOSCA in memory graph to HOT yaml files.
+1. hot: It is the generator, that has logic of converting TOSCA in memory graph to HOT YAML files.
2. common: It has all the files that support the execution of the parser and generator.
3. tests: It contains test programs and more importantly several templates which are used for testing.
-# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Alternatively, you can install a particular release of Heat-Translator as available at https://pypi.python.org/pypi/heat-translator.
-Once installation is complete, Heat-Translator is ready to use. Currently you can use it in following three ways.
+Once installation is complete, Heat-Translator is ready to use. The only required argument is ``--template-file``. By default, ``--template-type`` is set to ``tosca``, which is the
+only supported template type at present. Currently you can use Heat-Translator in the following three ways.
Translate and get output on command line. For example: ::
    python heat_translator.py --template-file=<path to the YAML template> --template-type=<type of template e.g. tosca> --parameters="purpose=test"
The heat_translator.py test program is at the root level of the project. The program has currently been tested with TOSCA templates.
-It requires two arguments::
-
-1. Path to the file that needs to be translated. The file, flat yaml template or CSAR, can be specified as a local file in your
+The only required argument is ``--template-file``. By default, the ``--template-type`` is set to ``tosca`` which is the only supported template type at present.
+The value of ``--template-file`` is the path to the file that needs to be translated. The file, a flat YAML template or CSAR, can be specified as a local file in your
system or via URL.
-2. Type of translation (e.g. tosca)
For example, a TOSCA hello world template can be translated by running the following command from the project location::
- python heat_translator.py --template-file=translator/tests/data/tosca_helloworld.yaml --template-type=tosca
+ python heat_translator.py --template-file=translator/tests/data/tosca_helloworld.yaml
This should produce a translated Heat Orchestration Template on the command line. The translated content can be saved to a desired file by setting ``--output-file=<path>``.
For example: ::
An optional argument can be provided to handle user input parameters. Also, a template file can be validated, instead of translated, by using the ``--validate-only=true``
optional argument. The command below shows an example usage::
- python heat_translator.py --template-file==<path to the YAML template> --template-type=<type of template e.g. tosca> --validate-only=true
+ python heat_translator.py --template-file=<path to the YAML template> --template-type=<type of template e.g. tosca> --validate-only=true
Alternatively, you can install a particular release of Heat-Translator as available at https://pypi.python.org/pypi/heat-translator.
In this case, you can simply run translation via CLI entry point::
capabilities. However, users may need to use these properties in a template in certain circumstances; in that case, TOSCA Compute can be extended
with these properties and later used in the node template. For a good example, refer to the ``translator/tests/data/test_tosca_flavor_and_image.yaml`` test
template.
-
+* The Heat-Translator can be used to automatically deploy a translated TOSCA template, provided that your environment has python-heatclient and python-keystoneclient installed.
+  This can be achieved by providing the ``--deploy`` argument to the Heat-Translator. You can provide a desired stack name via the ``--stack-name <name>``
+  argument. If you do not provide ``--stack-name``, a unique name will be created and used.
+ Below is an example command to deploy translated template with a desired stack name::
+ heat-translator --template-file translator/tests/data/tosca_helloworld.yaml --stack-name mystack --deploy
+* The Heat-Translator supports translation of TOSCA templates to Heat Senlin
+  resources (e.g. ``OS::Senlin::Cluster``), but that requires using a specific
+  TOSCA policy type called ``tosca.policies.Scaling.Cluster``.
+ The ``tosca.policies.Scaling.Cluster`` is a custom type that derives from
+ ``tosca.policies.Scaling``. For example usage, refer to the
+ ``tosca_cluster_autoscaling.yaml`` and ``hot_cluster_autoscaling.yaml``
+ provided under the ``translator/tests/data/autoscaling`` and
+ ``translator/tests/data/hot_output/autoscaling`` directories respectively in
+ the heat-translator project (``https://github.com/openstack/heat-translator``).
+  When you use the normative ``tosca.policies.Scaling`` policy type, the
+  Heat-Translator will translate it to the ``OS::Heat::AutoScalingGroup`` Heat
+  resource. Related example templates, ``tosca_autoscaling.yaml`` and
+ ``hot_autoscaling.yaml`` can be found for reference purposes under the same
+ directory structure mentioned above.
+* With the version 0.7.0 of Heat-Translator, output of multiple template files
+ (for example, nested templates in autoscaling) can be accessed via newly
+ introduced API called ``translate_to_yaml_files_dict(<output_filename>)``
+ where ``<output_filename>`` is the name of file where you want to store parent
+ HOT template. The return value of this API call will be a dictionary in HOT
+ YAML with one or multiple file names as keys and translated content as values.
+ In order to use this on the command line, simply invoke Heat-Translator with
+  ``--output-file`` argument. Here, the parent template will be stored in the
+  file specified by ``--output-file``, while child templates, if any,
+  will be saved in the same location as the parent template.
+
+ Below is an example of how to call the API in your code, where
+ ``translator`` is an instance of Heat-Translator::
+
+ yaml_files = translator.translate_to_yaml_files_dict(filename)
+
+ Below is an example of how to use this on the command line::
+
+ heat-translator --template-file translator/tests/data/autoscaling/tosca_autoscaling.yaml --output-file /tmp/hot.yaml
\ No newline at end of file
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
-pbr>=1.6 # Apache-2.0
+pbr>=1.8 # Apache-2.0
Babel>=2.3.4 # BSD
-cliff!=1.16.0,!=1.17.0,>=1.15.0 # Apache-2.0
-PyYAML>=3.1.0 # MIT
+cliff>=2.3.0 # Apache-2.0
+PyYAML>=3.10.0 # MIT
python-dateutil>=2.4.2 # BSD
six>=1.9.0 # MIT
-tosca-parser>=0.5.0 # Apache-2.0
+tosca-parser>=0.7.0 # Apache-2.0
+keystoneauth1>=2.18.0 # Apache-2.0
+python-novaclient>=7.1.0 # Apache-2.0
+python-heatclient>=1.6.1 # Apache-2.0
+python-glanceclient>=2.5.0 # Apache-2.0
+requests!=2.12.2,!=2.13.0,>=2.10.0 # Apache-2.0
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
-home-page = http://www.openstack.org/
+home-page = http://docs.openstack.org/developer/heat-translator/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
+ Programming Language :: Python :: 3.5
[files]
packages =
translator
+package_data =
+ conf = conf/*.conf
[entry_points]
openstack.cli.extension =
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.0
-coverage>=3.6 # Apache-2.0
-discover # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+coverage>=4.0 # Apache-2.0
+fixtures>=3.0.0 # Apache-2.0/BSD
oslotest>=1.10.0 # Apache-2.0
-oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+oslosphinx>=4.7.0 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+sphinx>=1.5.1 # BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
--- /dev/null
+#!/usr/bin/env bash
+
+# Client constraint file contains this client version pin that is in conflict
+# with installing the client from source. We should remove the version pin in
+# the constraints file before applying it for from-source installation.
+
+CONSTRAINTS_FILE="$1"
+shift 1
+
+set -e
+
+# NOTE(tonyb): Place this in the tox environment's log dir so it will get
+# published to logs.openstack.org for easy debugging.
+localfile="$VIRTUAL_ENV/log/upper-constraints.txt"
+
+if [[ "$CONSTRAINTS_FILE" != http* ]]; then
+ CONSTRAINTS_FILE="file://$CONSTRAINTS_FILE"
+fi
+# NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep
+curl "$CONSTRAINTS_FILE" --insecure --progress-bar --output "$localfile"
+
+pip install -c"$localfile" openstack-requirements
+
+# This is the main purpose of the script: Allow local installation of
+# the current repo. It is listed in constraints file and thus any
+# install will be constrained and we need to unconstrain it.
+edit-constraints "$localfile" -- "$CLIENT_NAME"
+
+pip install -c"$localfile" -U "$@"
+exit $?
'exists and has no language definition errors.')
+class UnsupportedTypeError(TOSCAException):
+ msg_fmt = _('Type "%(type)s" is valid TOSCA type but translation '
+ 'support is not yet available.')
+
+
class ToscaClassAttributeError(TOSCAException):
msg_fmt = _('Class attribute referenced not found. '
'%(message)s. Check to see that it is defined.')
--- /dev/null
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from toscaparser.utils.gettextutils import _
+
+try:
+ import novaclient.client
+ client_available = True
+except ImportError:
+    client_available = False
+
+log = logging.getLogger('heat-translator')
+
+
+PREDEF_FLAVORS = {
+ 'm1.xlarge': {'mem_size': 16384, 'disk_size': 160, 'num_cpus': 8},
+ 'm1.large': {'mem_size': 8192, 'disk_size': 80, 'num_cpus': 4},
+ 'm1.medium': {'mem_size': 4096, 'disk_size': 40, 'num_cpus': 2},
+ 'm1.small': {'mem_size': 2048, 'disk_size': 20, 'num_cpus': 1},
+ 'm1.tiny': {'mem_size': 512, 'disk_size': 1, 'num_cpus': 1},
+ 'm1.micro': {'mem_size': 128, 'disk_size': 0, 'num_cpus': 1},
+ 'm1.nano': {'mem_size': 64, 'disk_size': 0, 'num_cpus': 1}
+}
+
+SESSION = None
+
+FLAVORS = {}
+
+
+def get_flavors():
+ global FLAVORS
+
+ if FLAVORS:
+ return FLAVORS
+
+ if SESSION is not None and client_available:
+ try:
+ client = novaclient.client.Client("2", session=SESSION)
+ except Exception as e:
+            # Handles any exception coming from OpenStack
+            log.warning(_('Choosing predefined flavors since received '
+                          'OpenStack Exception: %s') % str(e))
+ else:
+ for flv in client.flavors.list(detailed=True):
+ FLAVORS[str(flv.name)] = {
+ "mem_size": flv.ram,
+ "disk_size": flv.disk,
+ "num_cpus": flv.vcpus
+ }
+
+ if not FLAVORS:
+ FLAVORS = PREDEF_FLAVORS
+
+ return FLAVORS
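The get_flavors() logic above follows a cached lookup-with-fallback pattern: query the live cloud if a session and client are available, otherwise fall back to the predefined table, caching whichever result is obtained. A minimal standalone sketch of that pattern, with illustrative names (fetch_live_flavors, DEFAULTS) that are not part of the heat-translator API:

```python
# Sketch of the cached lookup-with-fallback pattern used by get_flavors().
# fetch_live_flavors stands in for the novaclient call; DEFAULTS stands in
# for PREDEF_FLAVORS. Both names are illustrative.

DEFAULTS = {'m1.small': {'mem_size': 2048, 'disk_size': 20, 'num_cpus': 1}}

_cache = {}


def get_flavors(fetch_live_flavors=None):
    """Return flavors from the live source if possible, else DEFAULTS."""
    global _cache
    if _cache:
        # Cached result wins on subsequent calls.
        return _cache
    if fetch_live_flavors is not None:
        try:
            _cache = dict(fetch_live_flavors())
        except Exception:
            # Any client failure falls back to the predefined table.
            _cache = {}
    if not _cache:
        _cache = DEFAULTS
    return _cache
```

Note that, as in the original, a failed or absent live lookup silently degrades to the predefined flavors rather than raising.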
--- /dev/null
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+
+from toscaparser.utils.gettextutils import _
+
+try:
+ import glanceclient.client
+ client_available = True
+except ImportError:
+    client_available = False
+
+log = logging.getLogger('heat-translator')
+
+
+PREDEF_IMAGES = {
+ 'ubuntu-software-config-os-init': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Ubuntu',
+ 'version': '14.04'},
+ 'ubuntu-12.04-software-config-os-init': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Ubuntu',
+ 'version': '12.04'},
+ 'fedora-amd64-heat-config': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Fedora',
+ 'version': '18.0'},
+ 'F18-x86_64-cfntools': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Fedora',
+ 'version': '19'},
+ 'Fedora-x86_64-20-20131211.1-sda': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Fedora',
+ 'version': '20'},
+ 'cirros-0.3.1-x86_64-uec': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'CirrOS',
+ 'version': '0.3.1'},
+ 'cirros-0.3.2-x86_64-uec': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'CirrOS',
+ 'version': '0.3.2'},
+ 'rhel-6.5-test-image': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'RHEL',
+ 'version': '6.5'}
+}
+
+SESSION = None
+
+IMAGES = {}
+
+
+def get_images():
+ global IMAGES
+
+ if IMAGES:
+ return IMAGES
+
+ if SESSION is not None and client_available:
+ try:
+ client = glanceclient.client.Client("2", session=SESSION)
+ except Exception as e:
+            # Handles any exception coming from OpenStack
+            log.warning(_('Choosing predefined images since received '
+                          'OpenStack Exception: %s') % str(e))
+ else:
+ for image in client.images.list():
+ image_name = image.name.encode('ascii', 'ignore')
+ metadata = ["architecture", "type", "distribution", "version"]
+ if any(key in image.keys() for key in metadata):
+ IMAGES[image_name] = {}
+ for key in metadata:
+ if key in image.keys():
+ IMAGES[image_name][key] = image[key]
+
+ if not IMAGES:
+ IMAGES = PREDEF_IMAGES
+
+ return IMAGES
import os
import re
import requests
+import six
from six.moves.urllib.parse import urlparse
+import tempfile
import yaml
+import zipfile
from toscaparser.utils.gettextutils import _
import toscaparser.utils.yamlparser
def get_dict(yaml_file):
'''Returns the dictionary representation of the given YAML spec.'''
try:
- return yaml.load(open(yaml_file))
+        with open(yaml_file) as f:
+            return yaml.safe_load(f)
except IOError:
return None
class TranslationUtils(object):
@staticmethod
- def compare_tosca_translation_with_hot(tosca_file, hot_file, params):
+ def compare_tosca_translation_with_hot(tosca_file, hot_files, params):
'''Verify tosca translation against the given hot specification.
inputs:
if not a_file:
tosca_tpl = tosca_file
- expected_hot_tpl = os.path.join(
- os.path.dirname(os.path.abspath(__file__)), hot_file)
+ expected_hot_templates = []
+ for hot_file in hot_files:
+ expected_hot_templates.append(os.path.join(
+ os.path.dirname(os.path.abspath(__file__)), hot_file))
tosca = ToscaTemplate(tosca_tpl, params, a_file)
translate = TOSCATranslator(tosca, params)
- output = translate.translate()
- output_dict = toscaparser.utils.yamlparser.simple_parse(output)
- expected_output_dict = YamlUtils.get_dict(expected_hot_tpl)
- return CompareUtils.diff_dicts(output_dict, expected_output_dict)
+ basename = os.path.basename(hot_files[0])
+ output_hot_templates = translate.translate_to_yaml_files_dict(basename)
+ output_dict = {}
+ for output_hot_template_name in output_hot_templates:
+ output_dict[output_hot_template_name] = \
+ toscaparser.utils.yamlparser.simple_parse(
+ output_hot_templates[output_hot_template_name])
+
+ expected_output_dict = {}
+ for expected_hot_template in expected_hot_templates:
+ expected_output_dict[os.path.basename(expected_hot_template)] = \
+ YamlUtils.get_dict(expected_hot_template)
+
+ return CompareUtils.diff_dicts(expected_output_dict, output_dict)
class UrlUtils(object):
def str_to_num(value):
"""Convert a string representation of a number into a numeric type."""
- if isinstance(value, numbers.Number):
+ if isinstance(value, numbers.Number) \
+ or isinstance(value, six.integer_types) \
+ or isinstance(value, float):
return value
try:
return int(value)
except ValueError:
- return float(value)
+ try:
+ return float(value)
+ except ValueError:
+ return None
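The revised str_to_num() now returns None instead of raising when the string is not numeric. A self-contained version showing that behavior:

```python
import numbers


def str_to_num(value):
    """Convert a string to int or float; return None if not numeric."""
    if isinstance(value, numbers.Number):
        # Already numeric; pass through unchanged.
        return value
    try:
        return int(value)
    except ValueError:
        try:
            return float(value)
        except ValueError:
            # Not a number at all; callers must handle None.
            return None
```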
def check_for_env_variables():
if access_dict is None:
return None
return access_dict['access']['token']['id']
+
+
+def decompress(zip_file, dir=None):
+ """Decompress Zip file
+
+ Decompress any zip file. For example, TOSCA CSAR
+
+ inputs:
+ zip_file: file in zip format
+        dir: directory to decompress the zip into. If not provided, a unique
+             temporary directory will be generated and used.
+    return:
+        dir: absolute path to the decompressed directory
+ """
+ if not dir:
+ dir = tempfile.NamedTemporaryFile().name
+ with zipfile.ZipFile(zip_file, "r") as zf:
+ zf.extractall(dir)
+ return dir
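A quick usage sketch of the decompress() helper above, building a tiny zip and extracting it; this sketch uses tempfile.mkdtemp() for the fallback directory rather than the NamedTemporaryFile().name approach in the original, and all paths are illustrative:

```python
import os
import tempfile
import zipfile


def decompress(zip_file, dir=None):
    """Extract a zip archive; create a temporary directory if none given."""
    if not dir:
        dir = tempfile.mkdtemp()
    with zipfile.ZipFile(zip_file, "r") as zf:
        zf.extractall(dir)
    return dir


# Build a small archive to demonstrate extraction.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, 'sample.zip')
with zipfile.ZipFile(archive, 'w') as zf:
    zf.writestr('hello.txt', 'hello')

out_dir = decompress(archive)
with open(os.path.join(out_dir, 'hello.txt')) as f:
    extracted = f.read()
```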
+
+
+def get_dict_value(dict_item, key, get_files):
+ if key in dict_item:
+ return get_files.append(dict_item[key])
+ for k, v in dict_item.items():
+ if isinstance(v, dict):
+ get_dict_value(v, key, get_files)
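get_dict_value() walks nested dictionaries and collects every value found under the given key into the accumulator list. A self-contained demonstration (the template dict below is a made-up example, not a real TOSCA document):

```python
def get_dict_value(dict_item, key, get_files):
    """Recursively collect values for `key` from a nested dict."""
    if key in dict_item:
        # Append and stop descending this branch.
        return get_files.append(dict_item[key])
    for k, v in dict_item.items():
        if isinstance(v, dict):
            get_dict_value(v, key, get_files)


# Hypothetical nested structure, loosely shaped like a parsed template.
template = {
    'topology_template': {
        'node_templates': {
            'server': {'artifacts': {'get_file': 'scripts/install.sh'}}
        }
    }
}

found = []
get_dict_value(template, 'get_file', found)
```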
class=handlers.SysLogHandler
formatter=form01
level=INFO
+# for linux
args=('/dev/log', handlers.SysLogHandler.LOG_SYSLOG)
+# for mac
+#args=('/var/run/syslog', handlers.SysLogHandler.LOG_SYSLOG)
[handler_NullHandler]
class=NullHandler
self.description = description
def get_dict_output(self):
- return {self.name: {'value': self.value,
- 'description': self.description}}
+ if self.description:
+ return {self.name: {'value': self.value,
+ 'description': self.description}}
+ else:
+ return {self.name: {'value': self.value}}
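The change above omits the ``description`` key from an output entry when no description was provided. A trimmed, dependency-free sketch of the output object (class and attribute names mirror the excerpt but the constructor is assumed):

```python
class HotOutput(object):
    """Minimal sketch of a HOT output entry."""

    def __init__(self, name, value, description=None):
        self.name = name
        self.value = value
        self.description = description

    def get_dict_output(self):
        # Only emit 'description' when one was actually provided.
        if self.description:
            return {self.name: {'value': self.value,
                                'description': self.description}}
        return {self.name: {'value': self.value}}
```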
DELETION_POLICY) = \
('type', 'properties', 'metadata',
'depends_on', 'update_policy', 'deletion_policy')
+
+policy_type = ['tosca.policies.Placement',
+ 'tosca.policies.Scaling',
+ 'tosca.policies.Scaling.Cluster']
log = logging.getLogger('heat-translator')
def __init__(self, nodetemplate, name=None, type=None, properties=None,
metadata=None, depends_on=None,
- update_policy=None, deletion_policy=None):
+ update_policy=None, deletion_policy=None, csar_dir=None):
log.debug(_('Translating TOSCA node type to HOT resource type.'))
self.nodetemplate = nodetemplate
if name:
self.name = nodetemplate.name
self.type = type
self.properties = properties or {}
+
+ self.csar_dir = csar_dir
# special case for HOT softwareconfig
+ cwd = os.getcwd()
if type == 'OS::Heat::SoftwareConfig':
config = self.properties.get('config')
- if config:
- implementation_artifact = config.get('get_file')
+ if isinstance(config, dict):
+ if self.csar_dir:
+ os.chdir(self.csar_dir)
+ implementation_artifact = os.path.abspath(config.get(
+ 'get_file'))
+ else:
+ implementation_artifact = config.get('get_file')
if implementation_artifact:
filename, file_extension = os.path.splitext(
implementation_artifact)
if self.properties.get('group') is None:
self.properties['group'] = 'script'
-
+ os.chdir(cwd)
self.metadata = metadata
# The difference between depends_on and depends_on_nodes is
# scenarios and cannot be fixed or hard coded here
operations_deploy_sequence = ['create', 'configure', 'start']
- operations = HotResource._get_all_operations(self.nodetemplate)
+ operations = HotResource.get_all_operations(self.nodetemplate)
# create HotResource for each operation used for deployment:
# create, start, configure
hosting_server = None
if self.nodetemplate.requirements is not None:
hosting_server = self._get_hosting_server()
+
+        sw_deployment_resource = HOTSoftwareDeploymentResources(hosting_server)
+        server_key = sw_deployment_resource.server_key
+        servers = sw_deployment_resource.servers
+        sw_deploy_res = sw_deployment_resource.software_deployment
+
+ # hosting_server is None if requirements is None
+ hosting_on_server = hosting_server if hosting_server else None
+ base_type = HotResource.get_base_type_str(
+ self.nodetemplate.type_definition)
+ # if we are on a compute node the host is self
+ if hosting_on_server is None and base_type == 'tosca.nodes.Compute':
+ hosting_on_server = self.name
+ servers = {'get_resource': self.name}
+
+ cwd = os.getcwd()
for operation in operations.values():
if operation.name in operations_deploy_sequence:
config_name = node_name + '_' + operation.name + '_config'
deploy_name = node_name + '_' + operation.name + '_deploy'
+ if self.csar_dir:
+ os.chdir(self.csar_dir)
+ get_file = os.path.abspath(operation.implementation)
+ else:
+ get_file = operation.implementation
hot_resources.append(
HotResource(self.nodetemplate,
config_name,
'OS::Heat::SoftwareConfig',
{'config':
- {'get_file': operation.implementation}}))
-
- # hosting_server is None if requirements is None
- hosting_on_server = (hosting_server.name if
- hosting_server else None)
- if operation.name == reserve_current:
+ {'get_file': get_file}},
+ csar_dir=self.csar_dir))
+ if operation.name == reserve_current and \
+ base_type != 'tosca.nodes.Compute':
deploy_resource = self
self.name = deploy_name
- self.type = 'OS::Heat::SoftwareDeployment'
+ self.type = sw_deploy_res
self.properties = {'config': {'get_resource': config_name},
- 'server': {'get_resource':
- hosting_on_server},
+ server_key: servers,
'signal_transport': 'HEAT_SIGNAL'}
- deploy_lookup[operation.name] = self
+ deploy_lookup[operation] = self
else:
sd_config = {'config': {'get_resource': config_name},
- 'server': {'get_resource':
- hosting_on_server},
+ server_key: servers,
'signal_transport': 'HEAT_SIGNAL'}
deploy_resource = \
HotResource(self.nodetemplate,
deploy_name,
- 'OS::Heat::SoftwareDeployment',
- sd_config)
+ sw_deploy_res,
+ sd_config, csar_dir=self.csar_dir)
hot_resources.append(deploy_resource)
- deploy_lookup[operation.name] = deploy_resource
+ deploy_lookup[operation] = deploy_resource
lifecycle_inputs = self._get_lifecycle_inputs(operation)
if lifecycle_inputs:
deploy_resource.properties['input_values'] = \
lifecycle_inputs
+ os.chdir(cwd)
# Add dependencies for the set of HOT resources in the sequence defined
# in operations_deploy_sequence
# TODO(anyone): find some better way to encode this implicit sequence
group = {}
+ op_index_min = None
+ op_index_max = -1
for op, hot in deploy_lookup.items():
# position to determine potential preceding nodes
- op_index = operations_deploy_sequence.index(op)
- for preceding_op in \
+ op_index = operations_deploy_sequence.index(op.name)
+ if op_index_min is None or op_index < op_index_min:
+ op_index_min = op_index
+ if op_index > op_index_max:
+ op_index_max = op_index
+ for preceding_op_name in \
reversed(operations_deploy_sequence[:op_index]):
- preceding_hot = deploy_lookup.get(preceding_op)
+ preceding_hot = deploy_lookup.get(
+ operations.get(preceding_op_name))
if preceding_hot:
hot.depends_on.append(preceding_hot)
hot.depends_on_nodes.append(preceding_hot)
group[preceding_hot] = hot
break
+ if op_index_max >= 0:
+ last_deploy = deploy_lookup.get(operations.get(
+ operations_deploy_sequence[op_index_max]))
+ else:
+ last_deploy = None
+
# save this dependency chain in the set of HOT resources
self.group_dependencies.update(group)
for hot in hot_resources:
hot.group_dependencies.update(group)
- return hot_resources
+ roles_deploy_resource = self._handle_ansiblegalaxy_roles(
+ hot_resources, node_name, servers)
+
+ # add a dependency to this ansible roles deploy to
+ # the first "classic" deploy generated for this node
+        if roles_deploy_resource and op_index_min is not None:
+ first_deploy = deploy_lookup.get(operations.get(
+ operations_deploy_sequence[op_index_min]))
+ first_deploy.depends_on.append(roles_deploy_resource)
+ first_deploy.depends_on_nodes.append(roles_deploy_resource)
+
+ return hot_resources, deploy_lookup, last_deploy
+
+ def _handle_ansiblegalaxy_roles(self, hot_resources, initial_node_name,
+ hosting_on_server):
+ artifacts = self.get_all_artifacts(self.nodetemplate)
+ install_roles_script = ''
+
+        sw_deployment_resource = \
+            HOTSoftwareDeploymentResources(hosting_on_server)
+        server_key = sw_deployment_resource.server_key
+        sw_deploy_res = sw_deployment_resource.software_deployment
+ for artifact_name, artifact in artifacts.items():
+ artifact_type = artifact.get('type', '').lower()
+ if artifact_type == 'tosca.artifacts.ansiblegalaxy.role':
+ role = artifact.get('file', None)
+ if role:
+ install_roles_script += 'ansible-galaxy install ' + role \
+ + '\n'
+
+ if install_roles_script:
+ # remove trailing \n
+ install_roles_script = install_roles_script[:-1]
+ # add shebang and | to use literal scalar type (for multiline)
+ install_roles_script = '|\n#!/bin/bash\n' + install_roles_script
+
+ config_name = initial_node_name + '_install_roles_config'
+ deploy_name = initial_node_name + '_install_roles_deploy'
+ hot_resources.append(
+ HotResource(self.nodetemplate, config_name,
+ 'OS::Heat::SoftwareConfig',
+ {'config': install_roles_script},
+ csar_dir=self.csar_dir))
+ sd_config = {'config': {'get_resource': config_name},
+ server_key: hosting_on_server,
+ 'signal_transport': 'HEAT_SIGNAL'}
+ deploy_resource = \
+ HotResource(self.nodetemplate, deploy_name,
+ sw_deploy_res,
+ sd_config, csar_dir=self.csar_dir)
+ hot_resources.append(deploy_resource)
+
+ return deploy_resource
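The role-collection step above boils down to filtering artifacts by type and concatenating ``ansible-galaxy install`` lines, then prefixing a shebang and the YAML literal-block marker. Sketched in isolation (the artifact names and role below are hypothetical):

```python
def build_install_roles_script(artifacts):
    """Build a bash script installing every AnsibleGalaxy role artifact."""
    script = ''
    for name, artifact in artifacts.items():
        if artifact.get('type', '').lower() == \
                'tosca.artifacts.ansiblegalaxy.role':
            role = artifact.get('file')
            if role:
                script += 'ansible-galaxy install ' + role + '\n'
    if script:
        # Strip the trailing newline, then add shebang and the '|' marker
        # so YAML treats the config as a multiline literal scalar.
        script = '|\n#!/bin/bash\n' + script[:-1]
    return script


script = build_install_roles_script({
    'web_role': {'type': 'tosca.artifacts.AnsibleGalaxy.role',
                 'file': 'geerlingguy.nginx'},
    'other': {'type': 'tosca.artifacts.File', 'file': 'x.sh'},
})
```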
def handle_connectsto(self, tosca_source, tosca_target, hot_source,
hot_target, config_location, operation):
elif config_location == 'source':
hosting_server = self._get_hosting_server()
hot_depends = hot_source
+        sw_deployment_resource = HOTSoftwareDeploymentResources(hosting_server)
+        server_key = sw_deployment_resource.server_key
+        servers = sw_deployment_resource.servers
+        sw_deploy_res = sw_deployment_resource.software_deployment
+
deploy_name = tosca_source.name + '_' + tosca_target.name + \
'_connect_deploy'
sd_config = {'config': {'get_resource': self.name},
- 'server': {'get_resource': hosting_server.name},
+ server_key: servers,
'signal_transport': 'HEAT_SIGNAL'}
deploy_resource = \
HotResource(self.nodetemplate,
deploy_name,
- 'OS::Heat::SoftwareDeployment',
+ sw_deploy_res,
sd_config,
- depends_on=[hot_depends])
+ depends_on=[hot_depends], csar_dir=self.csar_dir)
connect_inputs = self._get_connect_inputs(config_location, operation)
if connect_inputs:
deploy_resource.properties['input_values'] = connect_inputs
# handle hosting server for the OS:HEAT::SoftwareDeployment
# from the TOSCA nodetemplate, traverse the relationship chain
# down to the server
- if self.type == 'OS::Heat::SoftwareDeployment':
+ sw_deploy_group = \
+ HOTSoftwareDeploymentResources.HOT_SW_DEPLOYMENT_GROUP_RESOURCE
+ sw_deploy = HOTSoftwareDeploymentResources.HOT_SW_DEPLOYMENT_RESOURCE
+
+ if self.properties.get('servers') and \
+ self.properties.get('server'):
+ del self.properties['server']
+ if self.type == sw_deploy_group or self.type == sw_deploy:
# skip if already have hosting
# If type is NodeTemplate, look up corresponding HotResrouce
- host_server = self.properties.get('server')
- if host_server is None or not host_server['get_resource']:
+ host_server = self.properties.get('servers') \
+ or self.properties.get('server')
+ if host_server is None:
raise Exception(_("Internal Error: expecting host "
"in software deployment"))
- elif isinstance(host_server['get_resource'], NodeTemplate):
+
+ elif isinstance(host_server.get('get_resource'), NodeTemplate):
self.properties['server']['get_resource'] = \
host_server['get_resource'].name
+ elif isinstance(host_server, dict) and \
+ not host_server.get('get_resource'):
+ self.properties['servers'] = \
+ host_server
+
def top_of_chain(self):
dependent = self.group_dependencies.get(self)
if dependent is None:
else:
return dependent.top_of_chain()
+    # this function allows providing substacks as external files
+    # those files will be dumped alongside the output file.
+ #
+ # return a dict of filename-content
+ def extract_substack_templates(self, base_filename, hot_template_version):
+ return {}
+
+ # this function asks the resource to embed substacks
+ # into the main template, if any.
+ # this is used when the final output is stdout
+ def embed_substack_templates(self, hot_template_version):
+ pass
+
def get_dict_output(self):
resource_sections = OrderedDict()
resource_sections[TYPE] = self.type
inputs = operation.value.get('inputs')
deploy_inputs = {}
if inputs:
- for name, value in six.iteritems(inputs):
+ for name, value in inputs.items():
deploy_inputs[name] = value
return deploy_inputs
inputs = operation.get('pre_configure_source').get('inputs')
deploy_inputs = {}
if inputs:
- for name, value in six.iteritems(inputs):
+ for name, value in inputs.items():
deploy_inputs[name] = value
return deploy_inputs
def _get_hosting_server(self, node_template=None):
# find the server that hosts this software by checking the
# requirements and following the hosting chain
+ hosting_servers = []
+ host_exists = False
this_node_template = self.nodetemplate \
if node_template is None else node_template
for requirement in this_node_template.requirements:
- for requirement_name, assignment in six.iteritems(requirement):
+ for requirement_name, assignment in requirement.items():
for check_node in this_node_template.related_nodes:
# check if the capability is Container
if isinstance(assignment, dict):
if node_name and node_name == check_node.name:
if self._is_container_type(requirement_name,
check_node):
- return check_node
- elif check_node.related_nodes:
+ hosting_servers.append(check_node.name)
+ host_exists = True
+ elif check_node.related_nodes and not host_exists:
return self._get_hosting_server(check_node)
+ if hosting_servers:
+ return hosting_servers
return None
def _is_container_type(self, requirement_name, node):
# capability is a list of dict
# For now just check if it's type tosca.nodes.Compute
# TODO(anyone): match up requirement and capability
- base_type = HotResource.get_base_type(node.type_definition)
- if base_type.type == 'tosca.nodes.Compute':
+ base_type = HotResource.get_base_type_str(node.type_definition)
+ if base_type == 'tosca.nodes.Compute':
return True
else:
return False
return tosca_props
@staticmethod
- def _get_all_operations(node):
+ def get_all_artifacts(nodetemplate):
+ # workaround bug in the parser
+ base_type = HotResource.get_base_type_str(nodetemplate.type_definition)
+ if base_type in policy_type:
+ artifacts = {}
+ else:
+ artifacts = nodetemplate.type_definition.get_value('artifacts',
+ parent=True)
+ if not artifacts:
+ artifacts = {}
+ tpl_artifacts = nodetemplate.entity_tpl.get('artifacts')
+ if tpl_artifacts:
+ artifacts.update(tpl_artifacts)
+
+ return artifacts
+
+ @staticmethod
+ def get_all_operations(node):
operations = {}
for operation in node.interfaces:
operations[operation.name] = operation
return node_type
else:
return HotResource.get_base_type(node_type.parent_type)
- else:
+ return node_type.type
+
+ @staticmethod
+ def get_base_type_str(node_type):
+ if isinstance(node_type, six.string_types):
return node_type
+ if node_type.parent_type is not None:
+ parent_type_str = None
+ if isinstance(node_type.parent_type, six.string_types):
+ parent_type_str = node_type.parent_type
+ else:
+ parent_type_str = node_type.parent_type.type
+
+ if parent_type_str and parent_type_str.endswith('.Root'):
+ return node_type.type
+ else:
+ return HotResource.get_base_type_str(node_type.parent_type)
+
+ return node_type.type
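
The base-type walk added above terminates once the parent type is a `.Root` type. As a minimal standalone sketch (plain Python 3, using `str` in place of `six.string_types`, with a hypothetical `FakeType` stand-in for toscaparser's type objects rather than the real classes):

```python
# FakeType is an illustrative stand-in; only the attributes the walk
# uses (type, parent_type) are modeled here.
class FakeType:
    def __init__(self, type_name, parent_type=None):
        self.type = type_name
        self.parent_type = parent_type


def get_base_type_str(node_type):
    # Mirrors the diff: strings are returned as-is; otherwise recurse
    # up the parent chain until the parent is a '.Root' type.
    if isinstance(node_type, str):
        return node_type
    if node_type.parent_type is not None:
        if isinstance(node_type.parent_type, str):
            parent_type_str = node_type.parent_type
        else:
            parent_type_str = node_type.parent_type.type
        if parent_type_str and parent_type_str.endswith('.Root'):
            return node_type.type
        return get_base_type_str(node_type.parent_type)
    return node_type.type


root = FakeType('tosca.nodes.Root')
compute = FakeType('tosca.nodes.Compute', parent_type=root)
custom = FakeType('my.nodes.MyServer', parent_type=compute)
print(get_base_type_str(custom))  # tosca.nodes.Compute
```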
+
+
+class HOTSoftwareDeploymentResources(object):
+ """Provides HOT Software Deployment resources
+
+ SoftwareDeployment or SoftwareDeploymentGroup Resource
+ """
+
+ HOT_SW_DEPLOYMENT_RESOURCE = 'OS::Heat::SoftwareDeployment'
+ HOT_SW_DEPLOYMENT_GROUP_RESOURCE = 'OS::Heat::SoftwareDeploymentGroup'
+
+ def __init__(self, hosting_server=None):
+ self.software_deployment = self.HOT_SW_DEPLOYMENT_RESOURCE
+ self.software_deployment_group = self.HOT_SW_DEPLOYMENT_GROUP_RESOURCE
+ self.server_key = 'server'
+ self.hosting_server = hosting_server
+ self.servers = {}
+ if hosting_server is not None:
+ if len(self.hosting_server) == 1:
+ if isinstance(hosting_server, list):
+ self.servers['get_resource'] = self.hosting_server[0]
+ else:
+ for server in self.hosting_server:
+ self.servers[server] = {'get_resource': server}
+ self.software_deployment = self.software_deployment_group
+ self.server_key = 'servers'
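
The constructor above encodes a simple rule: a single hosting server yields an `OS::Heat::SoftwareDeployment` keyed by `server`, while several yield an `OS::Heat::SoftwareDeploymentGroup` keyed by `servers`. A self-contained sketch of that selection (the `select_deployment` helper is illustrative, not part of the project):

```python
# Condensed copy of the selection logic from the class above, for
# illustration only.
SW_DEPLOYMENT = 'OS::Heat::SoftwareDeployment'
SW_DEPLOYMENT_GROUP = 'OS::Heat::SoftwareDeploymentGroup'


def select_deployment(hosting_servers):
    """Return (resource_type, server_key, servers) for a server list."""
    if len(hosting_servers) == 1:
        # Single server: reference it directly.
        return SW_DEPLOYMENT, 'server', {'get_resource': hosting_servers[0]}
    # Multiple servers: switch to a deployment group with a servers map.
    servers = {s: {'get_resource': s} for s in hosting_servers}
    return SW_DEPLOYMENT_GROUP, 'servers', servers


print(select_deployment(['web_server']))
print(select_deployment(['web_1', 'web_2']))
```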
from collections import OrderedDict
import logging
+import os
import textwrap
from toscaparser.utils.gettextutils import _
import yaml
('heat_template_version', 'description', 'parameter_groups',
'parameters', 'resources', 'outputs', '__undefined__')
- VERSIONS = (LATEST,) = ('2014-10-16',)
+ VERSIONS = (LATEST,) = ('2013-05-23',)
def __init__(self):
self.resources = []
nodes.append((node_key, node_value))
return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', nodes)
- def output_to_yaml(self):
+ def output_to_yaml_files_dict(self, base_filename,
+ hot_template_version=LATEST):
+ yaml_files_dict = {}
+ base_filename, ext = os.path.splitext(base_filename)
+
+ # convert from inlined substack to a substack defined in another file
+ for resource in self.resources:
+ yaml_files_dict.update(
+ resource.extract_substack_templates(base_filename,
+ hot_template_version))
+
+ yaml_files_dict[base_filename + ext] = \
+ self.output_to_yaml(hot_template_version, False)
+
+ return yaml_files_dict
+
+ def output_to_yaml(self, hot_template_version=LATEST,
+ embed_substack_templates=True):
log.debug(_('Converting translated output to yaml format.'))
+
+ if embed_substack_templates:
+ # fully inlined substack by storing the template as a blob string
+ for resource in self.resources:
+ resource.embed_substack_templates(hot_template_version)
+
dict_output = OrderedDict()
# Version
- version_string = self.VERSION + ": " + self.LATEST + "\n\n"
+ version_string = self.VERSION + ": " + hot_template_version + "\n\n"
# Description
desc_str = ""
dict_output.update({self.OUTPUTS: all_outputs})
yaml.add_representer(OrderedDict, self.represent_ordereddict)
+ yaml.add_representer(dict, self.represent_ordereddict)
yaml_string = yaml.dump(dict_output, default_flow_style=False)
# get rid of the '' from yaml.dump around numbers
- yaml_string = yaml_string.replace('\'', '')
+ # also collapse doubled newlines into a single one; this appears
+ # to be a bug in the serialization of multiline literal scalars
+ yaml_string = yaml_string.replace('\'', '').replace('\n\n', '\n')
return version_string + desc_str + yaml_string
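
The custom representer registered above is what keeps HOT sections in insertion order, since `yaml.dump` would otherwise sort mapping keys alphabetically. A minimal standalone PyYAML sketch of the same trick:

```python
from collections import OrderedDict

import yaml


def represent_ordereddict(dumper, data):
    # Emit mapping nodes in insertion order instead of sorted order.
    nodes = []
    for key, value in data.items():
        nodes.append((dumper.represent_data(key),
                      dumper.represent_data(value)))
    return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', nodes)


yaml.add_representer(OrderedDict, represent_ordereddict)

doc = OrderedDict([('heat_template_version', '2013-05-23'),
                   ('resources', {}),
                   ('outputs', {})])
# heat_template_version is emitted first, as required by HOT, even
# though plain dict dumping would sort it after nothing alphabetically
# but put 'outputs' before 'resources'.
print(yaml.dump(doc, default_flow_style=False))
```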
'server, http://<IP>:3000',
'value':
{'get_attr':
- ['app_server', 'first_address']}},
+ ['app_server', 'networks', 'private', 0]}},
'mongodb_url':
{'description': 'URL for the mongodb server.',
'value':
{'get_attr':
- ['mongo_server', 'first_address']}}}
+ ['mongo_server', 'networks', 'private', 0]}}}
hot_translation_dict = \
toscaparser.utils.yamlparser.simple_parse(hot_translation)
--- /dev/null
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from toscaparser.nodetemplate import NodeTemplate
+from toscaparser.policy import Policy
+from toscaparser.tests.base import TestCase
+import toscaparser.utils.yamlparser
+from translator.hot.tosca.tosca_compute import ToscaCompute
+from translator.hot.tosca.tosca_policies_scaling import ToscaAutoscaling
+
+
+class AutoscalingTest(TestCase):
+
+ def _tosca_scaling_test(self, tpl_snippet, expectedprops):
+ nodetemplates = (toscaparser.utils.yamlparser.
+ simple_parse(tpl_snippet)['node_templates'])
+ policies = (toscaparser.utils.yamlparser.
+ simple_parse(tpl_snippet)['policies'])
+ name = list(nodetemplates.keys())[0]
+ policy_name = list(policies[0].keys())[0]
+ for policy in policies:
+ tpl = policy[policy_name]
+ targets = tpl["targets"]
+ properties = tpl["properties"]
+ nodetemplate = NodeTemplate(name, nodetemplates)
+ toscacompute = ToscaCompute(nodetemplate)
+ toscacompute.handle_properties()
+ policy = Policy(policy_name, tpl, targets,
+ properties, "node_templates")
+ toscascaling = ToscaAutoscaling(policy)
+ parameters = toscascaling.handle_properties([toscacompute])
+ self.assertEqual(parameters[0].properties, expectedprops)
+
+ def test_compute_with_scaling(self):
+ tpl_snippet = '''
+ node_templates:
+ my_server_1:
+ type: tosca.nodes.Compute
+ capabilities:
+ host:
+ properties:
+ num_cpus: 2
+ disk_size: 10 GB
+ mem_size: 512 MB
+ os:
+ properties:
+ # host Operating System image properties
+ architecture: x86_64
+ type: Linux
+ distribution: RHEL
+ version: 6.5
+ policies:
+ - asg:
+ type: tosca.policies.Scaling
+ description: Simple node autoscaling
+ targets: [my_server_1]
+ triggers:
+ resize_compute:
+ description: trigger
+ condition:
+ constraint: utilization greater_than 50%
+ period: 60
+ evaluations: 1
+ method: average
+ properties:
+ min_instances: 2
+ max_instances: 10
+ default_instances: 3
+ increment: 1
+ '''
+
+ expectedprops = {'desired_capacity': 3,
+ 'max_size': 10,
+ 'min_size': 2,
+ 'resource': {'type': 'asg_res.yaml'}}
+
+ self._tosca_scaling_test(
+ tpl_snippet,
+ expectedprops)
# License for the specific language governing permissions and limitations
# under the License.
-import json
import mock
-from mock import patch
from toscaparser.nodetemplate import NodeTemplate
from toscaparser.tests.base import TestCase
-from toscaparser.utils.gettextutils import _
import toscaparser.utils.yamlparser
from translator.hot.tosca.tosca_compute import ToscaCompute
nodetemplates = (toscaparser.utils.yamlparser.
simple_parse(tpl_snippet)['node_templates'])
name = list(nodetemplates.keys())[0]
- try:
- nodetemplate = NodeTemplate(name, nodetemplates)
- nodetemplate.validate()
- toscacompute = ToscaCompute(nodetemplate)
- toscacompute.handle_properties()
- if not self._compare_properties(toscacompute.properties,
- expectedprops):
- raise Exception(_("Hot Properties are not"
- " same as expected properties"))
- except Exception:
- # for time being rethrowing. Will be handled future based
- # on new development in Glance and Graffiti
- raise
+ nodetemplate = NodeTemplate(name, nodetemplates)
+ nodetemplate.validate()
+ toscacompute = ToscaCompute(nodetemplate)
+ toscacompute.handle_properties()
- def _compare_properties(self, hotprops, expectedprops):
- return all(item in hotprops.items() for item in expectedprops.items())
+ self.assertEqual(expectedprops, toscacompute.properties)
def test_node_compute_with_host_and_os_capabilities(self):
tpl_snippet = '''
#left intentionally
'''
expectedprops = {'flavor': 'm1.large',
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
#left intentionally
'''
expectedprops = {'flavor': None,
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
type: tosca.nodes.Compute
'''
expectedprops = {'flavor': None,
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
#left intentionally
'''
expectedprops = {'flavor': None,
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
mem_size: 4 GB
'''
expectedprops = {'flavor': 'm1.large',
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
disk_size: 10 GB
'''
expectedprops = {'flavor': 'm1.large',
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
num_cpus: 4
'''
expectedprops = {'flavor': 'm1.large',
- 'image': None,
'user_data_format': 'SOFTWARE_CONFIG',
'software_config_transport': 'POLL_SERVER_HEAT'}
self._tosca_compute_test(
tpl_snippet,
expectedprops)
- @patch('requests.post')
- @patch('requests.get')
- @patch('os.getenv')
- def test_node_compute_with_nova_flavor(self, mock_os_getenv,
- mock_get, mock_post):
+ @mock.patch('translator.common.flavors.get_flavors')
+ def test_node_compute_with_nova_flavor(self, mock_flavor):
tpl_snippet = '''
node_templates:
server:
disk_size: 1 GB
mem_size: 1 GB
'''
- with patch('translator.common.utils.'
- 'check_for_env_variables') as mock_check_env:
- mock_check_env.return_value = True
- mock_os_getenv.side_effect = ['demo', 'demo',
- 'demo', 'http://abc.com/5000/',
- 'demo', 'demo',
- 'demo', 'http://abc.com/5000/']
- mock_ks_response = mock.MagicMock()
- mock_ks_response.status_code = 200
- mock_ks_content = {
- 'access': {
- 'token': {
- 'id': 'd1dfa603-3662-47e0-b0b6-3ae7914bdf76'
- },
- 'serviceCatalog': [{
- 'type': 'compute',
- 'endpoints': [{
- 'publicURL': 'http://abc.com'
- }]
- }]
- }
- }
- mock_ks_response.content = json.dumps(mock_ks_content)
- mock_nova_response = mock.MagicMock()
- mock_nova_response.status_code = 200
- mock_flavor_content = {
- 'flavors': [{
- 'name': 'm1.mock_flavor',
- 'ram': 1024,
- 'disk': 1,
- 'vcpus': 1
- }]
- }
- mock_nova_response.content = \
- json.dumps(mock_flavor_content)
- mock_post.return_value = mock_ks_response
- mock_get.return_value = mock_nova_response
- expectedprops = {'flavor': 'm1.mock_flavor',
- 'image': None,
- 'user_data_format': 'SOFTWARE_CONFIG',
- 'software_config_transport': 'POLL_SERVER_HEAT'}
- self._tosca_compute_test(
- tpl_snippet,
- expectedprops)
+ mock_flavor.return_value = {
+ 'm1.mock_flavor': {
+ 'mem_size': 1024,
+ 'disk_size': 1,
+ 'num_cpus': 1}
+ }
+ expectedprops = {'flavor': 'm1.mock_flavor',
+ 'user_data_format': 'SOFTWARE_CONFIG',
+ 'software_config_transport': 'POLL_SERVER_HEAT'}
+ self._tosca_compute_test(tpl_snippet, expectedprops)
- @patch('requests.post')
- @patch('requests.get')
- @patch('os.getenv')
- def test_node_compute_without_nova_flavor(self, mock_os_getenv,
- mock_get, mock_post):
+ @mock.patch('translator.common.images.get_images')
+ def test_node_compute_with_glance_image(self, mock_images):
tpl_snippet = '''
node_templates:
server:
num_cpus: 1
disk_size: 1 GB
mem_size: 1 GB
+ os:
+ properties:
+ architecture: x86_64
+ type: Linux
+ distribution: Fake Distribution
+ version: 19.0
'''
- with patch('translator.common.utils.'
- 'check_for_env_variables') as mock_check_env:
- mock_check_env.return_value = True
- mock_os_getenv.side_effect = ['demo', 'demo',
- 'demo', 'http://abc.com/5000/']
- mock_ks_response = mock.MagicMock()
- mock_ks_content = {}
- mock_ks_response.content = json.dumps(mock_ks_content)
- expectedprops = {'flavor': 'm1.small',
- 'image': None,
- 'user_data_format': 'SOFTWARE_CONFIG',
- 'software_config_transport': 'POLL_SERVER_HEAT'}
- self._tosca_compute_test(
- tpl_snippet,
- expectedprops)
+ mock_images.return_value = {
+ 'fake-image-foobar': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Fake Distribution',
+ 'version': '19.0'},
+ 'fake-image-foobar-old': {'architecture': 'x86_64',
+ 'type': 'Linux',
+ 'distribution': 'Fake Distribution',
+ 'version': '18.0'}
+ }
+ expectedprops = {'flavor': 'm1.small',
+ 'image': 'fake-image-foobar',
+ 'user_data_format': 'SOFTWARE_CONFIG',
+ 'software_config_transport': 'POLL_SERVER_HEAT'}
+ self._tosca_compute_test(tpl_snippet, expectedprops)
toscatype = 'tosca.nodes.BlockStorage'
- def __init__(self, nodetemplate):
+ def __init__(self, nodetemplate, csar_dir=None):
super(ToscaBlockStorage, self).__init__(nodetemplate,
- type='OS::Cinder::Volume')
+ type='OS::Cinder::Volume',
+ csar_dir=csar_dir)
pass
def handle_properties(self):
# attribute for the matching resource. Unless there is additional
# runtime support, this should be a one to one mapping.
if attribute == 'volume_id':
- attr['get_resource'] = args[0]
+ attr['get_resource'] = self.name
return attr
toscatype = 'tosca.nodes.BlockStorageAttachment'
- def __init__(self, template, nodetemplates, instance_uuid, volume_id):
+ def __init__(self, template, nodetemplates, instance_uuid, volume_id,
+ csar_dir=None):
super(ToscaBlockStorageAttachment,
- self).__init__(template, type='OS::Cinder::VolumeAttachment')
+ self).__init__(template, type='OS::Cinder::VolumeAttachment',
+ csar_dir=csar_dir)
self.nodetemplates = nodetemplates
self.instance_uuid = {'get_resource': instance_uuid}
self.volume_id = {'get_resource': volume_id}
self.properties.pop('device')
def handle_life_cycle(self):
- pass
+ return None, None, None
# License for the specific language governing permissions and limitations
# under the License.
-import json
import logging
-import requests
from toscaparser.utils.gettextutils import _
+from translator.common import flavors as nova_flavors
+from translator.common import images as glance_images
import translator.common.utils
from translator.hot.syntax.hot_resource import HotResource
+
log = logging.getLogger('heat-translator')
# Name used to dynamically load appropriate map class.
TARGET_CLASS_NAME = 'ToscaCompute'
-# A design issue to be resolved is how to translate the generic TOSCA server
-# properties to OpenStack flavors and images. At the Atlanta design summit,
-# there was discussion on using Glance to store metadata and Graffiti to
-# describe artifacts. We will follow these projects to see if they can be
-# leveraged for this TOSCA translation.
-# For development purpose at this time, we temporarily hardcode a list of
-# flavors and images here
-FLAVORS = {'m1.xlarge': {'mem_size': 16384, 'disk_size': 160, 'num_cpus': 8},
- 'm1.large': {'mem_size': 8192, 'disk_size': 80, 'num_cpus': 4},
- 'm1.medium': {'mem_size': 4096, 'disk_size': 40, 'num_cpus': 2},
- 'm1.small': {'mem_size': 2048, 'disk_size': 20, 'num_cpus': 1},
- 'm1.tiny': {'mem_size': 512, 'disk_size': 1, 'num_cpus': 1},
- 'm1.micro': {'mem_size': 128, 'disk_size': 0, 'num_cpus': 1},
- 'm1.nano': {'mem_size': 64, 'disk_size': 0, 'num_cpus': 1}}
-
-IMAGES = {'ubuntu-software-config-os-init': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'Ubuntu',
- 'version': '14.04'},
- 'ubuntu-12.04-software-config-os-init': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'Ubuntu',
- 'version': '12.04'},
- 'fedora-amd64-heat-config': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'Fedora',
- 'version': '18.0'},
- 'F18-x86_64-cfntools': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'Fedora',
- 'version': '19'},
- 'Fedora-x86_64-20-20131211.1-sda': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'Fedora',
- 'version': '20'},
- 'cirros-0.3.1-x86_64-uec': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'CirrOS',
- 'version': '0.3.1'},
- 'cirros-0.3.2-x86_64-uec': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'CirrOS',
- 'version': '0.3.2'},
- 'rhel-6.5-test-image': {'architecture': 'x86_64',
- 'type': 'Linux',
- 'distribution': 'RHEL',
- 'version': '6.5'}}
-
class ToscaCompute(HotResource):
'''Translate TOSCA node type tosca.nodes.Compute.'''
('architecture', 'distribution', 'type', 'version')
toscatype = 'tosca.nodes.Compute'
- def __init__(self, nodetemplate):
+ ALLOWED_NOVA_SERVER_PROPS = \
+ ('admin_pass', 'availability_zone', 'block_device_mapping',
+ 'block_device_mapping_v2', 'config_drive', 'diskConfig', 'flavor',
+ 'flavor_update_policy', 'image', 'image_update_policy', 'key_name',
+ 'metadata', 'name', 'networks', 'personality', 'reservation_id',
+ 'scheduler_hints', 'security_groups', 'software_config_transport',
+ 'user_data', 'user_data_format', 'user_data_update_policy')
+
+ def __init__(self, nodetemplate, csar_dir=None):
super(ToscaCompute, self).__init__(nodetemplate,
- type='OS::Nova::Server')
+ type='OS::Nova::Server',
+ csar_dir=csar_dir)
# List with associated hot port resources with this server
self.assoc_port_resources = []
pass
self.properties['software_config_transport'] = 'POLL_SERVER_HEAT'
tosca_props = self.get_tosca_props()
for key, value in tosca_props.items():
- self.properties[key] = value
+ if key in self.ALLOWED_NOVA_SERVER_PROPS:
+ self.properties[key] = value
# To be reorganized later based on new development in Glance and Graffiti
def translate_compute_flavor_and_image(self,
if os_cap_props:
image = self._best_image(os_cap_props)
hot_properties['flavor'] = flavor
- hot_properties['image'] = image
+ if image:
+ hot_properties['image'] = image
+ else:
+ hot_properties.pop('image', None)
return hot_properties
- def _create_nova_flavor_dict(self):
- '''Populates and returns the flavors dict using Nova ReST API'''
- try:
- access_dict = translator.common.utils.get_ks_access_dict()
- access_token = translator.common.utils.get_token_id(access_dict)
- if access_token is None:
- return None
- nova_url = translator.common.utils.get_url_for(access_dict,
- 'compute')
- if not nova_url:
- return None
- nova_response = requests.get(nova_url + '/flavors/detail',
- headers={'X-Auth-Token':
- access_token})
- if nova_response.status_code != 200:
- return None
- flavors = json.loads(nova_response.content)['flavors']
- flavor_dict = dict()
- for flavor in flavors:
- flavor_name = str(flavor['name'])
- flavor_dict[flavor_name] = {
- 'mem_size': flavor['ram'],
- 'disk_size': flavor['disk'],
- 'num_cpus': flavor['vcpus'],
- }
- except Exception as e:
- # Handles any exception coming from openstack
- log.warn(_('Choosing predefined flavors since received '
- 'Openstack Exception: %s') % str(e))
- return None
- return flavor_dict
-
- def _populate_image_dict(self):
- '''Populates and returns the images dict using Glance ReST API'''
- images_dict = {}
- try:
- access_dict = translator.common.utils.get_ks_access_dict()
- access_token = translator.common.utils.get_token_id(access_dict)
- if access_token is None:
- return None
- glance_url = translator.common.utils.get_url_for(access_dict,
- 'image')
- if not glance_url:
- return None
- glance_response = requests.get(glance_url + '/v2/images',
- headers={'X-Auth-Token':
- access_token})
- if glance_response.status_code != 200:
- return None
- images = json.loads(glance_response.content)["images"]
- for image in images:
- image_resp = requests.get(glance_url + '/v2/images/' +
- image["id"],
- headers={'X-Auth-Token':
- access_token})
- if image_resp.status_code != 200:
- continue
- metadata = ["architecture", "type", "distribution", "version"]
- image_data = json.loads(image_resp.content)
- if any(key in image_data.keys() for key in metadata):
- images_dict[image_data["name"]] = dict()
- for key in metadata:
- if key in image_data.keys():
- images_dict[image_data["name"]][key] = \
- image_data[key]
- else:
- continue
-
- except Exception as e:
- # Handles any exception coming from openstack
- log.warn(_('Choosing predefined flavors since received '
- 'Openstack Exception: %s') % str(e))
- return images_dict
-
def _best_flavor(self, properties):
log.info(_('Choosing the best flavor for given attributes.'))
- # Check whether user exported all required environment variables.
- flavors = FLAVORS
- if translator.common.utils.check_for_env_variables():
- resp = self._create_nova_flavor_dict()
- if resp:
- flavors = resp
+ flavors = nova_flavors.get_flavors()
# start with all flavors
match_all = flavors.keys()
def _best_image(self, properties):
- # Check whether user exported all required environment variables.
- images = IMAGES
- if translator.common.utils.check_for_env_variables():
- resp = self._populate_image_dict()
- if resp and len(resp.keys()) > 0:
- images = resp
+ images = glance_images.get_images()
match_all = images.keys()
architecture = properties.get(self.ARCHITECTURE)
if architecture is None:
return this_list
matching_images = []
for image in this_list:
- if this_dict[image][attr].lower() == str(prop).lower():
+ if attr in this_dict[image]:
+ if this_dict[image][attr].lower() == str(prop).lower():
+ matching_images.insert(0, image)
+ else:
matching_images.append(image)
return matching_images
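
The revised matching above keeps images that do not declare the attribute as low-priority fallbacks instead of discarding them, and moves exact matches to the front. A standalone sketch of that filter (the `match_attribute` helper and the sample image names are illustrative):

```python
def match_attribute(images, attr, value):
    """Filter candidate image names by one metadata attribute.

    Mirrors the change in the diff: images whose metadata matches go
    to the front of the list, images lacking the attribute entirely
    are kept at the back as fallbacks, and images that declare a
    non-matching value are dropped.
    """
    matching = []
    for name, metadata in images.items():
        if attr in metadata:
            if metadata[attr].lower() == str(value).lower():
                matching.insert(0, name)
        else:
            matching.append(name)
    return matching


images = {'fedora-20': {'distribution': 'Fedora'},
          'mystery-image': {},
          'ubuntu-14.04': {'distribution': 'Ubuntu'}}
print(match_attribute(images, 'distribution', 'Ubuntu'))
# ['ubuntu-14.04', 'mystery-image']
```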
attribute.'))
if attribute == 'private_address' or \
attribute == 'public_address':
- attr['get_attr'] = [self.name, 'first_address']
+ attr['get_attr'] = [self.name, 'networks', 'private', 0]
return attr
toscatype = 'tosca.nodes.Database'
- def __init__(self, nodetemplate):
- super(ToscaDatabase, self).__init__(nodetemplate)
+ def __init__(self, nodetemplate, csar_dir=None):
+ super(ToscaDatabase, self).__init__(nodetemplate, csar_dir=csar_dir)
pass
def handle_properties(self):
toscatype = 'tosca.nodes.DBMS'
- def __init__(self, nodetemplate):
- super(ToscaDbms, self).__init__(nodetemplate)
+ def __init__(self, nodetemplate, csar_dir=None):
+ super(ToscaDbms, self).__init__(nodetemplate, csar_dir=csar_dir)
pass
def handle_properties(self):
existing_resource_id = None
- def __init__(self, nodetemplate):
+ def __init__(self, nodetemplate, csar_dir=None):
super(ToscaNetwork, self).__init__(nodetemplate,
- type='OS::Neutron::Net')
+ type='OS::Neutron::Net',
+ csar_dir=csar_dir)
pass
def handle_properties(self):
self.existing_resource_id = value
break
elif key == 'segmentation_id':
- # net_props['segmentation_id'] = \
- # tosca_props['segmentation_id']
# Hardcode to vxlan for now until we add the network type
# and physical network to the spec.
net_props['value_specs'] = {'provider:segmentation_id':
toscatype = 'tosca.nodes.network.Port'
- def __init__(self, nodetemplate):
+ def __init__(self, nodetemplate, csar_dir=None):
super(ToscaNetworkPort, self).__init__(nodetemplate,
- type='OS::Neutron::Port')
+ type='OS::Neutron::Port',
+ csar_dir=csar_dir)
# Default order
self.order = 0
pass
toscatype = 'tosca.nodes.ObjectStorage'
- def __init__(self, nodetemplate):
+ def __init__(self, nodetemplate, csar_dir=None):
super(ToscaObjectStorage, self).__init__(nodetemplate,
- type='OS::Swift::Container')
+ type='OS::Swift::Container',
+ csar_dir=csar_dir)
pass
def handle_properties(self):
toscatype = 'tosca.policies.Placement'
- def __init__(self, policy):
+ def __init__(self, policy, csar_dir=None):
super(ToscaPolicies, self).__init__(policy,
- type='OS::Nova::ServerGroup')
+ type='OS::Nova::ServerGroup',
+ csar_dir=csar_dir)
self.policy = policy
def handle_properties(self, resources):
--- /dev/null
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+from collections import OrderedDict
+import yaml
+
+from translator.hot.syntax.hot_resource import HotResource
+# Name used to dynamically load appropriate map class.
+TARGET_CLASS_NAME = 'ToscaAutoscaling'
+HEAT_TEMPLATE_BASE = """
+heat_template_version: 2013-05-23
+"""
+ALARM_STATISTIC = {'average': 'avg'}
+SCALING_RESOURCES = ["OS::Heat::ScalingPolicy", "OS::Heat::AutoScalingGroup",
+ "OS::Aodh::Alarm"]
+
+
+class ToscaAutoscaling(HotResource):
+ '''Translate TOSCA node type tosca.policies.Scaling'''
+
+ toscatype = 'tosca.policies.Scaling'
+
+ def __init__(self, policy, csar_dir=None):
+ hot_type = "OS::Heat::ScalingPolicy"
+ super(ToscaAutoscaling, self).__init__(policy,
+ type=hot_type,
+ csar_dir=csar_dir)
+ self.policy = policy
+
+ def handle_expansion(self):
+ if self.policy.entity_tpl.get('triggers'):
+ sample = self.policy.\
+ entity_tpl["triggers"]["resize_compute"]["condition"]
+ prop = {}
+ prop["description"] = self.policy.entity_tpl.get('description')
+ prop["meter_name"] = "cpu_util"
+ if sample:
+ prop["statistic"] = ALARM_STATISTIC[sample["method"]]
+ prop["period"] = sample["period"]
+ prop["threshold"] = sample["evaluations"]
+ prop["comparison_operator"] = "gt"
+ alarm_name = self.name.replace('_scale_in', '').\
+ replace('_scale_out', '')
+ ceilometer_resources = HotResource(self.nodetemplate,
+ type='OS::Aodh::Alarm',
+ name=alarm_name + '_alarm',
+ properties=prop)
+ hot_resources = [ceilometer_resources]
+ return hot_resources
+
+ def represent_ordereddict(self, dumper, data):
+ nodes = []
+ for key, value in data.items():
+ node_key = dumper.represent_data(key)
+ node_value = dumper.represent_data(value)
+ nodes.append((node_key, node_value))
+ return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', nodes)
+
+ def _handle_nested_template(self, scale_res):
+ template_dict = yaml.safe_load(HEAT_TEMPLATE_BASE)
+ template_dict['description'] = 'Tacker Scaling template'
+ template_dict["resources"] = {}
+ dict_res = OrderedDict()
+ for res in scale_res:
+ dict_res = res.get_dict_output()
+ res_name = list(dict_res.keys())[0]
+ template_dict["resources"][res_name] = \
+ dict_res[res_name]
+
+ yaml.add_representer(OrderedDict, self.represent_ordereddict)
+ yaml.add_representer(dict, self.represent_ordereddict)
+ yaml_string = yaml.dump(template_dict, default_flow_style=False)
+ yaml_string = yaml_string.replace('\'', '').replace('\n\n', '\n')
+ self.nested_template = {
+ self.policy.name + '_res.yaml': yaml_string
+ }
+
+ def handle_properties(self, resources):
+ self.properties = {}
+ self.properties["auto_scaling_group_id"] = {
+ 'get_resource': self.policy.name + '_group'
+ }
+ self.properties["adjustment_type"] = "change_in_capacity"
+ self.properties["scaling_adjustment"] = self.\
+ policy.entity_tpl["properties"]["increment"]
+ delete_res_names = []
+ scale_res = []
+ for index, resource in enumerate(resources):
+ if resource.name in self.policy.targets and \
+ resource.type != 'OS::Heat::AutoScalingGroup':
+ temp = self.policy.entity_tpl["properties"]
+ props = {}
+ res = {}
+ res["min_size"] = temp["min_instances"]
+ res["max_size"] = temp["max_instances"]
+ res["desired_capacity"] = temp["default_instances"]
+ props['type'] = resource.type
+ props['properties'] = resource.properties
+ res['resource'] = {'type': self.policy.name + '_res.yaml'}
+ scaling_resources = \
+ HotResource(resource,
+ type='OS::Heat::AutoScalingGroup',
+ name=self.policy.name + '_group',
+ properties=res)
+
+ if resource.type not in SCALING_RESOURCES:
+ delete_res_names.append(resource.name)
+ scale_res.append(resource)
+ self._handle_nested_template(scale_res)
+ resources = [tmp_res
+ for tmp_res in resources
+ if tmp_res.name not in delete_res_names]
+ resources.append(scaling_resources)
+ return resources
+
+ def extract_substack_templates(self, base_filename, hot_template_version):
+ return self.nested_template
+
+ def embed_substack_templates(self, hot_template_version):
+ pass
toscatype = 'tosca.nodes.SoftwareComponent'
- def __init__(self, nodetemplate):
- super(ToscaSoftwareComponent, self).__init__(nodetemplate)
+ def __init__(self, nodetemplate, csar_dir=None):
+ super(ToscaSoftwareComponent, self).__init__(nodetemplate,
+ csar_dir=csar_dir)
pass
def handle_properties(self):
toscatype = 'tosca.nodes.WebApplication'
- def __init__(self, nodetemplate):
- super(ToscaWebApplication, self).__init__(nodetemplate)
+ def __init__(self, nodetemplate, csar_dir=None):
+ super(ToscaWebApplication, self).__init__(nodetemplate,
+ csar_dir=csar_dir)
pass
def handle_properties(self):
toscatype = 'tosca.nodes.WebServer'
- def __init__(self, nodetemplate):
- super(ToscaWebserver, self).__init__(nodetemplate)
+ def __init__(self, nodetemplate, csar_dir):
+ super(ToscaWebserver, self).__init__(nodetemplate,
+ csar_dir=csar_dir)
pass
def handle_properties(self):
# under the License.
import logging
+import six
from toscaparser.utils.gettextutils import _
from translator.hot.syntax.hot_template import HotTemplate
from translator.hot.translate_inputs import TranslateInputs
class TOSCATranslator(object):
'''Invokes translation methods.'''
- def __init__(self, tosca, parsed_params, deploy=None):
+ def __init__(self, tosca, parsed_params, deploy=None, csar_dir=None):
super(TOSCATranslator, self).__init__()
self.tosca = tosca
self.hot_template = HotTemplate()
self.parsed_params = parsed_params
self.deploy = deploy
+ self.csar_dir = csar_dir
self.node_translator = None
log.info(_('Initialized parameters for translation.'))
- def translate(self):
+ def _translate_to_hot_yaml(self):
self._resolve_input()
self.hot_template.description = self.tosca.description
self.hot_template.parameters = self._translate_inputs()
self.node_translator = TranslateNodeTemplates(self.tosca,
- self.hot_template)
- self.hot_template.resources = self.node_translator.translate()
+ self.hot_template,
+ csar_dir=self.csar_dir)
+ self.hot_template.resources = \
+ self.node_translator.translate()
self.hot_template.outputs = self._translate_outputs()
- return self.hot_template.output_to_yaml()
+ if self.node_translator.hot_template_version is None:
+ self.node_translator.hot_template_version = HotTemplate.LATEST
+
+ def translate(self):
+ """Translate to HOT YAML
+
+ This method produces a translated output for main template.
+ The nested template, if any referenced by main, will be created
+ as a separate file.
+ """
+ self._translate_to_hot_yaml()
+
+ # TODO(mvelten) go back to calling hot_template.output_to_yaml instead
+ # for stdout once embed_substack_templates is correctly implemented
+ # return self.hot_template.output_to_yaml(
+ # self.node_translator.hot_template_version)
+ yaml_files = self.hot_template.output_to_yaml_files_dict(
+ "output.yaml",
+ self.node_translator.hot_template_version)
+ for name, content in six.iteritems(yaml_files):
+ if name != "output.yaml":
+ with open(name, 'w+') as f:
+ f.write(content)
+
+ return yaml_files["output.yaml"]
+
+ def translate_to_yaml_files_dict(self, base_filename):
+ """Translate to HOT YAML
+
+ This method produces a translated output containing main and
+ any nested templates referenced by main. This output can be
+ programmatically stored into different files by using key as
+ template name and value as template content.
+ """
+ self._translate_to_hot_yaml()
+ return self.hot_template.output_to_yaml_files_dict(
+ base_filename,
+ self.node_translator.hot_template_version)
def _translate_inputs(self):
translator = TranslateInputs(self.tosca.inputs, self.parsed_params,
# License for the specific language governing permissions and limitations
# under the License.
+import copy
import importlib
import logging
import os
import six
+from collections import OrderedDict
+from toscaparser.functions import Concat
from toscaparser.functions import GetAttribute
from toscaparser.functions import GetInput
+from toscaparser.functions import GetOperationOutput
from toscaparser.functions import GetProperty
from toscaparser.properties import Property
from toscaparser.relationship_template import RelationshipTemplate
from translator.common.exception import ToscaClassAttributeError
from translator.common.exception import ToscaClassImportError
from translator.common.exception import ToscaModImportError
+from translator.common.exception import UnsupportedTypeError
+from translator.common import utils
from translator.conf.config import ConfigProvider as translatorConfig
from translator.hot.syntax.hot_resource import HotResource
from translator.hot.tosca.tosca_block_storage_attachment import (
TOSCA_TO_HOT_TYPE = _generate_type_map()
+BASE_TYPES = six.string_types + six.integer_types + (dict, OrderedDict)
+
+HOT_SCALING_POLICY_TYPE = ["OS::Heat::AutoScalingGroup",
+ "OS::Senlin::Profile"]
+
class TranslateNodeTemplates(object):
'''Translate TOSCA NodeTemplates to Heat Resources.'''
- def __init__(self, tosca, hot_template):
+ def __init__(self, tosca, hot_template, csar_dir=None):
self.tosca = tosca
self.nodetemplates = self.tosca.nodetemplates
self.hot_template = hot_template
+ self.csar_dir = csar_dir
# list of all HOT resources generated
self.hot_resources = []
# mapping between TOSCA nodetemplate and HOT resource
log.debug(_('Mapping between TOSCA nodetemplate and HOT resource.'))
self.hot_lookup = {}
self.policies = self.tosca.topology_template.policies
+ # stores the last deployment resource generated for each node;
+ # used to satisfy ordering dependencies between interface operations
+ self.last_deploy_map = {}
+ self.hot_template_version = None
+ self.processed_policy_res = []
def translate(self):
return self._translate_nodetemplates()
if resource.type == "OS::Nova::ServerGroup":
resource.handle_properties(self.hot_resources)
+ elif resource.type in ("OS::Heat::ScalingPolicy",
+ "OS::Senlin::Policy"):
+ if resource.name in self.processed_policy_res:
+ return
+ self.processed_policy_res.append(resource.name)
+ self.hot_resources = \
+ resource.handle_properties(self.hot_resources)
+ extra_hot_resources = []
+ for res in self.hot_resources:
+ if res.type == 'OS::Heat::ScalingPolicy':
+ extra_res = copy.deepcopy(res)
+ scaling_adjustment = res.properties['scaling_adjustment']
+ if scaling_adjustment < 0:
+ res.name = res.name + '_scale_in'
+ extra_res.name = extra_res.name + '_scale_out'
+ extra_res.properties['scaling_adjustment'] = \
+ -1 * scaling_adjustment
+ extra_hot_resources.append(extra_res)
+ self.processed_policy_res.append(res.name)
+ self.processed_policy_res.append(extra_res.name)
+ elif scaling_adjustment > 0:
+ res.name = res.name + '_scale_out'
+ extra_res.name = extra_res.name + '_scale_in'
+ extra_res.properties['scaling_adjustment'] = \
+ -1 * scaling_adjustment
+ extra_hot_resources.append(extra_res)
+ self.processed_policy_res.append(res.name)
+ self.processed_policy_res.append(extra_res.name)
+ else:
+ continue
+ self.hot_resources += extra_hot_resources
else:
resource.handle_properties()
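The scale-in/scale-out mirroring above can be sketched as a pure function over (name, adjustment) pairs. The function name and tuple shape here are illustrative, not the translator's API; only the renaming and negation rules mirror the code:

```python
def mirror_scaling_policies(policies):
    # policies: list of (name, scaling_adjustment) tuples.
    # Each signed policy gains a twin with the negated adjustment, and
    # both are renamed with _scale_in/_scale_out suffixes; a zero
    # adjustment is left untouched, matching the `continue` branch.
    result = []
    for name, adj in policies:
        if adj < 0:
            result.append((name + '_scale_in', adj))
            result.append((name + '_scale_out', -adj))
        elif adj > 0:
            result.append((name + '_scale_out', adj))
            result.append((name + '_scale_in', -adj))
        else:
            result.append((name, adj))
    return result
```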
def _translate_nodetemplates(self):
-
log.debug(_('Translating the node templates.'))
suffix = 0
# Copy the TOSCA graph: nodetemplate
for node in self.nodetemplates:
- base_type = HotResource.get_base_type(node.type_definition)
- hot_node = TOSCA_TO_HOT_TYPE[base_type.type](node)
+ base_type = HotResource.get_base_type_str(node.type_definition)
+ if base_type not in TOSCA_TO_HOT_TYPE:
+ raise UnsupportedTypeError(type=_('%s') % base_type)
+ hot_node = TOSCA_TO_HOT_TYPE[base_type](node,
+ csar_dir=self.csar_dir)
self.hot_resources.append(hot_node)
self.hot_lookup[node] = hot_node
# BlockStorage Attachment is a special case,
# which doesn't match to Heat Resources 1 to 1.
- if base_type.type == "tosca.nodes.Compute":
+ if base_type == "tosca.nodes.Compute":
volume_name = None
requirements = node.requirements
if requirements:
"tosca.nodes.BlockStorage"):
volume_name = node_name
break
- else: # unreachable code !
+ else:
for n in self.nodetemplates:
if n.name == value and \
n.is_derived_from(
break
suffix = suffix + 1
- attachment_node = self._get_attachment_node(node,
- suffix,
- volume_name)
+ attachment_node = self._get_attachment_node(
+ node, suffix, volume_name)
if attachment_node:
self.hot_resources.append(attachment_node)
for i in self.tosca.inputs:
for policy in self.policies:
policy_type = policy.type_definition
+ if policy.is_derived_from('tosca.policies.Scaling') and \
+ policy_type.type != 'tosca.policies.Scaling.Cluster':
+ TOSCA_TO_HOT_TYPE[policy_type.type] = \
+ TOSCA_TO_HOT_TYPE['tosca.policies.Scaling']
+ if not policy.is_derived_from('tosca.policies.Scaling') and \
+ policy_type.type not in TOSCA_TO_HOT_TYPE:
+ raise UnsupportedTypeError(type=_('%s') % policy_type.type)
+ elif policy_type.type == 'tosca.policies.Scaling.Cluster':
+ self.hot_template_version = '2016-04-08'
policy_node = TOSCA_TO_HOT_TYPE[policy_type.type](policy)
self.hot_resources.append(policy_node)
# into multiple HOT resources and may change their name
lifecycle_resources = []
for resource in self.hot_resources:
- expanded = resource.handle_life_cycle()
- if expanded:
- lifecycle_resources += expanded
+ expanded_resources, deploy_lookup, last_deploy = resource.\
+ handle_life_cycle()
+ if expanded_resources:
+ lifecycle_resources += expanded_resources
+ if deploy_lookup:
+ self.hot_lookup.update(deploy_lookup)
+ if last_deploy:
+ self.last_deploy_map[resource] = last_deploy
self.hot_resources += lifecycle_resources
# Handle configuration from ConnectsTo relationship in the TOSCA node:
connectsto_resources = []
for node in self.nodetemplates:
for requirement in node.requirements:
- for endpoint, details in six.iteritems(requirement):
+ for endpoint, details in requirement.items():
relation = None
if isinstance(details, dict):
target = details.get('node')
# if the source of dependency is a server and the
# relationship type is 'tosca.relationships.HostedOn',
# add dependency as properties.server
- if node_depend.type == 'tosca.nodes.Compute' and \
+ base_type = HotResource.get_base_type_str(
+ node_depend.type_definition)
+ if base_type == 'tosca.nodes.Compute' and \
node.related[node_depend].type == \
node.type_definition.HOSTEDON:
self.hot_lookup[node].properties['server'] = \
self.hot_lookup[node].depends_on_nodes.append(
self.hot_lookup[node_depend].top_of_chain())
+ last_deploy = self.last_deploy_map.get(
+ self.hot_lookup[node_depend])
+ if last_deploy and \
+ last_deploy not in self.hot_lookup[node].depends_on:
+ self.hot_lookup[node].depends_on.append(last_deploy)
+ self.hot_lookup[node].depends_on_nodes.append(last_deploy)
+
# handle hosting relationship
for resource in self.hot_resources:
resource.handle_hosting()
# dependent nodes in correct order
self.processed_resources = []
for resource in self.hot_resources:
- self._recursive_handle_properties(resource)
+ if resource.type not in HOT_SCALING_POLICY_TYPE:
+ self._recursive_handle_properties(resource)
# handle resources that need to expand to more than one HOT resource
expansion_resources = []
# traverse the reference chain to get the actual value
inputs = resource.properties.get('input_values')
if inputs:
- for name, value in six.iteritems(inputs):
- inputs[name] = self._translate_input(value, resource)
+ for name, value in inputs.items():
+ inputs[name] = self.translate_param_value(value, resource)
+
+ # remove resources without type defined
+ # for example a SoftwareComponent without interfaces
+ # would fall in this case
+ to_remove = []
+ for resource in self.hot_resources:
+ if resource.type is None:
+ to_remove.append(resource)
+
+ for resource in to_remove:
+ self.hot_resources.remove(resource)
return self.hot_resources
- def _translate_input(self, input_value, resource):
+ def translate_param_value(self, param_value, resource):
+ tosca_template = None
+ if resource:
+ tosca_template = resource.nodetemplate
+
get_property_args = None
- if isinstance(input_value, GetProperty):
- get_property_args = input_value.args
+ if isinstance(param_value, GetProperty):
+ get_property_args = param_value.args
# to remove when the parser is fixed to return GetProperty
- if isinstance(input_value, dict) and 'get_property' in input_value:
- get_property_args = input_value['get_property']
+ elif isinstance(param_value, dict) and 'get_property' in param_value:
+ get_property_args = param_value['get_property']
if get_property_args is not None:
- hot_target = self._find_hot_resource_for_tosca(
- get_property_args[0], resource)
- if hot_target:
- props = hot_target.get_tosca_props()
- prop_name = get_property_args[1]
- if prop_name in props:
- return props[prop_name]
- elif isinstance(input_value, GetAttribute):
+ tosca_target, prop_name, prop_arg = \
+ self.decipher_get_operation(get_property_args,
+ tosca_template)
+ if tosca_target:
+ prop_value = tosca_target.get_property_value(prop_name)
+ if prop_value:
+ prop_value = self.translate_param_value(
+ prop_value, resource)
+ return self._unfold_value(prop_value, prop_arg)
+ get_attr_args = None
+ if isinstance(param_value, GetAttribute):
+ get_attr_args = param_value.result().args
+ # to remove when the parser is fixed to return GetAttribute
+ elif isinstance(param_value, dict) and 'get_attribute' in param_value:
+ get_attr_args = param_value['get_attribute']
+ if get_attr_args is not None:
# for the attribute
# get the proper target type to perform the translation
- args = input_value.result().args
- hot_target = self._find_hot_resource_for_tosca(args[0], resource)
-
- return hot_target.get_hot_attribute(args[1], args)
- # most of artifacts logic should move to the parser
- elif isinstance(input_value, dict) and 'get_artifact' in input_value:
- get_artifact_args = input_value['get_artifact']
-
- hot_target = self._find_hot_resource_for_tosca(
- get_artifact_args[0], resource)
- artifacts = TranslateNodeTemplates.get_all_artifacts(
- hot_target.nodetemplate)
-
- if get_artifact_args[1] in artifacts:
- artifact = artifacts[get_artifact_args[1]]
- if artifact.get('type', None) == 'tosca.artifacts.File':
- return {'get_file': artifact.get('file')}
- elif isinstance(input_value, GetInput):
- if isinstance(input_value.args, list) \
- and len(input_value.args) == 1:
- return {'get_param': input_value.args[0]}
+ tosca_target, attr_name, attr_arg = \
+ self.decipher_get_operation(get_attr_args, tosca_template)
+ attr_args = []
+ if attr_arg:
+ attr_args += attr_arg
+ if tosca_target:
+ if tosca_target in self.hot_lookup:
+ attr_value = self.hot_lookup[tosca_target].\
+ get_hot_attribute(attr_name, attr_args)
+ attr_value = self.translate_param_value(
+ attr_value, resource)
+ return self._unfold_value(attr_value, attr_arg)
+ elif isinstance(param_value, dict) and 'get_artifact' in param_value:
+ get_artifact_args = param_value['get_artifact']
+ tosca_target, artifact_name, _ = \
+ self.decipher_get_operation(get_artifact_args,
+ tosca_template)
+
+ if tosca_target:
+ artifacts = HotResource.get_all_artifacts(tosca_target)
+ if artifact_name in artifacts:
+ cwd = os.getcwd()
+ artifact = artifacts[artifact_name]
+ if self.csar_dir:
+ os.chdir(self.csar_dir)
+ get_file = os.path.abspath(artifact.get('file'))
+ else:
+ get_file = artifact.get('file')
+ # restore the working directory before any early return
+ os.chdir(cwd)
+ if artifact.get('type', None) == 'tosca.artifacts.File':
+ return {'get_file': get_file}
+ get_input_args = None
+ if isinstance(param_value, GetInput):
+ get_input_args = param_value.args
+ elif isinstance(param_value, dict) and 'get_input' in param_value:
+ get_input_args = param_value['get_input']
+ if get_input_args is not None:
+ if isinstance(get_input_args, list) \
+ and len(get_input_args) == 1:
+ return {'get_param': self.translate_param_value(
+ get_input_args[0], resource)}
else:
- return {'get_param': input_value.args}
+ return {'get_param': self.translate_param_value(
+ get_input_args, resource)}
+ elif isinstance(param_value, GetOperationOutput):
+ res = self._translate_get_operation_output_function(
+ param_value.args, tosca_template)
+ if res:
+ return res
+ elif isinstance(param_value, dict) \
+ and 'get_operation_output' in param_value:
+ res = self._translate_get_operation_output_function(
+ param_value['get_operation_output'], tosca_template)
+ if res:
+ return res
+ concat_list = None
+ if isinstance(param_value, Concat):
+ concat_list = param_value.args
+ elif isinstance(param_value, dict) and 'concat' in param_value:
+ concat_list = param_value['concat']
+ if concat_list is not None:
+ res = self._translate_concat_function(concat_list, resource)
+ if res:
+ return res
+
+ if isinstance(param_value, list):
+ translated_list = []
+ for elem in param_value:
+ translated_elem = self.translate_param_value(elem, resource)
+ if translated_elem:
+ translated_list.append(translated_elem)
+ return translated_list
+
+ if isinstance(param_value, BASE_TYPES):
+ return param_value
+
+ return None
- return input_value
+ def _translate_concat_function(self, concat_list, resource):
+ str_replace_template = ''
+ str_replace_params = {}
+ index = 0
+ for elem in concat_list:
+ str_replace_template += '$s' + str(index)
+ str_replace_params['$s' + str(index)] = \
+ self.translate_param_value(elem, resource)
+ index += 1
+
+ return {'str_replace': {
+ 'template': str_replace_template,
+ 'params': str_replace_params
+ }}
+
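The concat translation above maps each element to a positional `$sN` placeholder in a HOT `str_replace`. A standalone sketch of that mapping, with already-translated elements passed in (the helper name is illustrative):

```python
def concat_to_str_replace(concat_list):
    # Build the HOT str_replace equivalent of a TOSCA concat: the
    # template is '$s0$s1...' and each param holds one element.
    template = ''
    params = {}
    for index, elem in enumerate(concat_list):
        key = '$s' + str(index)
        template += key
        params[key] = elem
    return {'str_replace': {'template': template, 'params': params}}
```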
+ def _translate_get_operation_output_function(self, args, tosca_template):
+ tosca_target = self._find_tosca_node(args[0],
+ tosca_template)
+ if tosca_target and len(args) >= 4:
+ operations = HotResource.get_all_operations(tosca_target)
+ # ignore Standard interface name,
+ # it is the only one supported in the translator anyway
+ op_name = args[2]
+ output_name = args[3]
+ if op_name in operations:
+ operation = operations[op_name]
+ if operation in self.hot_lookup:
+ matching_deploy = self.hot_lookup[operation]
+ matching_config_name = matching_deploy.\
+ properties['config']['get_resource']
+ matching_config = self.find_hot_resource(
+ matching_config_name)
+ if matching_config:
+ outputs = matching_config.properties.get('outputs')
+ if outputs is None:
+ outputs = []
+ outputs.append({'name': output_name})
+ matching_config.properties['outputs'] = outputs
+ return {'get_attr': [
+ matching_deploy.name,
+ output_name
+ ]}
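The `get_operation_output` handling registers the requested output on the matching `OS::Heat::SoftwareConfig` and returns a `get_attr` on its deployment. The config-side bookkeeping can be sketched standalone; the helper name is illustrative, and a small duplicate guard is added that the patch itself does not have:

```python
def register_config_output(config_properties, deploy_name, output_name):
    # Ensure the SoftwareConfig advertises the requested output, then
    # return the HOT expression that reads it from the deployment.
    outputs = config_properties.get('outputs')
    if outputs is None:
        outputs = []
    entry = {'name': output_name}
    if entry not in outputs:  # avoid duplicate output declarations
        outputs.append(entry)
    config_properties['outputs'] = outputs
    return {'get_attr': [deploy_name, output_name]}
```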
@staticmethod
- def get_all_artifacts(nodetemplate):
- artifacts = nodetemplate.type_definition.get_value('artifacts',
- parent=True)
- if not artifacts:
- artifacts = {}
- tpl_artifacts = nodetemplate.entity_tpl.get('artifacts')
- if tpl_artifacts:
- artifacts.update(tpl_artifacts)
+ def _unfold_value(value, value_arg):
+ if value_arg is not None:
+ if isinstance(value, dict):
+ val = value.get(value_arg)
+ if val is not None:
+ return val
+
+ index = utils.str_to_num(value_arg)
+ if isinstance(value, list) and index is not None:
+ return value[index]
+ return value
+
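`_unfold_value` selects into the resolved value when the intrinsic function carried an extra argument: a dict selector picks a named entry, a numeric selector indexes a list, and anything else falls through to the value itself. A self-contained sketch, with a local stand-in for `utils.str_to_num`:

```python
def unfold_value(value, value_arg):
    # With no selector, return the value unchanged.
    if value_arg is None:
        return value
    # A dict selector picks the named entry when present.
    if isinstance(value, dict) and value_arg in value:
        return value[value_arg]
    # A numeric selector indexes into a list.
    try:
        index = int(value_arg)
    except (TypeError, ValueError):
        index = None
    if isinstance(value, list) and index is not None:
        return value[index]
    return value
```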
+ def decipher_get_operation(self, args, current_tosca_node):
+ tosca_target = self._find_tosca_node(args[0],
+ current_tosca_node)
+ new_target = None
+ if tosca_target and len(args) > 2:
+ cap_or_req_name = args[1]
+ cap = tosca_target.get_capability(cap_or_req_name)
+ if cap:
+ new_target = cap
+ else:
+ for req in tosca_target.requirements:
+ if cap_or_req_name in req:
+ new_target = self._find_tosca_node(
+ req[cap_or_req_name])
+ cap = new_target.get_capability(cap_or_req_name)
+ if cap:
+ new_target = cap
+ break
+
+ if new_target:
+ tosca_target = new_target
+
+ prop_name = args[2]
+ prop_arg = args[3] if len(args) >= 4 else None
+ else:
+ prop_name = args[1]
+ prop_arg = args[2] if len(args) >= 3 else None
- return artifacts
+ return tosca_target, prop_name, prop_arg
def _get_attachment_node(self, node, suffix, volume_name):
attach = False
if resource.name == name:
return resource
- def _find_tosca_node(self, tosca_name):
- for node in self.nodetemplates:
- if node.name == tosca_name:
- return node
-
- def _find_hot_resource_for_tosca(self, tosca_name,
- current_hot_resource=None):
+ def _find_tosca_node(self, tosca_name, current_tosca_template=None):
+ tosca_node = None
if tosca_name == 'SELF':
- return current_hot_resource
- if tosca_name == 'HOST' and current_hot_resource is not None:
- for req in current_hot_resource.nodetemplate.requirements:
+ tosca_node = current_tosca_template
+ if tosca_name == 'HOST' and current_tosca_template:
+ for req in current_tosca_template.requirements:
if 'host' in req:
- return self._find_hot_resource_for_tosca(req['host'])
+ tosca_node = self._find_tosca_node(req['host'])
- for node in self.nodetemplates:
- if node.name == tosca_name:
- return self.hot_lookup[node]
+ if tosca_node is None:
+ for node in self.nodetemplates:
+ if node.name == tosca_name:
+ tosca_node = node
+ break
+ return tosca_node
+
+ def _find_hot_resource_for_tosca(self, tosca_name,
+ current_hot_resource=None):
+ current_tosca_resource = current_hot_resource.nodetemplate \
+ if current_hot_resource else None
+ tosca_node = self._find_tosca_node(tosca_name, current_tosca_resource)
+ if tosca_node:
+ return self.hot_lookup[tosca_node]
return None
connect_interfaces):
connectsto_resources = []
if connect_interfaces:
- for iname, interface in six.iteritems(connect_interfaces):
+ for iname, interface in connect_interfaces.items():
connectsto_resources += \
self._create_connect_config(source_node, target_name,
interface)
raise Exception(msg)
config_name = source_node.name + '_' + target_name + '_connect_config'
implement = connect_config.get('implementation')
+ cwd = os.getcwd()
if config_location == 'target':
+ if self.csar_dir:
+ os.chdir(self.csar_dir)
+ get_file = os.path.abspath(implement)
+ else:
+ get_file = implement
hot_config = HotResource(target_node,
config_name,
'OS::Heat::SoftwareConfig',
- {'config': {'get_file': implement}})
+ {'config': {'get_file': get_file}},
+ csar_dir=self.csar_dir)
elif config_location == 'source':
+ if self.csar_dir:
+ os.chdir(self.csar_dir)
+ get_file = os.path.abspath(implement)
+ else:
+ get_file = implement
hot_config = HotResource(source_node,
config_name,
'OS::Heat::SoftwareConfig',
- {'config': {'get_file': implement}})
+ {'config': {'get_file': get_file}},
+ csar_dir=self.csar_dir)
+ os.chdir(cwd)
connectsto_resources.append(hot_config)
hot_target = self._find_hot_resource_for_tosca(target_name)
hot_source = self._find_hot_resource_for_tosca(source_node.name)
def _translate_outputs(self):
hot_outputs = []
for output in self.outputs:
- if output.value.name == 'get_attribute':
- get_parameters = output.value.args
- hot_target = self.nodes.find_hot_resource(get_parameters[0])
- hot_value = hot_target.get_hot_attribute(get_parameters[1],
- get_parameters)
- hot_outputs.append(HotOutput(output.name,
- hot_value,
- output.description))
- else:
- hot_outputs.append(HotOutput(output.name,
- output.value,
+ hot_value = self.nodes.translate_param_value(output.value, None)
+ if hot_value is not None:
+ hot_outputs.append(HotOutput(output.name, hot_value,
output.description))
return hot_outputs
import sys
+import mock
+
class FakeApp(object):
def __init__(self):
self.stdout = sys.stdout
self.stderr = sys.stderr
+ self.cloud = mock.Mock()
+ self.cloud.get_session.return_value = None
+
class FakeClientManager(object):
def __init__(self):
from toscaparser.tosca_template import ToscaTemplate
from toscaparser.utils.gettextutils import _
+from translator.common import flavors
+from translator.common import images
from translator.common.utils import UrlUtils
from translator.conf.config import ConfigProvider
from translator.hot.tosca_translator import TOSCATranslator
"""Translate a template"""
- auth_required = False
+ auth_required = True
def get_parser(self, prog_name):
parser = super(TranslateTemplate, self).get_parser(prog_name)
'(%s).'), parsed_args)
output = None
+ session = self.app.cloud.get_session()
+ flavors.SESSION = session
+ images.SESSION = session
+
if parsed_args.parameter:
parsed_params = parsed_args.parameter
else:
translator = TOSCATranslator(tosca, parsed_params)
output = translator.translate()
else:
- msg = _('Could not find template file.')
+ msg = _('Could not find template file.\n')
log.error(msg)
sys.stdout.write(msg)
raise SystemExit
import argparse
-import ast
-import json
+import codecs
import logging
import logging.config
import os
-import prettytable
-import requests
+import six
import sys
import uuid
import yaml
+import zipfile
+
+# NOTE(aloga): Per the upstream developers' requirement, this needs to work
+# without the OpenStack clients installed, so fall back gracefully if they
+# cannot be imported.
+try:
+ from keystoneauth1 import loading
+except ImportError:
+ keystone_client_avail = False
+else:
+ keystone_client_avail = True
+
+try:
+ import heatclient.client
+except ImportError:
+ heat_client_avail = False
+else:
+ heat_client_avail = True
+
from toscaparser.tosca_template import ToscaTemplate
from toscaparser.utils.gettextutils import _
from toscaparser.utils.urlutils import UrlUtils
+from translator.common import flavors
+from translator.common import images
from translator.common import utils
from translator.conf.config import ConfigProvider
from translator.hot.tosca_translator import TOSCATranslator
--template-type=<type of template e.g. tosca>
--parameters="purpose=test"
Takes three user arguments,
-1. type of translation (e.g. tosca) (required)
-2. Path to the file that needs to be translated (required)
+1. Path to the file that needs to be translated (required)
+2. type of translation (e.g. tosca) (optional)
3. Input parameters (optional)
In order to use heat-translator to only validate template,
class TranslatorShell(object):
SUPPORTED_TYPES = ['tosca']
+ TOSCA_CSAR_META_DIR = "TOSCA-Metadata"
- def get_parser(self):
+ def get_parser(self, argv):
parser = argparse.ArgumentParser(prog="heat-translator")
parser.add_argument('--template-file',
parser.add_argument('--stack-name',
metavar='<stack-name>',
required=False,
- help=_('Stack name when deploy the generated '
- 'template.'))
+ help=_('The name to use for the Heat stack when '
+ 'deploying the generated template.'))
+
+ self._append_global_identity_args(parser, argv)
return parser
+ def _append_global_identity_args(self, parser, argv):
+ if not keystone_client_avail:
+ return
+
+ loading.register_session_argparse_arguments(parser)
+
+ default_auth_plugin = 'password'
+ if 'os-token' in argv:
+ default_auth_plugin = 'token'
+ loading.register_auth_argparse_arguments(
+ parser, argv, default=default_auth_plugin)
+
def main(self, argv):
- parser = self.get_parser()
+ parser = self.get_parser(argv)
(args, args_list) = parser.parse_known_args(argv)
template_file = args.template_file
'validation.') % {'template_file': template_file})
print(msg)
else:
- heat_tpl = self._translate(template_type, template_file,
- parsed_params, a_file, deploy)
- if heat_tpl:
- if utils.check_for_env_variables() and deploy:
- try:
- file_name = os.path.basename(
- os.path.splitext(template_file)[0])
- heatclient(heat_tpl, stack_name,
- file_name, parsed_params)
- except Exception:
- log.error(_("Unable to launch the heat stack"))
-
- self._write_output(heat_tpl, output_file)
+ if keystone_client_avail:
+ try:
+ keystone_auth = (
+ loading.load_auth_from_argparse_arguments(args)
+ )
+ keystone_session = (
+ loading.load_session_from_argparse_arguments(
+ args,
+ auth=keystone_auth
+ )
+ )
+ images.SESSION = keystone_session
+ flavors.SESSION = keystone_session
+ except Exception:
+ keystone_session = None
+
+ translator = self._get_translator(template_type,
+ template_file,
+ parsed_params, a_file,
+ deploy)
+
+ if translator and deploy:
+ if not keystone_client_avail or not heat_client_avail:
+ raise RuntimeError(_('Could not find Heat or Keystone '
+ 'client to deploy, aborting'))
+ if not keystone_session:
+ raise RuntimeError(_('Unable to log in with '
+ 'Keystone to deploy on Heat; '
+ 'please check your credentials'))
+
+ file_name = os.path.basename(
+ os.path.splitext(template_file)[0])
+ self.deploy_on_heat(keystone_session, keystone_auth,
+ translator, stack_name, file_name,
+ parsed_params)
+
+ self._write_output(translator, output_file)
else:
msg = (_('The path %(template_file)s is not a valid '
'file or URL.') % {'template_file': template_file})
log.error(msg)
raise ValueError(msg)
+ def deploy_on_heat(self, session, auth, translator,
+ stack_name, file_name, parameters):
+ endpoint = auth.get_endpoint(session, service_type="orchestration")
+ heat_client = heatclient.client.Client('1',
+ session=session,
+ auth=auth,
+ endpoint=endpoint)
+
+ heat_stack_name = stack_name if stack_name else \
+ 'heat_' + file_name + '_' + str(uuid.uuid4()).split("-")[0]
+ msg = _('Deploying the generated template; the stack name is %(name)s.')\
+ % {'name': heat_stack_name}
+ log.debug(msg)
+ tpl = yaml.safe_load(translator.translate())
+
+ # collect all the get_file values from the translated template
+ get_files = []
+ utils.get_dict_value(tpl, "get_file", get_files)
+ files = {}
+ if get_files:
+ for file in get_files:
+ with codecs.open(file, encoding='utf-8', errors='strict') \
+ as f:
+ text = f.read()
+ files[file] = text
+ # yaml parses heat_template_version as a date; the Heat API expects a string
+ tpl['heat_template_version'] = str(tpl['heat_template_version'])
+ self._create_stack(heat_client=heat_client,
+ stack_name=heat_stack_name,
+ template=tpl,
+ parameters=parameters,
+ files=files)
+
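`deploy_on_heat` relies on collecting every `get_file` value nested anywhere in the translated template so the referenced files can be passed to Heat. A recursive sketch of that collection, as a stand-in for `utils.get_dict_value` (whose actual implementation may differ):

```python
def collect_values(node, key, acc):
    # Depth-first walk over nested dicts and lists, accumulating every
    # value stored under `key`.
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                acc.append(v)
            else:
                collect_values(v, key, acc)
    elif isinstance(node, list):
        for item in node:
            collect_values(item, key, acc)
    return acc
```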
+ def _create_stack(self, heat_client, stack_name, template, parameters,
+ files):
+ if heat_client:
+ heat_client.stacks.create(stack_name=stack_name,
+ template=template,
+ parameters=parameters,
+ files=files)
+
def _parse_parameters(self, parameter_list):
parsed_inputs = {}
raise ValueError(msg)
return parsed_inputs
- def _translate(self, sourcetype, path, parsed_params, a_file, deploy):
- output = None
+ def _get_translator(self, sourcetype, path, parsed_params, a_file, deploy):
if sourcetype == "tosca":
log.debug(_('Loading the tosca template.'))
tosca = ToscaTemplate(path, parsed_params, a_file)
- translator = TOSCATranslator(tosca, parsed_params, deploy)
+ csar_dir = None
+ if deploy and zipfile.is_zipfile(path):
+ # set CSAR directory to the root of TOSCA-Metadata
+ csar_decompress = utils.decompress(path)
+ csar_dir = os.path.join(csar_decompress,
+ self.TOSCA_CSAR_META_DIR)
+ msg = _("'%(csar)s' is the location of decompressed "
+ "CSAR file.") % {'csar': csar_dir}
+ log.info(msg)
+ translator = TOSCATranslator(tosca, parsed_params, deploy,
+ csar_dir=csar_dir)
log.debug(_('Translating the tosca template.'))
- output = translator.translate()
- return output
-
- def _write_output(self, output, output_file=None):
- if output:
- if output_file:
- with open(output_file, 'w+') as f:
- f.write(output)
- else:
- print(output)
-
-
-def heatclient(output, stack_name, file_name, params):
- try:
- access_dict = utils.get_ks_access_dict()
- endpoint = utils.get_url_for(access_dict, 'orchestration')
- token = utils.get_token_id(access_dict)
- except Exception as e:
- log.error(e)
- headers = {
- 'Content-Type': 'application/json',
- 'X-Auth-Token': token
- }
-
- heat_stack_name = stack_name if stack_name else \
- "heat_" + file_name + '_' + str(uuid.uuid4()).split("-")[0]
- output = yaml.load(output)
- output['heat_template_version'] = str(output['heat_template_version'])
- data = {
- 'stack_name': heat_stack_name,
- 'template': output,
- 'parameters': params
- }
- response = requests.post(endpoint + '/stacks',
- data=json.dumps(data),
- headers=headers)
- content = ast.literal_eval(response._content)
- if response.status_code == 201:
- stack_id = content["stack"]["id"]
- get_url = endpoint + '/stacks/' + heat_stack_name + '/' + stack_id
- get_stack_response = requests.get(get_url,
- headers=headers)
- stack_details = json.loads(get_stack_response.content)["stack"]
- col_names = ["id", "stack_name", "stack_status", "creation_time",
- "updated_time"]
- pt = prettytable.PrettyTable(col_names)
- stack_list = []
- for col in col_names:
- stack_list.append(stack_details[col])
- pt.add_row(stack_list)
- print(pt)
- else:
- err_msg = content["error"]["message"]
- log(_("Unable to deploy to Heat\n%s\n") % err_msg)
+ return translator
+
+ def _write_output(self, translator, output_file=None):
+ if output_file:
+ path, filename = os.path.split(output_file)
+ yaml_files = translator.translate_to_yaml_files_dict(filename)
+ for name, content in six.iteritems(yaml_files):
+ with open(os.path.join(path, name), 'w+') as f:
+ f.write(content)
+ else:
+ print(translator.translate())
def main(args=None):
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >
+ Template for deploying servers based on policies.
+
+topology_template:
+ node_templates:
+ my_server_1:
+ type: tosca.nodes.Compute
+ capabilities:
+ host:
+ properties:
+ num_cpus: 2
+ disk_size: 10 GB
+ mem_size: 512 MB
+ os:
+ properties:
+ # host Operating System image properties
+ architecture: x86_64
+ type: Linux
+ distribution: RHEL
+ version: 6.5
+ policies:
+ - asg:
+ type: tosca.policies.Scaling
+ description: Simple node autoscaling
+ targets: [my_server_1]
+ triggers:
+ resize_compute:
+ description: trigger
+ condition:
+ constraint: utilization greater_than 50%
+ period: 60
+ evaluations: 1
+ method: average
+ properties:
+ min_instances: 2
+ max_instances: 10
+ default_instances: 3
+ increment: 1
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0\r
+\r
+description: >\r
+ Template for deploying servers based on policies.\r
+\r
+imports:\r
+ - ../custom_types/senlin_cluster_policies.yaml\r
+\r
+topology_template:\r
+ node_templates:\r
+ my_server_1:\r
+ type: tosca.nodes.Compute\r
+ capabilities:\r
+ host:\r
+ properties:\r
+ num_cpus: 2\r
+ disk_size: 10 GB\r
+ mem_size: 512 MB\r
+ os:\r
+ properties:\r
+ # host Operating System image properties\r
+ architecture: x86_64\r
+ type: Linux\r
+ distribution: RHEL\r
+ version: 6.5\r
+ my_port_1:\r
+ type: tosca.nodes.network.Port\r
+ requirements:\r
+ - link:\r
+ node: my_network_1\r
+ - binding:\r
+ node: my_server_1\r
+ my_network_1:\r
+ type: tosca.nodes.network.Network\r
+ properties:\r
+ network_name: net0\r
+ policies:\r
+ - cluster_scaling:\r
+ type: tosca.policies.Scaling.Cluster\r
+ description: Cluster node autoscaling\r
+ targets: [my_server_1]\r
+ triggers:\r
+ scale_out:\r
+ description: trigger\r
+ event_type:\r
+ type: tosca.events.resource.cpu.utilization\r
+ metrics: cpu_util\r
+ implementation: Ceilometer\r
+ condition:\r
+ constraint: utilization greater_than 50%\r
+ period: 60\r
+ evaluations: 1\r
+ method: average\r
+ action:\r
+ scale_out:\r
+ type: SCALE_OUT\r
+ implementation: Senlin.webhook\r
+ properties:\r
+ min_instances: 2\r
+ max_instances: 10\r
+ default_instances: 3\r
+ increment: 1\r
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0\r
+\r
+description: >\r
+ The TOSCA Policy Type definition that is used to govern\r
+ Senlin policies applied to TOSCA nodes or groups of nodes\r
+\r
+policy_types:\r
+ tosca.policies.Scaling.Cluster:\r
+ derived_from: tosca.policies.Scaling\r
+ description: The TOSCA Policy Type definition that is used to govern\r
+ scaling of TOSCA nodes or groups of nodes.
\ No newline at end of file
--- /dev/null
+heat_template_version: 2013-05-23
+description: Tacker Scaling template
+resources:
+ my_server_1:
+ type: OS::Nova::Server
+ properties:
+ flavor: m1.medium
+ user_data_format: SOFTWARE_CONFIG
+ software_config_transport: POLL_SERVER_HEAT
+ image: rhel-6.5-test-image
--- /dev/null
+heat_template_version: 2013-05-23
+
+description: >
+ Template for deploying servers based on policies.
+
+parameters: {}
+resources:
+ asg_group:
+ type: OS::Heat::AutoScalingGroup
+ properties:
+ min_size: 2
+ desired_capacity: 3
+ resource:
+ type: asg_res.yaml
+ max_size: 10
+ asg_scale_out:
+ type: OS::Heat::ScalingPolicy
+ properties:
+ auto_scaling_group_id:
+ get_resource: asg_group
+ adjustment_type: change_in_capacity
+ scaling_adjustment: 1
+ asg_scale_in:
+ type: OS::Heat::ScalingPolicy
+ properties:
+ auto_scaling_group_id:
+ get_resource: asg_group
+ adjustment_type: change_in_capacity
+ scaling_adjustment: -1
+ asg_alarm:
+ type: OS::Aodh::Alarm
+ properties:
+ meter_name: cpu_util
+ description: Simple node autoscaling
+ period: 60
+ statistic: avg
+ threshold: 1
+ comparison_operator: gt
+outputs: {}
\ No newline at end of file
--- /dev/null
+heat_template_version: 2016-04-08\r
+\r
+description: >\r
+ Template for deploying servers based on policies.\r
+\r
+parameters: {}\r
+resources:\r
+ my_server_1:\r
+ type: OS::Senlin::Profile\r
+ properties:\r
+ type: os.nova.server-1.0\r
+ properties:\r
+ flavor: m1.medium\r
+ image: rhel-6.5-test-image\r
+ networks:\r
+ - network: net0\r
+ cluster_scaling_scale_out:\r
+ type: OS::Senlin::Policy\r
+ properties:\r
+ bindings:\r
+ - cluster:\r
+ get_resource: my_server_1_cluster\r
+ type: senlin.policy.scaling-1.0\r
+ properties:\r
+ adjustment:\r
+ type: CHANGE_IN_CAPACITY\r
+ number: 1\r
+ event: CLUSTER_SCALE_OUT\r
+ my_server_1_cluster:\r
+ type: OS::Senlin::Cluster\r
+ properties:\r
+ profile:\r
+ get_resource: my_server_1\r
+ min_size: 2\r
+ max_size: 10\r
+ desired_capacity: 3\r
+ my_server_1_scale_out_receiver:\r
+ type: OS::Senlin::Receiver\r
+ properties:\r
+ action: CLUSTER_SCALE_OUT\r
+ cluster:\r
+ get_resource: my_server_1_cluster\r
+ type: webhook\r
+ scale_out_alarm:\r
+ type: OS::Aodh::Alarm\r
+ properties:\r
+ meter_name: cpu_util\r
+ alarm_actions:\r
+ - get_attr:\r
+ - my_server_1_scale_out_receiver\r
+ - channel\r
+ - alarm_url\r
+ description: Cluster node autoscaling\r
+ evaluation_periods: 1\r
+ repeat_actions: True\r
+ period: 60\r
+ statistic: avg\r
+ threshold: 50\r
+ comparison_operator: gt\r
+outputs: {}\r
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
- TOSCA template to test artifact usage
+ TOSCA template to test file and Ansible Galaxy role artifacts
parameters: {}
resources:
+ customwebserver_install_roles_deploy:
+ type: OS::Heat::SoftwareDeployment
+ properties:
+ config:
+ get_resource: customwebserver_install_roles_config
+ server:
+ get_resource: server
+ signal_transport: HEAT_SIGNAL
+ customwebserver_install_roles_config:
+ type: OS::Heat::SoftwareConfig
+ properties:
+ config: |
+ #!/bin/bash
+ ansible-galaxy install user.role
+ group: script
customwebserver_create_deploy:
type: OS::Heat::SoftwareDeployment
properties:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test custom type with an interface defined on it
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test custom type with an interface defined on it,
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test custom type with an interface defined on it,
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
This TOSCA simple profile deploys nodejs, mongodb, elasticsearch, logstash and
mongodb_ip:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
mongodb_ip:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: mongo_server
signal_transport: HEAT_SIGNAL
logstash_ip:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
logstash_ip:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
elasticsearch_ip:
get_attr:
- elasticsearch_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: logstash_server
signal_transport: HEAT_SIGNAL
elasticsearch_ip:
get_attr:
- elasticsearch_server
- - first_address
+ - networks
+ - private
+ - 0
kibana_ip:
get_attr:
- kibana_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: kibana_server
signal_transport: HEAT_SIGNAL
value:
get_attr:
- app_server
- - first_address
+ - networks
+ - private
+ - 0
mongodb_url:
description: URL for the mongodb server.
value:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
logstash_url:
description: URL for the logstash server.
value:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
elasticsearch_url:
description: URL for the elasticsearch server.
value:
get_attr:
- elasticsearch_server
- - first_address
+ - networks
+ - private
+ - 0
kibana_url:
description: URL for the kibana server.
value:
get_attr:
- kibana_server
- - first_address
+ - networks
+ - private
+ - 0
+
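The recurring change in these hunks replaces the flat `first_address` attribute with the nested path `[networks, private, 0]`. Conceptually, `get_attr` walks the resource's attribute tree key by key; a small sketch (attribute values are made up for illustration):

```python
# Sketch of nested get_attr resolution, as used in this diff.
# The addresses below are illustrative, not real template output.
attributes = {
    "mongo_server": {
        "first_address": "10.0.0.5",           # legacy flat attribute
        "networks": {"private": ["10.0.0.5"]}  # per-network address lists
    }
}

def get_attr(resource, *path):
    value = attributes[resource]
    for key in path:
        value = value[key]  # dict key or list index, step by step
    return value

# The new nested path yields the same address as the old flat attribute.
assert get_attr("mongo_server", "networks", "private", 0) == \
    get_attr("mongo_server", "first_address")
```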
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
This TOSCA simple profile deploys nodejs, mongodb, elasticsearch, logstash and
mongodb_ip:
get_attr:
- mongo_server
- - first_address
-
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
mongodb_ip:
get_attr:
- mongo_server
- - first_address
-
+ - networks
+ - private
+ - 0
server:
get_resource: mongo_server
signal_transport: HEAT_SIGNAL
logstash_ip:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
logstash_ip:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
elasticsearch_ip:
get_attr:
- elasticsearch_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: logstash_server
signal_transport: HEAT_SIGNAL
elasticsearch_ip:
get_attr:
- elasticsearch_server
- - first_address
+ - networks
+ - private
+ - 0
kibana_ip:
get_attr:
- kibana_server
- - first_address
-
+ - networks
+ - private
+ - 0
server:
get_resource: kibana_server
signal_transport: HEAT_SIGNAL
value:
get_attr:
- app_server
- - first_address
+ - networks
+ - private
+ - 0
mongodb_url:
description: URL for the mongodb server.
value:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
logstash_url:
description: URL for the logstash server.
value:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
elasticsearch_url:
description: URL for the elasticsearch server.
value:
get_attr:
- elasticsearch_server
- - first_address
+ - networks
+ - private
+ - 0
kibana_url:
description: URL for the kibana server.
value:
get_attr:
- kibana_server
- - first_address
+ - networks
+ - private
+ - 0
+
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test get_operation_output by exchanging ssh public key
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying a server with custom properties for image, flavor and key_name.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying a server with custom properties for image, flavor and key_name.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test get_* functions semantic
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying a single server with predefined properties.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying a single server with predefined properties.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
A template to test host assignment for translated hot resources.
logstash_ip:
get_attr:
- logstash_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with nodejs and mongodb.
mongodb_ip:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: mongo_server
signal_transport: HEAT_SIGNAL
mongodb_ip:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
server:
get_resource: app_server
signal_transport: HEAT_SIGNAL
value:
get_attr:
- mongo_server
- - first_address
+ - networks
+ - private
+ - 0
nodejs_url:
description: URL for the nodejs server, http://<IP>:3000
value:
get_attr:
- app_server
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test usage of different script types like Ansible and Puppet
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with wordpress, web server and mysql on the same server.
value:
get_attr:
- server
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with wordpress, web server and mysql on the same server.
value:
get_attr:
- server
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Tosca template for creating an object storage service.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile that just defines a single compute instance and selects a
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile that just defines a single compute instance and selects a
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile that just defines a single compute instance and selects a
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile that just defines a single compute instance and selects a
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with a software component.
--- /dev/null
+heat_template_version: 2013-05-23
+
+description: >
+ TOSCA simple profile with a software component.
+
+parameters:
+ cpus:
+ type: number
+ description: Number of CPUs for the server.
+ default: 1
+ constraints:
+ - allowed_values:
+ - 1
+ - 2
+ - 4
+ - 8
+
+resources:
+ server1:
+ type: OS::Nova::Server
+ properties:
+ flavor: m1.small
+ image: ubuntu-software-config-os-init
+ user_data_format: SOFTWARE_CONFIG
+ software_config_transport: POLL_SERVER_HEAT
+
+ server2:
+ type: OS::Nova::Server
+ properties:
+ flavor: m1.small
+ image: ubuntu-software-config-os-init
+ user_data_format: SOFTWARE_CONFIG
+ software_config_transport: POLL_SERVER_HEAT
+
+ my_software_create_deploy:
+ type: OS::Heat::SoftwareDeploymentGroup
+ properties:
+ config:
+ get_resource: my_software_create_config
+ signal_transport: HEAT_SIGNAL
+ servers:
+ server1:
+ get_resource: server1
+ server2:
+ get_resource: server2
+
+ my_software_create_config:
+ type: OS::Heat::SoftwareConfig
+ properties:
+ config:
+ get_file: software_install.sh
+ group: script
+
+ my_software_start_deploy:
+ type: OS::Heat::SoftwareDeploymentGroup
+ properties:
+ config:
+ get_resource: my_software_start_config
+ signal_transport: HEAT_SIGNAL
+ servers:
+ server1:
+ get_resource: server1
+ server2:
+ get_resource: server2
+ depends_on:
+ - my_software_create_deploy
+
+ my_software_start_config:
+ type: OS::Heat::SoftwareConfig
+ properties:
+ config:
+ get_file: software_start.sh
+ group: script
+
+outputs: {}
value:
get_attr:
- MM_Active_Host
- - first_address
+ - networks
+ - private
+ - 0
private_ip_of_CM:
description: The private IP address of the CM.
value:
get_attr:
- CM_Active_Host
- - first_address
+ - networks
+ - private
+ - 0
private_ip_of_DM:
description: The private IP address of the DM.
value:
get_attr:
- DM_Host
- - first_address
+ - networks
+ - private
+ - 0
private_ip_of_LB:
description: The private IP address of the LB.
value:
get_attr:
- LB_Host
- - first_address
+ - networks
+ - private
+ - 0
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with a web application.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA template to test Compute node with interface
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying a single server with predefined properties.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with 1 server bound to a new network
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with 1 server bound to 3 networks
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with 1 server bound to an existing network
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with 2 servers bound to the 1 network
--- /dev/null
+heat_template_version: 2013-05-23
+description: Tacker Scaling template
+resources:
+ VDU1:
+ type: OS::Nova::Server
+ properties:
+ user_data_format: SOFTWARE_CONFIG
+ software_config_transport: POLL_SERVER_HEAT
+ availability_zone: nova
+ image: cirros-0.3.4-x86_64-uec
+ flavor: m1.tiny
+ networks:
+ - port: { get_resource: CP1 }
+ config_drive: false
+ CP1:
+ type: OS::Neutron::Port
+ properties:
+ anti_spoofing_protection: false
+ management: true
+ network: net_mgmt
+ CP2:
+ type: OS::Neutron::Port
+ properties:
+ anti_spoofing_protection: false
+ management: true
+ network: net_mgmt
+ VDU2:
+ type: OS::Nova::Server
+ properties:
+ user_data_format: SOFTWARE_CONFIG
+ software_config_transport: POLL_SERVER_HEAT
+ availability_zone: nova
+ image: cirros-0.3.4-x86_64-uec
+ flavor: m1.tiny
+ networks:
+ - port: { get_resource: CP2 }
+ config_drive: false
+ VL1:
+ type: OS::Neutron::Net
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying a single server with predefined properties.
parameters: {}
resources:
+
VDU1:
type: OS::Nova::Server
properties:
- flavor: m1.tiny
+ flavor: m1.medium
image: rhel-6.5-test-image
networks:
- port: { get_resource: CP1 }
user_data_format: SOFTWARE_CONFIG
software_config_transport: POLL_SERVER_HEAT
+
depends_on:
- VDU2
- BlockStorage
VDU2:
type: OS::Nova::Server
properties:
- flavor: m1.tiny
+ flavor: m1.medium
image: rhel-6.5-test-image
networks:
- port: { get_resource: CP2 }
BlockStorage:
type: OS::Cinder::Volume
properties:
- size: 1
+ size: 10
tosca.relationships.attachesto_1:
type: OS::Cinder::VolumeAttachment
properties:
instance_uuid:
get_resource: VDU1
- mountpoint: /dev/vdb1
+ mountpoint: /data
volume_id:
get_resource: BlockStorage
--- /dev/null
+heat_template_version: 2013-05-23
+
+description: >
+ Template for deploying servers based on policies.
+
+parameters: {}
+resources:
+ SP1_scale_out:
+ type: OS::Heat::ScalingPolicy
+ properties:
+ auto_scaling_group_id:
+ get_resource: SP1_group
+ adjustment_type: change_in_capacity
+ scaling_adjustment: 1
+ SP1_group:
+ type: OS::Heat::AutoScalingGroup
+ properties:
+ min_size: 1
+ desired_capacity: 2
+ resource:
+ type: SP1_res.yaml
+ max_size: 3
+ SP1_scale_in:
+ type: OS::Heat::ScalingPolicy
+ properties:
+ auto_scaling_group_id:
+ get_resource: SP1_group
+ adjustment_type: change_in_capacity
+ scaling_adjustment: -1
+outputs: {}
\ No newline at end of file
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
Template for deploying the nodes based on given policies.
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with server and attached block storage using the normative
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with a Single Block Storage node shared by 2-Tier
value:
get_attr:
- my_web_app_tier_1
- - first_address
+ - networks
+ - private
+ - 0
private_ip_2:
description: The private IP address of the applications second tier.
value:
get_attr:
- my_web_app_tier_2
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with a Single Block Storage node shared by 2-Tier
value:
get_attr:
- my_web_app_tier_1
- - first_address
+ - networks
+ - private
+ - 0
private_ip_2:
description: The private IP address of the applications second tier.
value:
get_attr:
- my_web_app_tier_2
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with a single Block Storage node shared by 2-Tier
value:
get_attr:
- my_web_app_tier_1
- - first_address
+ - networks
+ - private
+ - 0
private_ip_2:
description: The private IP address of the applications second tier.
value:
get_attr:
- my_web_app_tier_2
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with a single Block Storage node shared by 2-Tier
value:
get_attr:
- my_web_app_tier_1
- - first_address
+ - networks
+ - private
+ - 0
private_ip_2:
description: The private IP address of the applications second tier.
value:
get_attr:
- my_web_app_tier_2
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with server and attached block storage using a custom
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with server and attached block storage using a named
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
volume_id:
description: The volume id of the block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with 2 servers each with different attached block storage.
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
server_ip_2:
description: The private IP address of the applications second server.
value:
get_attr:
- my_server2
- - first_address
+ - networks
+ - private
+ - 0
volume_id_1:
description: The volume id of the first block storage instance.
value:
-heat_template_version: 2014-10-16
+heat_template_version: 2013-05-23
description: >
TOSCA simple profile with 2 servers each with different attached block storage.
value:
get_attr:
- my_server
- - first_address
+ - networks
+ - private
+ - 0
server_ip_2:
description: The private IP address of the applications second server.
value:
get_attr:
- my_server2
- - first_address
+ - networks
+ - private
+ - 0
volume_id_1:
description: The volume id of the first block storage instance.
value:
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: TOSCA template to test Compute node with interface
+
+node_types:
+ tosca.nodes.CustomCompute:
+ derived_from: tosca.nodes.Compute
+ properties:
+ install_path:
+ type: string
+ default: /opt
+ interfaces:
+ Standard:
+ create:
+ implementation: install.sh
+ inputs:
+ install_path: { get_property: [ SELF, install_path ] }
+
+topology_template:
+ node_templates:
+
+ softwarecomponent_without_behavior:
+ type: tosca.nodes.SoftwareComponent
+ requirements:
+ - host: server
+
+ softwarecomponent_depending_on_customcompute_install:
+ type: tosca.nodes.SoftwareComponent
+ interfaces:
+ Standard:
+ create:
+ implementation: post_install.sh
+ requirements:
+ - host: server
+
+ server:
+ type: tosca.nodes.CustomCompute
+ capabilities:
+ host:
+ properties:
+ num_cpus: 1
+ mem_size: 1 GB
+ os:
+ properties:
+ type: Linux
+ distribution: Ubuntu
+ version: 12.04
+ architecture: x86_64
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >
+ TOSCA template to test usage of different script types like
+ Ansible and Puppet one.
+
+topology_template:
+
+ node_templates:
+ customwebserver:
+ type: tosca.nodes.WebServer
+ requirements:
+ - host: server
+ interfaces:
+ Standard:
+ create:
+ implementation: install.yaml
+ configure:
+ implementation: configure.yml
+ start:
+ implementation: start.pp
+
+ customwebserver2:
+ type: tosca.nodes.WebServer
+ requirements:
+ - host: server
+ interfaces:
+ Standard:
+ create:
+ implementation: install.sh
+ configure:
+ implementation: configure.py
+ start:
+ implementation: start.sh
+
+ server:
+ type: tosca.nodes.Compute
+ capabilities:
+ host:
+ properties:
+ num_cpus: 1
+ mem_size: 1 GB
+ os:
+ properties:
+ type: Linux
+ distribution: Ubuntu
+ version: 12.04
+ architecture: x86_64
--- /dev/null
+tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
+
+data_types:
+ tosca.datatypes.tacker.ActionMap:
+ properties:
+ trigger:
+ type: string
+ required: true
+ action:
+ type: string
+ required: true
+ params:
+ type: map
+ entry_schema:
+ type: string
+ required: false
+
+ tosca.datatypes.tacker.MonitoringParams:
+ properties:
+ monitoring_delay:
+ type: int
+ required: false
+ count:
+ type: int
+ required: false
+ interval:
+ type: int
+ required: false
+ timeout:
+ type: int
+ required: false
+ retry:
+ type: int
+ required: false
+ port:
+ type: int
+ required: false
+
+ tosca.datatypes.tacker.MonitoringType:
+ properties:
+ name:
+ type: string
+ required: true
+ actions:
+ type: map
+ required: true
+ parameters:
+ type: tosca.datatypes.tacker.MonitoringParams
+ required: false
+
+ tosca.datatypes.compute_properties:
+ properties:
+ num_cpus:
+ type: integer
+ required: false
+ mem_size:
+ type: string
+ required: false
+ disk_size:
+ type: string
+ required: false
+ mem_page_size:
+ type: string
+ required: false
+ numa_node_count:
+ type: integer
+ constraints:
+ - greater_or_equal: 2
+ required: false
+ numa_nodes:
+ type: map
+ required: false
+ cpu_allocation:
+ type: map
+ required: false
+
+policy_types:
+ tosca.policies.tacker.Placement:
+ derived_from: tosca.policies.Root
+
+ tosca.policies.tacker.Failure:
+ derived_from: tosca.policies.Root
+ action:
+ type: string
+
+ tosca.policies.tacker.Failure.Respawn:
+ derived_from: tosca.policies.tacker.Failure
+ action: respawn
+
+ tosca.policies.tacker.Failure.Terminate:
+ derived_from: tosca.policies.tacker.Failure
+ action: log_and_kill
+
+ tosca.policies.tacker.Failure.Log:
+ derived_from: tosca.policies.tacker.Failure
+ action: log
+
+ tosca.policies.tacker.Monitoring:
+ derived_from: tosca.policies.Root
+ properties:
+ name:
+ type: string
+ required: true
+ parameters:
+ type: map
+ entry_schema:
+ type: string
+ required: false
+ actions:
+ type: map
+ entry_schema:
+ type: string
+ required: true
+
+ tosca.policies.tacker.Monitoring.NoOp:
+ derived_from: tosca.policies.tacker.Monitoring
+ properties:
+ name: noop
+
+ tosca.policies.tacker.Monitoring.Ping:
+ derived_from: tosca.policies.tacker.Monitoring
+ properties:
+ name: ping
+
+ tosca.policies.tacker.Monitoring.HttpPing:
+ derived_from: tosca.policies.tacker.Monitoring.Ping
+ properties:
+ name: http-ping
+
+ tosca.policies.tacker.Alarming:
+ derived_from: tosca.policies.Monitoring
+ triggers:
+ resize_compute:
+ event_type:
+ type: map
+ entry_schema:
+ type: string
+ required: true
+ metrics:
+ type: string
+ required: true
+ condition:
+ type: map
+ entry_schema:
+ type: string
+ required: false
+ action:
+ type: map
+ entry_schema:
+ type: string
+ required: true
+
+ tosca.policies.tacker.Scaling:
+ derived_from: tosca.policies.Scaling
+ description: Defines policy for scaling the given targets.
+ properties:
+ increment:
+ type: integer
+ required: true
+ description: Number of nodes to add or remove during the scale out/in.
+ targets:
+ type: list
+ entry_schema:
+ type: string
+ required: true
+        description: List of nodes to be scaled.
+ min_instances:
+ type: integer
+ required: true
+        description: Minimum number of instances (lower bound when scaling in).
+ max_instances:
+ type: integer
+ required: true
+        description: Maximum number of instances (upper bound when scaling out).
+ default_instances:
+ type: integer
+ required: true
+ description: Initial number of instances.
+ cooldown:
+ type: integer
+ required: false
+ default: 120
+ description: Wait time (in seconds) between consecutive scaling operations. During the cooldown period...
--- /dev/null
+tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
+
+data_types:
+ tosca.nfv.datatypes.pathType:
+ properties:
+ forwarder:
+ type: string
+ required: true
+ capability:
+ type: string
+ required: true
+
+ tosca.nfv.datatypes.aclType:
+ properties:
+ eth_type:
+ type: string
+ required: false
+ eth_src:
+ type: string
+ required: false
+ eth_dst:
+ type: string
+ required: false
+ vlan_id:
+ type: integer
+ constraints:
+ - in_range: [ 1, 4094 ]
+ required: false
+ vlan_pcp:
+ type: integer
+ constraints:
+ - in_range: [ 0, 7 ]
+ required: false
+ mpls_label:
+ type: integer
+ constraints:
+ - in_range: [ 16, 1048575]
+ required: false
+ mpls_tc:
+ type: integer
+ constraints:
+ - in_range: [ 0, 7 ]
+ required: false
+ ip_dscp:
+ type: integer
+ constraints:
+ - in_range: [ 0, 63 ]
+ required: false
+ ip_ecn:
+ type: integer
+ constraints:
+ - in_range: [ 0, 3 ]
+ required: false
+ ip_src_prefix:
+ type: string
+ required: false
+ ip_dst_prefix:
+ type: string
+ required: false
+ ip_proto:
+ type: integer
+ constraints:
+ - in_range: [ 1, 254 ]
+ required: false
+ destination_port_range:
+ type: string
+ required: false
+ source_port_range:
+ type: string
+ required: false
+ network_src_port_id:
+ type: string
+ required: false
+ network_dst_port_id:
+ type: string
+ required: false
+ network_id:
+ type: string
+ required: false
+ network_name:
+ type: string
+ required: false
+ tenant_id:
+ type: string
+ required: false
+ icmpv4_type:
+ type: integer
+ constraints:
+ - in_range: [ 0, 254 ]
+ required: false
+ icmpv4_code:
+ type: integer
+ constraints:
+ - in_range: [ 0, 15 ]
+ required: false
+ arp_op:
+ type: integer
+ constraints:
+ - in_range: [ 1, 25 ]
+ required: false
+ arp_spa:
+ type: string
+ required: false
+ arp_tpa:
+ type: string
+ required: false
+ arp_sha:
+ type: string
+ required: false
+ arp_tha:
+ type: string
+ required: false
+ ipv6_src:
+ type: string
+ required: false
+ ipv6_dst:
+ type: string
+ required: false
+ ipv6_flabel:
+ type: integer
+ constraints:
+ - in_range: [ 0, 1048575]
+ required: false
+ icmpv6_type:
+ type: integer
+ constraints:
+ - in_range: [ 0, 255]
+ required: false
+ icmpv6_code:
+ type: integer
+ constraints:
+ - in_range: [ 0, 7]
+ required: false
+ ipv6_nd_target:
+ type: string
+ required: false
+ ipv6_nd_sll:
+ type: string
+ required: false
+ ipv6_nd_tll:
+ type: string
+ required: false
+
+ tosca.nfv.datatypes.policyType:
+ properties:
+ type:
+ type: string
+ required: false
+ constraints:
+ - valid_values: [ ACL ]
+ criteria:
+ type: list
+ required: true
+ entry_schema:
+ type: tosca.nfv.datatypes.aclType
+
+node_types:
+ tosca.nodes.nfv.VDU.Tacker:
+ derived_from: tosca.nodes.nfv.VDU
+ capabilities:
+ nfv_compute:
+ type: tosca.datatypes.compute_properties
+ properties:
+ name:
+ type: string
+ required: false
+ image:
+# type: tosca.artifacts.Deployment.Image.VM
+ type: string
+ required: false
+ flavor:
+ type: string
+ required: false
+ availability_zone:
+ type: string
+ required: false
+ metadata:
+ type: map
+ entry_schema:
+ type: string
+ required: false
+ config_drive:
+ type: boolean
+ default: false
+ required: false
+
+ placement_policy:
+# type: tosca.policies.tacker.Placement
+ type: string
+ required: false
+
+ monitoring_policy:
+# type: tosca.policies.tacker.Monitoring
+# type: tosca.datatypes.tacker.MonitoringType
+ type: map
+ required: false
+
+ config:
+ type: string
+ required: false
+
+ mgmt_driver:
+ type: string
+ default: noop
+ required: false
+
+ service_type:
+ type: string
+ required: false
+
+ user_data:
+ type: string
+ required: false
+
+ user_data_format:
+ type: string
+ required: false
+
+ key_name:
+ type: string
+ required: false
+
+ tosca.nodes.nfv.CP.Tacker:
+ derived_from: tosca.nodes.nfv.CP
+ properties:
+ mac_address:
+ type: string
+ required: false
+ name:
+ type: string
+ required: false
+ management:
+ type: boolean
+ required: false
+ anti_spoofing_protection:
+ type: boolean
+ required: false
+ security_groups:
+ type: list
+ required: false
+ type:
+ type: string
+ required: false
+ constraints:
+ - valid_values: [ sriov, vnic ]
+
+ tosca.nodes.nfv.FP.Tacker:
+ derived_from: tosca.nodes.Root
+ properties:
+ id:
+ type: integer
+ required: false
+ policy:
+ type: tosca.nfv.datatypes.policyType
+ required: true
+      description: Policy used to match traffic for this FP
+ path:
+ type: list
+ required: true
+ entry_schema:
+ type: tosca.nfv.datatypes.pathType
--- /dev/null
+tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
+
+description: >
+ Template for deploying servers based on policies.
+
+imports:
+ - tacker_defs.yaml
+ - tacker_nfv_defs.yaml
+
+topology_template:
+ node_templates:
+ VDU1:
+ type: tosca.nodes.nfv.VDU.Tacker
+ properties:
+ image: cirros-0.3.4-x86_64-uec
+ mgmt_driver: noop
+ availability_zone: nova
+ flavor: m1.tiny
+
+ CP1:
+ type: tosca.nodes.nfv.CP.Tacker
+ properties:
+ management: true
+ order: 0
+ anti_spoofing_protection: false
+ requirements:
+ - virtualLink:
+ node: VL1
+ - virtualBinding:
+ node: VDU1
+
+ VDU2:
+ type: tosca.nodes.nfv.VDU.Tacker
+ properties:
+ image: cirros-0.3.4-x86_64-uec
+ mgmt_driver: noop
+ availability_zone: nova
+ flavor: m1.tiny
+
+ CP2:
+ type: tosca.nodes.nfv.CP.Tacker
+ properties:
+ management: true
+ order: 0
+ anti_spoofing_protection: false
+ requirements:
+ - virtualLink:
+ node: VL1
+ - virtualBinding:
+ node: VDU2
+
+ VL1:
+ type: tosca.nodes.nfv.VL
+ properties:
+ network_name: net_mgmt
+ vendor: Tacker
+
+ policies:
+ - SP1:
+ type: tosca.policies.tacker.Scaling
+ targets: [VDU1, VDU2]
+ properties:
+ increment: 1
+ cooldown: 120
+ min_instances: 1
+ max_instances: 3
+ default_instances: 2
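A note on the `SP1` scaling policy above (commentary, not patch content): its properties are internally consistent only when `default_instances` lies within `[min_instances, max_instances]`. A hedged sketch of such a validation, using the values from the template:

```python
# Hypothetical validation of the SP1 policy properties shown above.
# Not heat-translator or tacker code.
policy = {
    "increment": 1,
    "cooldown": 120,
    "min_instances": 1,
    "max_instances": 3,
    "default_instances": 2,
}

def validate_scaling(p):
    # Initial size must fall inside the allowed scaling range.
    if not p["min_instances"] <= p["default_instances"] <= p["max_instances"]:
        raise ValueError("default_instances out of range")
    # A scale step of zero or less would make the policy a no-op or invalid.
    if p["increment"] < 1:
        raise ValueError("increment must be positive")
    return True

assert validate_scaling(policy)
```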
--- /dev/null
+tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
+
+description: Template for deploying a single server with predefined properties.
+
+topology_template:
+ node_templates:
+
+ VDU1:
+ type: tosca.nodes.nfv.VDU
+ capabilities:
+ host:
+ properties:
+ num_cpus: 2
+ disk_size: 10 GB
+ mem_size: 512 MB
+ # Guest Operating System properties
+ os:
+ properties:
+ # host Operating System image properties
+ architecture: x86_64
+ type: Linux
+ distribution: RHEL
+ version: 6.5
+ requirements:
+ - high_availability: VDU2
+ - local_storage:
+ node: BlockStorage
+ relationship:
+ type: tosca.relationships.AttachesTo
+ properties:
+ location: /data
+
+
+ BlockStorage:
+ type: tosca.nodes.BlockStorage
+ properties:
+ size: 10 GB
+
+ VDU2:
+ type: tosca.nodes.nfv.VDU
+ capabilities:
+ host:
+ properties:
+ num_cpus: 2
+ disk_size: 10 GB
+ mem_size: 512 MB
+ # Guest Operating System properties
+ os:
+ properties:
+ # host Operating System image properties
+ architecture: x86_64
+ type: Linux
+ distribution: RHEL
+ version: 6.5
+
+ CP1:
+ type: tosca.nodes.nfv.CP
+ properties:
+ ip_address: 192.168.0.55
+ requirements:
+ - virtualLink:
+ node: VL1
+# relationship: tosca.relationships.nfv.VirtualLinksTo
+ - virtualBinding:
+ node: VDU1
+ relationship: tosca.relationships.nfv.VirtualBindsTo
+
+ CP2:
+ type: tosca.nodes.nfv.CP
+ properties:
+ ip_address: 192.168.0.56
+ requirements:
+ - virtualLink:
+ node: VL1
+# relationship: tosca.relationships.nfv.VirtualLinksTo
+ - virtualBinding:
+ node: VDU2
+ relationship: tosca.relationships.nfv.VirtualBindsTo
+
+ VL1:
+ type: tosca.nodes.nfv.VL
+ properties:
+ vendor: ACME
+ cidr: '192.168.0.0/24'
+ start_ip: '192.168.0.50'
+ end_ip: '192.168.0.200'
+ gateway_ip: '192.168.0.1'
+ network_name: test_net
+ network_type: vxlan
+ segmentation_id: 100
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: Template for deploying the nodes based on given policies.
+
+topology_template:
+ node_templates:
+ my_server:
+ type: tosca.nodes.Compute
+ capabilities:
+ # Host container properties
+ host:
+ properties:
+ num_cpus: 2
+ disk_size: 10 GB
+ mem_size: 512 MB
+ # Guest Operating System properties
+ os:
+ properties:
+ # host Operating System image properties
+ architecture: x86_64
+ type: Linux
+ distribution: RHEL
+ version: 6.5
+ policies:
+ - my_compute_placement_policy:
+ type: tosca.policies.Placement
+ description: Apply my placement policy to my application’s servers
+ targets: [ my_server ]
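The `host` capability properties in the template above are what the translator matches against available Nova flavors. A hedged, self-contained sketch of best-fit selection (the flavor catalogue and function name are invented; mem_size is assumed in MB, disk_size in GB):

```python
# Hypothetical flavor catalogue; mem_size in MB, disk_size in GB.
FLAVORS = {
    "m1.small":  {"num_cpus": 1, "mem_size": 2048, "disk_size": 20},
    "m1.medium": {"num_cpus": 2, "mem_size": 4096, "disk_size": 40},
}


def best_fit_flavor(num_cpus, mem_size, disk_size):
    """Pick the smallest flavor that satisfies all host properties."""
    candidates = [
        (name, spec) for name, spec in FLAVORS.items()
        if spec["num_cpus"] >= num_cpus
        and spec["mem_size"] >= mem_size
        and spec["disk_size"] >= disk_size
    ]
    if not candidates:
        return None
    return min(candidates,
               key=lambda kv: (kv[1]["num_cpus"], kv[1]["mem_size"]))[0]


print(best_fit_flavor(2, 512, 10))  # -> m1.medium
```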
tosca_definitions_version: tosca_simple_yaml_1_0
-description: TOSCA template to test artifact usage
+description: TOSCA template to test file and Ansible Galaxy role artifacts
node_types:
tosca.nodes.CustomWebServer:
derived_from: tosca.nodes.WebServer
artifacts:
+ my_galaxyansible_role:
+ file: user.role
+ type: tosca.artifacts.AnsibleGalaxy.role
web_content:
file: http://www.mycompany.org/content.tgz
type: tosca.artifacts.File
description: Template for deploying a server with custom properties for image, flavor and key_name.
node_types:
- tosca.nodes.nfv.VDU:
+ tosca.nodes.nfv.MyType:
derived_from: tosca.nodes.Compute
properties:
key_name:
node_templates:
my_server:
- type: tosca.nodes.nfv.VDU
+ type: tosca.nodes.nfv.MyType
properties:
flavor: m1.medium
image: rhel-6.5-test-image
description: TOSCA template to test get_* functions semantic
node_types:
+ tosca.capabilities.custom.Endpoint:
+ derived_from: tosca.capabilities.Endpoint
+ attributes:
+ credential:
+ type: tosca.datatypes.Credential
+
tosca.capabilities.MyFeature:
derived_from: tosca.capabilities.Root
properties:
myfeature:
type: tosca.capabilities.MyFeature
+ tosca.nodes.custom.Compute:
+ derived_from: tosca.nodes.Compute
+ capabilities:
+ endpoint:
+ type: tosca.capabilities.custom.Endpoint
+
topology_template:
inputs:
map_val:
node_templates:
server:
- type: tosca.nodes.Compute
+ type: tosca.nodes.custom.Compute
capabilities:
host:
properties:
test_list_of_functions:
value: [ { get_property: [ myapp, myfeature, my_map, test_key ] }, { get_property: [ myapp, myfeature, my_map, test_key_static ] } ]
+
+ # should not be translated : complex type
+ credential:
+ value: { get_attribute: [server, endpoint, credential] }
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >
+  Template to test unsupported translation. Load Balancer is a
+  valid TOSCA type but not supported in translator yet.
+
+topology_template:
+ node_templates:
+ simple_load_balancer:
+ type: tosca.nodes.LoadBalancer
+ capabilities:
+ client:
+ properties:
+ network_name: PUBLIC
+ floating: true
+ dns_name: http://mycompany.com/
\ No newline at end of file
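A hedged sketch of the kind of guard that produces the "unsupported type" failure this template exercises (the type map and error class below are simplified stand-ins, not the translator's real ones):

```python
# Simplified stand-in for the translator's supported-type registry.
SUPPORTED = {"tosca.nodes.Compute": "OS::Nova::Server"}


class UnsupportedTypeError(Exception):
    pass


def hot_type(tosca_type):
    """Map a TOSCA node type to its HOT resource type, or fail loudly."""
    try:
        return SUPPORTED[tosca_type]
    except KeyError:
        raise UnsupportedTypeError(
            "Type %r is valid TOSCA but not supported yet" % tosca_type
        ) from None


print(hot_type("tosca.nodes.Compute"))  # OS::Nova::Server
```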
--- /dev/null
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >
+ TOSCA simple profile with a software component.
+
+topology_template:
+ inputs:
+ cpus:
+ type: integer
+ description: Number of CPUs for the server.
+ constraints:
+ - valid_values: [ 1, 2, 4, 8 ]
+ default: 1
+
+ node_templates:
+ my_software:
+ type: tosca.nodes.SoftwareComponent
+ properties:
+ component_version: 1.0
+ requirements:
+ - host: server1
+ - host: server2
+ interfaces:
+ Standard:
+ create: software_install.sh
+ start: software_start.sh
+
+ server1:
+ type: tosca.nodes.Compute
+ capabilities:
+ host:
+ properties:
+ disk_size: 10 GB
+ num_cpus: { get_input: cpus }
+ mem_size: 1024 MB
+ os:
+ properties:
+ architecture: x86_64
+ type: Linux
+ distribution: Ubuntu
+ version: 14.04
+ server2:
+ type: tosca.nodes.Compute
+ capabilities:
+ host:
+ properties:
+ disk_size: 10 GB
+ num_cpus: { get_input: cpus }
+ mem_size: 1024 MB
+ os:
+ properties:
+ architecture: x86_64
+ type: Linux
+ distribution: Ubuntu
+ version: 14.04
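Both servers above resolve `num_cpus` through `get_input`. A minimal sketch of how such an intrinsic function substitutes parameters, falling back to the input's default (the `resolve_inputs` helper is invented for illustration):

```python
def resolve_inputs(node, params, defaults):
    """Recursively replace {get_input: name} nodes with parameter values."""
    if isinstance(node, dict):
        if set(node) == {"get_input"}:
            name = node["get_input"]
            return params.get(name, defaults.get(name))
        return {k: resolve_inputs(v, params, defaults)
                for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_inputs(v, params, defaults) for v in node]
    return node


tpl = {"num_cpus": {"get_input": "cpus"}, "mem_size": "1024 MB"}
print(resolve_inputs(tpl, {"cpus": 2}, {"cpus": 1}))
```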
# License for the specific language governing permissions and limitations
# under the License.
-import ast
+
import json
+import mock
import os
import shutil
import tempfile
-from mock import patch
from toscaparser.common import exception
from toscaparser.utils.gettextutils import _
import translator.shell as shell
'--parameters=key'))
def test_valid_template(self):
- try:
- shell.main([self.template_file, self.template_type])
- except Exception:
- self.fail(self.failure_msg)
+ shell.main([self.template_file, self.template_type])
def test_valid_template_without_type(self):
try:
self.assertTrue(temp_dir is None or
not os.path.exists(temp_dir))
- @patch('uuid.uuid4')
- @patch('translator.common.utils.check_for_env_variables')
- @patch('requests.post')
- @patch('translator.common.utils.get_url_for')
- @patch('translator.common.utils.get_token_id')
- @patch('os.getenv')
- @patch('translator.hot.tosca.tosca_compute.'
- 'ToscaCompute._create_nova_flavor_dict')
- @patch('translator.hot.tosca.tosca_compute.'
- 'ToscaCompute._populate_image_dict')
- def test_template_deploy_with_credentials(self, mock_populate_image_dict,
- mock_flavor_dict,
- mock_os_getenv,
- mock_token,
- mock_url, mock_post,
- mock_env,
- mock_uuid):
+ @mock.patch('uuid.uuid4')
+ @mock.patch.object(shell.TranslatorShell, '_create_stack')
+ @mock.patch('keystoneauth1.loading.load_auth_from_argparse_arguments')
+ @mock.patch('keystoneauth1.loading.load_session_from_argparse_arguments')
+ @mock.patch('translator.common.flavors.get_flavors')
+ @mock.patch('translator.common.images.get_images')
+ def test_template_deploy(self, mock_populate_image_dict,
+ mock_flavor_dict,
+ mock_keystone_session,
+ mock_keystone_auth,
+ mock_client,
+ mock_uuid):
mock_uuid.return_value = 'abcXXX-abcXXX'
- mock_env.return_value = True
mock_flavor_dict.return_value = {
'm1.medium': {'mem_size': 4096, 'disk_size': 40, 'num_cpus': 2}
}
"type": "Linux"
}
}
- mock_url.return_value = 'http://abc.com'
- mock_token.return_value = 'mock_token'
- mock_os_getenv.side_effect = ['demo', 'demo',
- 'demo', 'http://www.abc.com']
+
try:
data = {
- 'stack_name': 'heat_tosca_helloworld_abcXXX',
+ 'outputs': {},
+ 'heat_template_version': '2013-05-23',
+ 'description': 'Template for deploying a single server '
+ 'with predefined properties.\n',
'parameters': {},
- 'template': {
- 'outputs': {},
- 'heat_template_version': '2014-10-16',
- 'description': 'Template for deploying a single server '
- 'with predefined properties.\n',
- 'parameters': {},
- 'resources': {
- 'my_server': {
- 'type': 'OS::Nova::Server',
- 'properties': {
- 'flavor': 'm1.medium',
- 'user_data_format': 'SOFTWARE_CONFIG',
- 'software_config_transport':
- 'POLL_SERVER_HEAT',
- 'image': 'rhel-6.5-test-image'
- }
+ 'resources': {
+ 'my_server': {
+ 'type': 'OS::Nova::Server',
+ 'properties': {
+ 'flavor': 'm1.medium',
+ 'user_data_format': 'SOFTWARE_CONFIG',
+ 'software_config_transport': 'POLL_SERVER_HEAT',
+ 'image': 'rhel-6.5-test-image'
}
}
}
}
mock_heat_res = {
- "stack": {
- "id": 1234
- }
- }
- headers = {
- 'Content-Type': 'application/json',
- 'X-Auth-Token': 'mock_token'
+ "stacks": [
+ {
+ "id": "d648ad27-fb9c-44d1-b293-646ea6c4f8da",
+ "stack_status": "CREATE_IN_PROGRESS",
+ }
+ ]
}
class mock_response(object):
self._content = _content
mock_response_obj = mock_response(201, json.dumps(mock_heat_res))
- mock_post.return_value = mock_response_obj
- shell.main([self.template_file, self.template_type,
- "--deploy"])
- args, kwargs = mock_post.call_args
- self.assertEqual(args[0], 'http://abc.com/stacks')
- self.assertEqual(ast.literal_eval(kwargs['data']), data)
- self.assertEqual(kwargs['headers'], headers)
- except Exception:
- self.fail(self.failure_msg)
+ mock_client.return_value = mock_response_obj
+ shell.main([self.template_file, self.template_type, "--deploy"])
+ args, kwargs = mock_client.call_args
+ self.assertEqual(kwargs["stack_name"],
+ 'heat_tosca_helloworld_abcXXX')
+ self.assertEqual(kwargs["template"], data)
+ except Exception as e:
+ self.fail(e)
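One stacked-decorator subtlety the rewritten test above relies on: `mock.patch` decorators inject their mocks bottom-up, so the decorator nearest the function supplies the first mock argument. A self-contained stdlib illustration (the `svc` object is a made-up stand-in for the patched collaborators):

```python
import types
from unittest import mock

# Made-up stand-in for the collaborators patched in the real test.
svc = types.SimpleNamespace(flavors=lambda: "real-flavors",
                            images=lambda: "real-images")


@mock.patch.object(svc, "images")   # outer decorator -> second mock argument
@mock.patch.object(svc, "flavors")  # inner decorator -> first mock argument
def run(mock_flavors, mock_images):
    mock_flavors.return_value = {"m1.medium": {}}
    mock_images.return_value = {"rhel-6.5-test-image": {}}
    return svc.flavors(), svc.images()


print(run())
```

After `run` returns, both patches are automatically undone, so `svc` is left untouched for other tests.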
from toscaparser.common.exception import ExceptionCollector
from toscaparser.common.exception import URLException
from toscaparser.common.exception import ValidationError
+from toscaparser.tosca_template import ToscaTemplate
from toscaparser.utils.gettextutils import _
+from translator.common.exception import UnsupportedTypeError
from translator.common.utils import TranslationUtils
+from translator.hot.tosca_translator import TOSCATranslator
from translator.tests.base import TestCase
class ToscaHotTranslationTest(TestCase):
- def test_hot_translate_single_server(self):
- tosca_file = '../tests/data/tosca_single_server.yaml'
- hot_file = '../tests/data/hot_output/hot_single_server.yaml'
- params = {'cpus': 1}
+ def _test_successful_translation(self, tosca_file, hot_files, params=None):
+ if not params:
+ params = {}
+ if not isinstance(hot_files, list):
+ hot_files = [hot_files]
diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
+ hot_files,
params)
self.assertEqual({}, diff, '<difference> : ' +
json.dumps(diff, indent=4, separators=(', ', ': ')))
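`compare_tosca_translation_with_hot` boils down to a recursive structural diff between the generated and expected templates, with an empty dict meaning success; a minimal stdlib sketch of such a diff (the `dict_diff` function name is invented):

```python
def dict_diff(expected, actual, path=""):
    """Return {path: (expected, actual)} for every mismatched leaf."""
    diff = {}
    for k in set(expected) | set(actual):
        p = "%s/%s" % (path, k) if path else str(k)
        if k not in expected or k not in actual:
            diff[p] = (expected.get(k), actual.get(k))
        elif isinstance(expected[k], dict) and isinstance(actual[k], dict):
            diff.update(dict_diff(expected[k], actual[k], p))
        elif expected[k] != actual[k]:
            diff[p] = (expected[k], actual[k])
    return diff


print(dict_diff({"a": {"b": 1}}, {"a": {"b": 2}}))  # {'a/b': (1, 2)}
```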
+ def _test_failed_translation(self, tosca_file, hot_file, params, msg,
+ msg_path, error_raise, error_collect):
+ if msg_path:
+ path = os.path.normpath(os.path.join(
+ os.path.dirname(os.path.realpath(__file__)), tosca_file))
+ msg = msg % path
+ self.assertRaises(
+ error_raise,
+ TranslationUtils.compare_tosca_translation_with_hot,
+ tosca_file, hot_file, params)
+ ExceptionCollector.assertExceptionMessage(error_collect, msg)
+
+ def test_hot_translate_single_server(self):
+ tosca_file = '../tests/data/tosca_single_server.yaml'
+ hot_file = '../tests/data/hot_output/hot_single_server.yaml'
+ params = {'cpus': 1}
+ self._test_successful_translation(tosca_file, hot_file, params)
+
def test_hot_translate_single_server_with_defaults(self):
tosca_file = \
'../tests/data/tosca_single_server_with_defaults.yaml'
+
hot_file_with_input = '../tests/data/hot_output/' \
'hot_single_server_with_defaults_with_input.yaml'
- hot_file_without_input = '../tests/data/hot_output/' \
- 'hot_single_server_with_defaults_without_input.yaml'
-
params1 = {'cpus': '1'}
- diff1 = TranslationUtils.compare_tosca_translation_with_hot(
- tosca_file, hot_file_with_input, params1)
- self.assertEqual({}, diff1, '<difference> : ' +
- json.dumps(diff1, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file_with_input,
+ params1)
+ hot_file_without_input = '../tests/data/hot_output/' \
+ 'hot_single_server_with_defaults_without_input.yaml'
params2 = {}
- diff2 = TranslationUtils.compare_tosca_translation_with_hot(
- tosca_file, hot_file_without_input, params2)
- self.assertEqual({}, diff2, '<difference> : ' +
- json.dumps(diff2, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file_without_input,
+ params2)
def test_hot_translate_wordpress_single_instance(self):
tosca_file = '../tests/data/tosca_single_instance_wordpress.yaml'
'db_root_pwd': 'passw0rd',
'db_port': 3366,
'cpus': 8}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_helloworld(self):
tosca_file = '../tests/data/tosca_helloworld.yaml'
hot_file = '../tests/data/hot_output/hot_hello_world.yaml'
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- {})
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file)
def test_hot_translate_host_assignment(self):
tosca_file = '../tests/data/test_host_assignment.yaml'
hot_file = '../tests/data/hot_output/hot_host_assignment.yaml'
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- {})
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file)
def test_hot_translate_elk(self):
tosca_file = '../tests/data/tosca_elk.yaml'
params = {'github_url':
'http://github.com/paypal/rest-api-sample-app-nodejs.git',
'my_cpus': 4}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_nodejs_mongodb_two_instances(self):
tosca_file = '../tests/data/tosca_nodejs_mongodb_two_instances.yaml'
params = {'github_url':
'http://github.com/paypal/rest-api-sample-app-nodejs.git',
'my_cpus': 4}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_blockstorage_with_attachment(self):
tosca_file = '../tests/data/storage/' \
'storage_location': '/dev/vdc',
'storage_size': '2000 MB',
'storage_snapshot_id': 'ssid'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_blockstorage_with_custom_relationship_type(self):
tosca_file = '../tests/data/storage/' \
'storage_location': '/dev/vdc',
'storage_size': '1 GB',
'storage_snapshot_id': 'ssid'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_blockstorage_with_relationship_template(self):
tosca_file = '../tests/data/storage/' \
params = {'cpus': 1,
'storage_location': '/dev/vdc',
'storage_size': '1 GB'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_blockstorage_with_attachment_notation1(self):
tosca_file = '../tests/data/storage/' \
'storage_location': 'some_folder',
'storage_size': '1 GB',
'storage_snapshot_id': 'ssid'}
- diff1 = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file1,
- params)
+
try:
- self.assertEqual({}, diff1, '<difference> : ' +
- json.dumps(diff1, indent=4,
- separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file1, params)
except Exception:
- diff2 = TranslationUtils.compare_tosca_translation_with_hot(
- tosca_file, hot_file2, params)
- self.assertEqual({}, diff2, '<difference> : ' +
- json.dumps(diff2, indent=4,
- separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file2, params)
def test_hot_translate_blockstorage_with_attachment_notation2(self):
tosca_file = '../tests/data/storage/' \
'storage_location': '/dev/vdc',
'storage_size': '1 GB',
'storage_snapshot_id': 'ssid'}
- diff1 = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file1,
- params)
try:
- self.assertEqual({}, diff1, '<difference> : ' +
- json.dumps(diff1, indent=4,
- separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file1, params)
except Exception:
- diff2 = TranslationUtils.compare_tosca_translation_with_hot(
- tosca_file, hot_file2, params)
- self.assertEqual({}, diff2, '<difference> : ' +
- json.dumps(diff2, indent=4,
- separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file2, params)
def test_hot_translate_multiple_blockstorage_with_attachment(self):
tosca_file = '../tests/data/storage/' \
'storage_location': '/dev/vdc',
'storage_size': '1 GB',
'storage_snapshot_id': 'ssid'}
- diff1 = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file1,
- params)
try:
- self.assertEqual({}, diff1, '<difference> : ' +
- json.dumps(diff1, indent=4,
- separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file1, params)
except Exception:
- diff2 = TranslationUtils.compare_tosca_translation_with_hot(
- tosca_file, hot_file2, params)
- self.assertEqual({}, diff2, '<difference> : ' +
- json.dumps(diff2, indent=4,
- separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file2, params)
def test_hot_translate_single_object_store(self):
tosca_file = '../tests/data/storage/tosca_single_object_store.yaml'
hot_file = '../tests/data/hot_output/hot_single_object_store.yaml'
params = {'objectstore_name': 'myobjstore'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_one_server_one_network(self):
tosca_file = '../tests/data/network/tosca_one_server_one_network.yaml'
hot_file = '../tests/data/hot_output/network/' \
'hot_one_server_one_network.yaml'
params = {'network_name': 'private_net'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_server_on_existing_network(self):
tosca_file = '../tests/data/network/' \
hot_file = '../tests/data/hot_output/network/' \
'hot_server_on_existing_network.yaml'
params = {'network_name': 'private_net'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_two_servers_one_network(self):
tosca_file = '../tests/data/network/tosca_two_servers_one_network.yaml'
'network_cidr': '10.0.0.0/24',
'network_start_ip': '10.0.0.100',
'network_end_ip': '10.0.0.150'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_one_server_three_networks(self):
tosca_file = '../tests/data/network/' \
hot_file = '../tests/data/hot_output/network/' \
'hot_one_server_three_networks.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_software_component(self):
tosca_file = '../tests/data/tosca_software_component.yaml'
hot_file = '../tests/data/hot_output/hot_software_component.yaml'
params = {'cpus': '1',
'download_url': 'http://www.software.com/download'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
+
+ def test_hot_translate_software_component_multiple_hosts(self):
+ tosca_file = '../tests/data/tosca_software_component'\
+ '_multiple_hosts.yaml'
+ hot_file = '../tests/data/hot_output/hot_software_component'\
+ '_multiple_hosts.yaml'
+ params = {'cpus': '1',
+ 'download_url': 'http://www.software.com/download'}
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_web_application(self):
tosca_file = '../tests/data/tosca_web_application.yaml'
hot_file = '../tests/data/hot_output/hot_web_application.yaml'
params = {'cpus': '2', 'context_root': 'my_web_app'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_template_with_url_import(self):
tosca_file = '../tests/data/' \
'db_root_pwd': 'passw0rd',
'db_port': 3366,
'cpus': 8}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_template_by_url_with_local_import(self):
tosca_file = 'https://raw.githubusercontent.com/openstack/' \
'db_root_pwd': 'passw0rd',
'db_port': 3366,
'cpus': 8}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_template_by_url_with_local_abspath_import(self):
tosca_file = 'https://raw.githubusercontent.com/openstack/' \
'db_root_pwd': 'passw0rd',
'db_port': 3366,
'cpus': 8}
-
- self.assertRaises(
- ValidationError,
- TranslationUtils.compare_tosca_translation_with_hot,
- tosca_file, hot_file, params)
expected_msg = _('Absolute file name "/tmp/wordpress.yaml" cannot be '
'used in a URL-based input template "https://raw.'
'githubusercontent.com/openstack/heat-translator/'
'master/translator/tests/data/tosca_single_instance_'
'wordpress_with_local_abspath_import.yaml".')
- ExceptionCollector.assertExceptionMessage(ImportError, expected_msg)
+ msg_path = False
+ self._test_failed_translation(tosca_file, hot_file, params,
+ expected_msg, msg_path, ValidationError,
+ ImportError)
def test_hot_translate_template_by_url_with_url_import(self):
tosca_url = 'https://raw.githubusercontent.com/openstack/' \
'db_root_pwd': 'passw0rd',
'db_port': 3366,
'cpus': 8}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_url,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_url, hot_file, params)
def test_translate_hello_world_csar(self):
tosca_file = '../tests/data/csar_hello_world.zip'
hot_file = '../tests/data/hot_output/hot_hello_world.yaml'
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- {})
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file)
def test_translate_single_instance_wordpress_csar(self):
tosca_file = '../tests/data/csar_single_instance_wordpress.zip'
'db_root_pwd': 'passw0rd',
'db_port': 3366,
'cpus': 8}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_translate_elk_csar_from_url(self):
tosca_file = 'https://github.com/openstack/heat-translator/raw/' \
params = {'github_url':
'http://github.com/paypal/rest-api-sample-app-nodejs.git',
'my_cpus': 4}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_translate_csar_not_zip(self):
tosca_file = '../tests/data/csar_not_zip.zip'
hot_file = ''
params = {}
-
- self.assertRaises(
- ValidationError,
- TranslationUtils.compare_tosca_translation_with_hot,
- tosca_file, hot_file, params)
- path = os.path.normpath(os.path.join(
- os.path.dirname(os.path.realpath(__file__)), tosca_file))
- expected_msg = _('"%s" is not a valid zip file.') % path
- ExceptionCollector.assertExceptionMessage(ValidationError,
- expected_msg)
+ expected_msg = _('"%s" is not a valid zip file.')
+ msg_path = True
+ self._test_failed_translation(tosca_file, hot_file, params,
+ expected_msg, msg_path, ValidationError,
+ ValidationError)
def test_translate_csar_metadata_not_yaml(self):
tosca_file = '../tests/data/csar_metadata_not_yaml.zip'
hot_file = ''
params = {}
-
- self.assertRaises(
- ValidationError,
- TranslationUtils.compare_tosca_translation_with_hot,
- tosca_file, hot_file, params)
- path = os.path.normpath(os.path.join(
- os.path.dirname(os.path.realpath(__file__)), tosca_file))
expected_msg = _('The file "TOSCA-Metadata/TOSCA.meta" in the CSAR '
- '"%s" does not contain valid YAML content.') % path
- ExceptionCollector.assertExceptionMessage(ValidationError,
- expected_msg)
+ '"%s" does not contain valid YAML content.')
+ msg_path = True
+ self._test_failed_translation(tosca_file, hot_file, params,
+ expected_msg, msg_path, ValidationError,
+ ValidationError)
def test_translate_csar_wrong_metadata_file(self):
tosca_file = '../tests/data/csar_wrong_metadata_file.zip'
hot_file = ''
params = {}
-
- self.assertRaises(
- ValidationError,
- TranslationUtils.compare_tosca_translation_with_hot,
- tosca_file, hot_file, params)
- path = os.path.normpath(os.path.join(
- os.path.dirname(os.path.realpath(__file__)), tosca_file))
expected_msg = _('"%s" is not a valid CSAR as it does not contain the '
'required file "TOSCA.meta" in the folder '
- '"TOSCA-Metadata".') % path
- ExceptionCollector.assertExceptionMessage(ValidationError,
- expected_msg)
+ '"TOSCA-Metadata".')
+ msg_path = True
+ self._test_failed_translation(tosca_file, hot_file, params,
+ expected_msg, msg_path, ValidationError,
+ ValidationError)
def test_translate_csar_wordpress_invalid_import_path(self):
tosca_file = '../tests/data/csar_wordpress_invalid_import_path.zip'
hot_file = ''
params = {}
-
- self.assertRaises(
- ValidationError,
- TranslationUtils.compare_tosca_translation_with_hot,
- tosca_file, hot_file, params)
expected_msg = _('Import '
'"Invalid_import_path/wordpress.yaml" is not valid.')
- ExceptionCollector.assertExceptionMessage(ImportError, expected_msg)
+ msg_path = False
+ self._test_failed_translation(tosca_file, hot_file, params,
+ expected_msg, msg_path, ValidationError,
+ ImportError)
def test_translate_csar_wordpress_invalid_script_url(self):
tosca_file = '../tests/data/csar_wordpress_invalid_script_url.zip'
hot_file = ''
params = {}
-
- self.assertRaises(
- ValidationError,
- TranslationUtils.compare_tosca_translation_with_hot,
- tosca_file, hot_file, params)
expected_msg = _('The resource at '
'"https://raw.githubusercontent.com/openstack/'
'heat-translator/master/translator/tests/data/'
'custom_types/wordpress1.yaml" cannot be accessed.')
- ExceptionCollector.assertExceptionMessage(URLException, expected_msg)
+ msg_path = False
+ self._test_failed_translation(tosca_file, hot_file, params,
+ expected_msg, msg_path, ValidationError,
+ URLException)
def test_hot_translate_flavor_image(self):
tosca_file = '../tests/data/test_tosca_flavor_and_image.yaml'
hot_file = '../tests/data/hot_output/hot_flavor_and_image.yaml'
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- {})
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file)
def test_hot_translate_flavor_image_params(self):
tosca_file = '../tests/data/test_tosca_flavor_and_image.yaml'
hot_file = '../tests/data/hot_output/hot_flavor_and_image_params.yaml'
params = {'key_name': 'paramkey'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_custom_type(self):
tosca_file = '../tests/data/test_tosca_custom_type.yaml'
hot_file = '../tests/data/hot_output/' \
'hot_custom_type.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_custom_type_with_override(self):
tosca_file = '../tests/data/test_tosca_custom_type_with_override.yaml'
hot_file = '../tests/data/hot_output/' \
'hot_custom_type_with_override.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_custom_type_with_param_override(self):
tosca_file = '../tests/data/test_tosca_custom_type_with_override.yaml'
hot_file = '../tests/data/hot_output/' \
'hot_custom_type_with_param_override.yaml'
params = {'install_path': '/home/custom/from/cli'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_artifact(self):
tosca_file = '../tests/data/test_tosca_artifact.yaml'
hot_file = '../tests/data/hot_output/' \
'hot_artifact.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_without_tosca_os_version(self):
tosca_file = '../tests/data/' \
hot_file = '../tests/data/hot_output/' \
'hot_single_server_without_tosca_os_version.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_helloworld_with_userkey(self):
tosca_file = '../tests/data/tosca_helloworld.yaml'
hot_file = '../tests/data/hot_output/hot_hello_world_userkey.yaml'
params = {'key_name': 'userkey'}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_custom_networks_nodes_inline(self):
tosca_file = '../tests/data/network/' \
hot_file = '../tests/data/hot_output/network/' \
'hot_custom_network_nodes.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_custom_networks_nodes_imports(self):
tosca_file = '../tests/data/network/' \
hot_file = '../tests/data/hot_output/network/' \
'hot_custom_network_nodes.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
def test_hot_translate_nfv_sample(self):
- tosca_file = '../tests/data/test_tosca_nfv_sample.yaml'
- hot_file = '../tests/data/hot_output/hot_nfv_sample.yaml'
+ tosca_file = '../tests/data/nfv/test_tosca_nfv_sample.yaml'
+ hot_file = '../tests/data/hot_output/nfv/hot_nfv_sample.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)

def test_hot_translate_policy(self):
- tosca_file = '../tests/data/tosca_policies.yaml'
- hot_file = '../tests/data/hot_output/hot_policies.yaml'
+ tosca_file = '../tests/data/policies/tosca_policies.yaml'
+ hot_file = '../tests/data/hot_output/policies/hot_policies.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)

def test_hot_script_types(self):
- tosca_file = '../tests/data/test_tosca_script_types.yaml'
+ tosca_file = '../tests/data/interfaces/test_tosca_script_types.yaml'
hot_file = '../tests/data/hot_output/hot_script_types.yaml'
params = {}
- diff = TranslationUtils.compare_tosca_translation_with_hot(tosca_file,
- hot_file,
- params)
- self.assertEqual({}, diff, '<difference> : ' +
- json.dumps(diff, indent=4, separators=(', ', ': ')))
+ self._test_successful_translation(tosca_file, hot_file, params)
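The repeated `-`/`+` hunks above fold the same four-line compare-and-assert sequence into one shared helper. A minimal, self-contained sketch of that pattern is shown below; `compare_translation` is a stub standing in for `TranslationUtils.compare_tosca_translation_with_hot` (an assumption here, since that utility lives elsewhere in the test tree), and the real `_test_successful_translation` in the test base class may differ in detail:

```python
import json
import unittest


def compare_translation(tosca_file, hot_file, params):
    # Stub for TranslationUtils.compare_tosca_translation_with_hot
    # (assumption): returns an empty dict when the translated TOSCA
    # template matches the expected HOT output.
    return {}


class TranslationTestBase(unittest.TestCase):
    def _test_successful_translation(self, tosca_file, hot_file, params):
        # An empty diff dict means the generated HOT matched the
        # expected file; otherwise the diff is dumped in the message.
        diff = compare_translation(tosca_file, hot_file, params)
        self.assertEqual({}, diff, '<difference> : ' +
                         json.dumps(diff, indent=4,
                                    separators=(', ', ': ')))
```

Each test method then shrinks to a single call, so adding a new translation case only requires naming the template pair and its parameters.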
+
+ def test_hot_interface_on_compute(self):
+ tosca_file = '../tests/data/interfaces/' \
+ 'test_tosca_interface_on_compute.yaml'
+ hot_file = '../tests/data/hot_output/interfaces/' \
+ 'hot_interface_on_compute.yaml'
+ params = {}
+ self._test_successful_translation(tosca_file, hot_file, params)
+
+ def test_hot_get_functions_semantic(self):
+ tosca_file = '../tests/data/test_tosca_get_functions_semantic.yaml'
+ hot_file = '../tests/data/hot_output/hot_get_functions_semantic.yaml'
+ params = {}
+ self._test_successful_translation(tosca_file, hot_file, params)
+
+ def test_hot_exchange_public_ssh_key(self):
+ tosca_file = '../tests/data/tosca_exchange_public_ssh_key.yaml'
+ hot_file = '../tests/data/hot_output/hot_exchange_public_ssh_key.yaml'
+ params = {}
+ self._test_successful_translation(tosca_file, hot_file, params)
+
+ def test_hot_translate_scaling_policy(self):
+ tosca_file = '../tests/data/autoscaling/tosca_autoscaling.yaml'
+ hot_files = [
+ '../tests/data/hot_output/autoscaling/hot_autoscaling.yaml',
+ '../tests/data/hot_output/autoscaling/asg_res.yaml',
+ ]
+ params = {}
+ self._test_successful_translation(tosca_file, hot_files, params)
+
+ def test_translate_unsupported_tosca_type(self):
+ tosca_file = '../tests/data/test_tosca_unsupported_type.yaml'
+ tosca_tpl = os.path.normpath(os.path.join(
+ os.path.dirname(os.path.abspath(__file__)), tosca_file))
+ params = {}
+ expected_msg = _('Type "tosca.nodes.LoadBalancer" is valid TOSCA '
+ 'type but translation support is not yet available.')
+ tosca = ToscaTemplate(tosca_tpl, params, True)
+ err = self.assertRaises(UnsupportedTypeError,
+ TOSCATranslator(tosca, params)
+ .translate)
+ self.assertEqual(expected_msg, err.__str__())
+
+ def _translate_nodetemplates(self):
+ tosca_file = '../tests/data/autoscaling/tosca_cluster_autoscaling.yaml'
+ hot_file = '../tests/data/hot_output/autoscaling/' \
+ 'hot_cluster_autoscaling.yaml'
+ params = {}
+ self._test_successful_translation(tosca_file, hot_file, params)
+
+ def test_hot_translate_nfv_scaling(self):
+ tosca_file = '../tests/data/nfv/test_tosca_nfv_autoscaling.yaml'
+ hot_files = [
+ '../tests/data/hot_output/nfv/hot_tosca_nfv_autoscaling.yaml',
+ '../tests/data/hot_output/nfv/SP1_res.yaml',
+ ]
+ params = {}
+ self._test_successful_translation(tosca_file, hot_files, params)
self.assertFalse(self.UrlUtils.validate_url("github.com"))
self.assertFalse(self.UrlUtils.validate_url("123"))
self.assertFalse(self.UrlUtils.validate_url("a/b/c"))
+
+ def test_get_dict_value(self):
+ single_snippet = \
+ {'nodejs_create_config':
+ {'type': 'tosca.nodes.SoftwareConfig',
+ 'properties':
+ {'config':
+ {'get_file': 'create.sh'}}}}
+ actual_output_single_snippet = []
+ ex_output_single_snippet = ['create.sh']
+ translator.common.utils.get_dict_value(single_snippet, "get_file",
+ actual_output_single_snippet)
+ self.assertEqual(actual_output_single_snippet,
+ ex_output_single_snippet)
+ multi_snippet = \
+ {'resources':
+ {'nodejs_create_config':
+ {'type': 'tosca.nodes.SoftwareConfig',
+ 'properties':
+ {'config':
+ {'get_file': 'nodejs/create.sh'}}},
+ 'mongodb_create_config':
+ {'type': 'tosca.nodes.SoftwareConfig',
+ 'properties':
+ {'config':
+ {'get_file': 'mongodb/create.sh'}}}}}
+
+ actual_output_multi_snippet = []
+ ex_output_multi_snippet = ['mongodb/create.sh',
+ 'nodejs/create.sh']
+ translator.common.utils.get_dict_value(multi_snippet, "get_file",
+ actual_output_multi_snippet)
+ self.assertEqual(sorted(actual_output_multi_snippet),
+ ex_output_multi_snippet)
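The `test_get_dict_value` additions above exercise a recursive lookup: walk a nested dict and collect every value stored under a requested key into the supplied list. A hedged sketch of that behavior follows; the real `translator.common.utils.get_dict_value` may differ in detail:

```python
def get_dict_value(data, key, results):
    """Recursively gather all values mapped to `key` in nested dicts."""
    if isinstance(data, dict):
        for k, v in data.items():
            if k == key:
                # Collect the value and stop descending on this branch.
                results.append(v)
            else:
                # Recurse into nested dicts to find deeper matches.
                get_dict_value(v, key, results)
    return results


snippet = {'resources': {'cfg': {'properties': {'get_file': 'create.sh'}}}}
found = []
get_dict_value(snippet, 'get_file', found)
print(found)  # ['create.sh']
```

Note that the multi-snippet test sorts the collected list before comparing, since dict traversal order is not part of the contract being tested.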