1. Save the following YAML to dpdk.yaml.
-::
- apiVersion: v1
- kind: Pod
- metadata:
- name: dpdk
- spec:
- nodeSelector:
- beta.kubernetes.io/arch: arm64
- containers:
- - name: dpdk
- image: younglook/dpdk:arm64
- command: [ "bash", "-c", "/usr/bin/l2fwd --huge-unlink -l 6-7 -n 4 --file-prefix=container -- -p 3" ]
- stdin: true
- tty: true
- securityContext:
- privileged: true
- volumeMounts:
- - mountPath: /dev/vfio
- name: vfio
- - mountPath: /mnt/huge
- name: huge
- volumes:
- - name: vfio
- hostPath:
- path: /dev/vfio
- - name: huge
- hostPath:
- path: /mnt/huge
+ .. code-block:: yaml
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: dpdk
+ spec:
+ nodeSelector:
+ beta.kubernetes.io/arch: arm64
+ containers:
+ - name: dpdk
+ image: younglook/dpdk:arm64
+ command: [ "bash", "-c", "/usr/bin/l2fwd --huge-unlink -l 6-7 -n 4 --file-prefix=container -- -p 3" ]
+ stdin: true
+ tty: true
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - mountPath: /dev/vfio
+ name: vfio
+ - mountPath: /mnt/huge
+ name: huge
+ volumes:
+ - name: vfio
+ hostPath:
+ path: /dev/vfio
+ - name: huge
+ hostPath:
+ path: /mnt/huge
2. Create Pod
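+
+The pod can then be created from the saved file; a minimal sketch, assuming
+kubectl is configured against the target cluster:
+
+ .. code-block:: bash
+
+      kubectl create -f dpdk.yaml
+      kubectl get pod dpdk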
Introduction
============
-.. _sriov_cni: https://github.com/hustcat/sriov-cni
-.. _Flannel: https://github.com/coreos/flannel
-.. _Multus: https://github.com/Intel-Corp/multus-cni
-.. _cni: https://github.com/containernetworking/cni
-.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
-.. _k8s-crd: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
-.. _arm64: https://github.com/kubernetes/website/pull/6511
-.. _files: https://github.com/kubernetes/website/pull/6511/files
+.. _sriov_cni: https://github.com/hustcat/sriov-cni
+.. _Flannel: https://github.com/coreos/flannel
+.. _Multus: https://github.com/Intel-Corp/multus-cni
+.. _cni-description: https://github.com/containernetworking/cni
+.. _kubeadm: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
+.. _k8s-crd: https://kubernetes.io/docs/concepts/api-extension/custom-resources/
+.. _arm64: https://github.com/kubernetes/website/pull/6511
+.. _files: https://github.com/kubernetes/website/pull/6511/files
In some cases we need to deploy multiple network interfaces
here we name it rbac.yaml:
-::
- apiVersion: rbac.authorization.k8s.io/v1beta1
- kind: ClusterRoleBinding
- metadata:
- name: fabric8-rbac
- subjects:
- - kind: ServiceAccount
- # Reference to upper's `metadata.name`
- name: default
- # Reference to upper's `metadata.namespace`
- namespace: default
- roleRef:
- kind: ClusterRole
- name: cluster-admin
- apiGroup: rbac.authorization.k8s.io
+ .. code-block:: yaml
+
+ apiVersion: rbac.authorization.k8s.io/v1beta1
+ kind: ClusterRoleBinding
+ metadata:
+ name: fabric8-rbac
+ subjects:
+ - kind: ServiceAccount
+      # Refers to the ServiceAccount's `metadata.name`
+ name: default
+      # Refers to the ServiceAccount's `metadata.namespace`
+ namespace: default
+ roleRef:
+ kind: ClusterRole
+ name: cluster-admin
+ apiGroup: rbac.authorization.k8s.io
command:
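+
+A sketch of the creation step, assuming the file name used above and a
+configured kubectl:
+
+ .. code-block:: bash
+
+      kubectl create -f rbac.yaml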
Here we name it crdnetwork.yaml:
-::
- apiVersion: apiextensions.k8s.io/v1beta1
- kind: CustomResourceDefinition
- metadata:
- # name must match the spec fields below, and be in the form: <plural>.<group>
- name: networks.kubernetes.com
- spec:
- # group name to use for REST API: /apis/<group>/<version>
- group: kubernetes.com
- # version name to use for REST API: /apis/<group>/<version>
- version: v1
- # either Namespaced or Cluster
- scope: Namespaced
- names:
- # plural name to be used in the URL: /apis/<group>/<version>/<plural>
- plural: networks
- # singular name to be used as an alias on the CLI and for display
- singular: network
- # kind is normally the CamelCased singular type. Your resource manifests use this.
- kind: Network
- # shortNames allow shorter string to match your resource on the CLI
- shortNames:
- - net
+ .. code-block:: yaml
+
+ apiVersion: apiextensions.k8s.io/v1beta1
+ kind: CustomResourceDefinition
+ metadata:
+ # name must match the spec fields below, and be in the form: <plural>.<group>
+ name: networks.kubernetes.com
+ spec:
+ # group name to use for REST API: /apis/<group>/<version>
+ group: kubernetes.com
+ # version name to use for REST API: /apis/<group>/<version>
+ version: v1
+ # either Namespaced or Cluster
+ scope: Namespaced
+ names:
+ # plural name to be used in the URL: /apis/<group>/<version>/<plural>
+ plural: networks
+ # singular name to be used as an alias on the CLI and for display
+ singular: network
+ # kind is normally the CamelCased singular type. Your resource manifests use this.
+ kind: Network
+ # shortNames allow shorter string to match your resource on the CLI
+ shortNames:
+ - net
command:
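+
+The CRD can then be registered and verified; a sketch, assuming the file
+name used above:
+
+ .. code-block:: bash
+
+      kubectl create -f crdnetwork.yaml
+      kubectl get crd networks.kubernetes.com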
Here we name it flannel-network.yaml:
-::
- apiVersion: "kubernetes.com/v1"
- kind: Network
- metadata:
- name: flannel-conf
- plugin: flannel
- args: '[
- {
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": true
- }
- }
- ]'
+ .. code-block:: yaml
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: flannel-conf
+ plugin: flannel
+ args: '[
+ {
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }
+ ]'
command:
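+
+Creating the network object is analogous (a sketch; the "net" short name
+comes from the CRD registered above):
+
+ .. code-block:: bash
+
+      kubectl create -f flannel-network.yaml
+      kubectl get net flannel-conf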
Here we name it sriov-network.yaml:
-::
- apiVersion: "kubernetes.com/v1"
- kind: Network
- metadata:
- name: sriov-conf
- plugin: sriov
- args: '[
- {
- "master": "eth1",
- "pfOnly": true,
- "ipam": {
- "type": "host-local",
- "subnet": "192.168.123.0/24",
- "rangeStart": "192.168.123.2",
- "rangeEnd": "192.168.123.10",
- "routes": [
- { "dst": "0.0.0.0/0" }
- ],
- "gateway": "192.168.123.1"
- }
- }
- ]'
+ .. code-block:: yaml
+
+ apiVersion: "kubernetes.com/v1"
+ kind: Network
+ metadata:
+ name: sriov-conf
+ plugin: sriov
+ args: '[
+ {
+ "master": "eth1",
+ "pfOnly": true,
+ "ipam": {
+ "type": "host-local",
+ "subnet": "192.168.123.0/24",
+ "rangeStart": "192.168.123.2",
+ "rangeEnd": "192.168.123.10",
+ "routes": [
+ { "dst": "0.0.0.0/0" }
+ ],
+ "gateway": "192.168.123.1"
+ }
+ }
+ ]'
command:
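+
+And similarly for the SR-IOV network object (a sketch):
+
+ .. code-block:: bash
+
+      kubectl create -f sriov-network.yaml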
CNI Installation
================
.. _CNI: https://github.com/containernetworking/plugins
-Firstly, we should deploy all CNI plugins. The build process is following:
+First, we should deploy all CNI plugins. The build process is as follows:
::
git clone https://github.com/containernetworking/plugins.git
CNIs are put.
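+
+A sketch of the remaining build-and-install steps, assuming the repository's
+build.sh script and the default /opt/cni/bin plugin directory:
+
+ .. code-block:: bash
+
+      cd plugins
+      ./build.sh
+      cp bin/* /opt/cni/bin/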
.. _SRIOV: https://github.com/hustcat/sriov-cni
+
Its build process is as follows:
::
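+      # Hypothetical steps; consult the sriov-cni README for the exact process.
+      git clone https://github.com/hustcat/sriov-cni.git
+      cd sriov-cni
+      ./build.sh
+      cp bin/* /opt/cni/bin/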
as multus-cni.conf:
-::
- {
- "name": "minion-cni-network",
- "type": "multus",
- "kubeconfig": "/etc/kubernetes/admin.conf",
- "delegates": [{
- "type": "flannel",
- "masterplugin": true,
- "delegate": {
- "isDefaultGateway": true
- }
- }]
- }
+ .. code-block:: json
+
+ {
+ "name": "minion-cni-network",
+ "type": "multus",
+ "kubeconfig": "/etc/kubernetes/admin.conf",
+ "delegates": [{
+ "type": "flannel",
+ "masterplugin": true,
+ "delegate": {
+ "isDefaultGateway": true
+ }
+ }]
+ }
command:
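+
+A sketch of installing this configuration, assuming the default CNI
+configuration directory:
+
+ .. code-block:: bash
+
+      cp multus-cni.conf /etc/cni/net.d/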
In this case the flannel-conf network object acts as the primary network.
-::
- apiVersion: v1
- kind: Pod
- metadata:
- name: pod-sriov
- annotations:
- networks: '[
- { "name": "flannel-conf" },
- { "name": "sriov-conf" }
- ]'
- spec: # specification of the pod's contents
- containers:
- - name: pod-sriov
- image: "busybox"
- command: ["top"]
- stdin: true
- tty: true
+ .. code-block:: yaml
+
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: pod-sriov
+ annotations:
+ networks: '[
+ { "name": "flannel-conf" },
+ { "name": "sriov-conf" }
+ ]'
+ spec: # specification of the pod's contents
+ containers:
+ - name: pod-sriov
+ image: "busybox"
+ command: ["top"]
+ stdin: true
+ tty: true
2. Create Pod
=====================
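+
+A sketch of creating the pod, assuming the YAML above was saved as
+pod-sriov.yaml:
+
+ .. code-block:: bash
+
+      kubectl create -f pod-sriov.yaml
+
+The interfaces inside the pod can then be inspected: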
-::
- # kubectl exec pod-sriov -- ip a
- 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- valid_lft forever preferred_lft forever
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
- 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
- link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
- inet 10.233.64.42/24 scope global eth0
- valid_lft forever preferred_lft forever
- inet6 fe80::8e6:32ff:fed3:7645/64 scope link
- valid_lft forever preferred_lft forever
- 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
- link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
- inet 192.168.123.2/24 scope global net0
- valid_lft forever preferred_lft forever
- inet6 fe80::5054:ff:fed4:d2e5/64 scope link
- valid_lft forever preferred_lft forever
+ .. code-block:: bash
+
+ # kubectl exec pod-sriov -- ip a
+ 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+ 3: eth0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
+ link/ether 0a:58:0a:e9:40:2a brd ff:ff:ff:ff:ff:ff
+ inet 10.233.64.42/24 scope global eth0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::8e6:32ff:fed3:7645/64 scope link
+ valid_lft forever preferred_lft forever
+ 4: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
+ link/ether 52:54:00:d4:d2:e5 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.123.2/24 scope global net0
+ valid_lft forever preferred_lft forever
+ inet6 fe80::5054:ff:fed4:d2e5/64 scope link
+ valid_lft forever preferred_lft forever
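+
+As a quick connectivity check, the gateway configured in sriov-conf can be
+pinged over the SR-IOV interface (a sketch, assuming the gateway answers
+pings):
+
+ .. code-block:: bash
+
+      kubectl exec pod-sriov -- ping -c 3 192.168.123.1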
Contacts
========
image to start the Flannel service.
.. image:: images/multi_flannel_intfs.PNG
- :alt: 2 Flannel interfaces deployment scenario
- :figclass: align-center
+ :width: 800px
+ :alt: 2 Flannel interfaces deployment scenario
- Fig 1. Multiple Flannel interfaces deployment architecture
+Fig 1. Multiple Flannel interfaces deployment architecture
.. _Etcd: https://coreos.com/etcd/
.. include:: files/kube-2flannels.yml
:literal:
- kube-2flannels.yml
+kube-2flannels.yml
ConfigMap Added
includes a new net-conf.json that differs from the 1st:
-::
- net-conf.json: |
- {
- "Network": "10.3.0.0/16",
- "Backend": {
- "Type": "udp",
- "Port": 8286
-    }
+ .. code-block:: yaml
+
+      net-conf.json: |
+        {
+          "Network": "10.3.0.0/16",
+          "Backend": {
+            "Type": "udp",
+            "Port": 8286
+          }
+        }
2nd Flannel Container Added
For the 2nd Flannel container, we use the following command:
-::
- - name: kube-flannel2
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--subnet-file=/run/flannel/subnet2.env" ]
+ .. code-block:: yaml
+
+ - name: kube-flannel2
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--subnet-file=/run/flannel/subnet2.env" ]
which outputs the subnet file to /run/flannel/subnet2.env for the 2nd Flannel CNI to use.
We also mount the 2nd Flannel ConfigMap at /etc/kube-flannel/ for the 2nd Flanneld container process:
-::
- volumeMounts:
- - name: run
- mountPath: /run
- - name: flannel-cfg2
- mountPath: /etc/kube-flannel/
+ .. code-block:: yaml
+
+ volumeMounts:
+ - name: run
+ mountPath: /run
+ - name: flannel-cfg2
+ mountPath: /etc/kube-flannel/
CNI Configuration
as 10-2flannels.conf:
-::
- {
- "name": "flannel-networks",
- "type": "multus",
- "delegates": [
- {
- "type": "flannel",
- "name": "flannel.2",
- "subnetFile": "/run/flannel/subnet2.env",
- "dataDir": "/var/lib/cni/flannel/2",
- "delegate": {
- "bridge": "kbr1",
- "isDefaultGateway": false
- }
- },
- {
- "type": "flannel",
- "name": "flannel.1",
- "subnetFile": "/run/flannel/subnet.env",
- "dataDir": "/var/lib/cni/flannel",
- "masterplugin": true,
- "delegate": {
- "bridge": "kbr0",
- "isDefaultGateway": true
- }
- }
- ]
- }
+ .. code-block:: json
+
+ {
+ "name": "flannel-networks",
+ "type": "multus",
+ "delegates": [
+ {
+ "type": "flannel",
+ "name": "flannel.2",
+ "subnetFile": "/run/flannel/subnet2.env",
+ "dataDir": "/var/lib/cni/flannel/2",
+ "delegate": {
+ "bridge": "kbr1",
+ "isDefaultGateway": false
+ }
+ },
+ {
+ "type": "flannel",
+ "name": "flannel.1",
+ "subnetFile": "/run/flannel/subnet.env",
+ "dataDir": "/var/lib/cni/flannel",
+ "masterplugin": true,
+ "delegate": {
+ "bridge": "kbr0",
+ "isDefaultGateway": true
+ }
+ }
+ ]
+ }
The 2nd Flannel CNI uses the subnet file /run/flannel/subnet2.env, generated by the 2nd Flanneld process,
instead of the default /run/flannel/subnet.env, and outputs its subnet data to the directory:
backend:
-::
- ...
- containers:
- - name: kube-flannel
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network" ]
- securityContext:
- privileged: true
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run
- - name: flannel-cfg
- mountPath: /etc/kube-flannel/
+ .. code-block:: yaml
+
+ ...
+ containers:
+ - name: kube-flannel
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network" ]
+ securityContext:
+ privileged: true
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
Here, as we don't use the "--kube-subnet-mgr" option, the last 2 lines of
-::
- - name: flannel-cfg
+ .. code-block:: yaml
+
+ - name: flannel-cfg
mountPath: /etc/kube-flannel/
can be ignored.
the 1st Flanneld container:
-::
- containers:
- - name: kube-flannel2
- image: quay.io/coreos/flannel:v0.8.0-arm64
- command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network2", "--subnet-file=/run/flannel/subnet2.env" ]
- securityContext:
- privileged: true
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- volumeMounts:
- - name: run
- mountPath: /run
+ .. code-block:: yaml
+
+ containers:
+ - name: kube-flannel2
+ image: quay.io/coreos/flannel:v0.8.0-arm64
+ command: [ "/opt/bin/flanneld", "--ip-masq", "--etcd-endpoints=http://ETCD_CLUSTER_IP1:2379", "--etcd-prefix=/coreos.com/network2", "--subnet-file=/run/flannel/subnet2.env" ]
+ securityContext:
+ privileged: true
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run
The "--subnet-file" option for the 2nd Flanneld tells it to output a subnet file for the 2nd Flannel subnet configuration
of the Flannel CNI, which is called by the Multus CNI.
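+
+When the etcd backend is used, each etcd prefix must hold a network
+configuration before the Flanneld processes can allocate subnets; afterwards
+both subnet files should exist. A sketch, assuming etcdctl v2 syntax and
+hypothetical subnet values:
+
+ .. code-block:: bash
+
+      etcdctl set /coreos.com/network/config '{ "Network": "10.2.0.0/16" }'
+      etcdctl set /coreos.com/network2/config '{ "Network": "10.3.0.0/16" }'
+      cat /run/flannel/subnet.env /run/flannel/subnet2.env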
.. (c) Gergely Csatari (Nokia)
==================================
-OpenRetriever architecture options
+Container4NFV architecture options
==================================
-1 Architecture options to support only containers on bare metal
-...............................................................
-To support containers on bare metal without the support of VM-s only a single
-VIM is needed.
-This architecture option is targeted by OpenRetriever in OPNFV Euphrates, and
-this architecture option is considered in the gap analyzis against
-:doc:`OpenStack <gap-analysis-openstack>` and
-:doc:`Kubernetes <gap-analysis-kubernetes-v1.5>`.
-Examples: Kubernetes, OpenStack with Zun_ and Kuryr_, which as a side effect
-also supports VM-s.
-
-2 Architecture options to support containers and VM-s
-.....................................................
-There are different architecture options to support container based and VM based
-VNF-s in OPNFV. This section provides a list of these options with a brief
-description and examples.
-In the descriptions providing the same API means, that the same set of API-s are
-used (like the OpenStack_ API-s or the Kubernetes_ API), integrted networks mean,
-that the network connections of the workloads can be connected without leaving
-the domain of the VIM and shared hardware resources mean that it is possible to
-start a workload VM and a workload container on the same physical host.
-
-2.1 Separate VIM-s
-==================
-There is a separate VIM for VM-s and a separate for containers, they use
-different hardware pools, they provide different API-s and their networks are
-not integrated.
-Examples: A separate OpenStack instance for the VM-s and a separate Kubernetes_
-instance for the containers.
-
-2.2 Single VIM
-==============
-One VIM supports both VM-s and containers using the same hardware pool, with
-the same API and with integrated networking solution.
-Examples: OpenStack with Zun_ and Kuryr_ or Kubernetes_ with Kubevirt_, Virtlet_ or
-similar.
-
-2.3 Combined VIM-s
-==================
-There are two VIM-s from API perspective, but usually the VIM-s share hardware
-pools on some level. This option have suboptions.
-
-2.3.1 Container VIM on VM VIM
------------------------------
-A container VIM is deployed on top of resources managed by a VM VIM, they share
-the same hardware pool, but they have separate domains in the pool, they provide
-separate API-s and there are possibilities to integrate their networks.
-Example: A Kubernetes_ is deployed into VM-s or bare metal hosts into an
-OpenStack deployment optionally with Magnum. Kuryr_ integrates the networks on
-some level.
-
-2.3.2 VM VIM on Container VIM
------------------------------
-A VM VIM is deployed on top of resources managed by a container VIM, they do not
-share the same hardware pool, provide different API and do not have integrated
-networks.
-Example: `Kolla Kubernetes <https://github.com/openstack/kolla-kubernetes>`_ or
-`OpenStack Helm <https://wiki.openstack.org/wiki/Openstack-helm>`_.
-
-.. _Kubernetes: http://kubernetes.io/
-.. _Kubevirt: https://github.com/kubevirt/
-.. _Kuryr: https://docs.openstack.org/developer/kuryr/
-.. _OpenStack: https://www.openstack.org/
-.. _Virtlet: https://github.com/Mirantis/virtlet
-.. _Zun: https://wiki.openstack.org/wiki/Zun
+Analysis of the architecture options has been moved to the
+`Container4NFV wiki <https://wiki.opnfv.org/display/OpenRetriever/Analyzis+of+architecture+options>`_.
\ No newline at end of file
.. (c) Xuan Jia (China Mobile)
================================================
-OpenRetriever Gap Analysis with Kubernetes v1.5
+Container4NFV Gap Analysis with Kubernetes v1.5
================================================
-This section provides users with OpenRetriever gap analysis regarding feature
+This section provides users with Container4NFV gap analysis regarding feature
requirement with Kubernetes Official Release. The following table lists the use
cases / feature requirements of container integrated functionality, and its gap
analysis with Kubernetes Official Release.
.. (c) Xuan Jia (China Mobile), Gergely Csatari (Nokia)
=========================================
-OpenRetriever Gap Analysis with OpenStack
+Container4NFV Gap Analysis with OpenStack
=========================================
-This section provides a gap analyzis between the targets of OpenRetriever for
+This section provides a gap analysis between the targets of Container4NFV for
release Euphrates (E) or later and the features provided by OpenStack in release
Ocata. As the OPNFV and OpenStack releases tend to change over time, this
analysis is planned to be continuously updated.
|Kuryr_ needs to support MACVLAN and IPVLAN |Kuryr_ |Using MACVLAN or IPVLAN could provide better network performance. |Open |
| | |It is planned for Ocata. | |
+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+----------------+
- |Kuryr_ Kubernetes_ integration is needed |Kuryr_ |It is done in the frame of OpenRetriever. |Targeted to |
+ |Kuryr_ Kubernetes_ integration is needed |Kuryr_ |It is done in the frame of Container4NFV. |Targeted to |
| | | |OPNFV release E |
| | | |/OpenStack Ocata|
+-----------------------------------------------------------+-------------------+--------------------------------------------------------------------+----------------+
.. (c) Xuan Jia (China Mobile)
===============================================
-OpenRetriever Gap Analysis with OPNFV Installer
+Container4NFV Gap Analysis with OPNFV Installer
===============================================
-This section provides users with OpenRetriever gap analysis regarding feature
+This section provides users with Container4NFV gap analysis regarding feature
requirement with OPNFV Installer in Danube Official Release. The following
table lists the use cases / feature requirements of container integrated
functionality, and its gap analysis with OPNFV Installer in Danube Official
.. (c) Xuan Jia (China Mobile)
==========================
-OpenRetriever Gap Analysis
+Container4NFV Gap Analysis
==========================
-:Project: OpenRetriever, https://wiki.opnfv.org/display/openretriever
+:Project: Container4NFV, https://wiki.opnfv.org/display/OpenRetriever/Container4NFV
-:Editors: Xuan Jia (China Mobile)
-:Authors: Xuan Jia (China Mobile)
+:Editors: Xuan Jia (China Mobile), Gergely Csatari (Nokia)
+:Authors: Container4NFV team
:Abstract: This document provides the users with top-down gap analysis regarding
-           OpenRetriever feature requirements with OPNFV Installer, OpenStack
+           Container4NFV feature requirements with OPNFV Installer, OpenStack
.. License.http://creativecommons.org/licenses/by/4.0
.. (c) Xuan Jia (China Mobile)
-==========================================================================
+===========================================================================
OpenRetriever Next Gen VIM & Edge Computing Scheduler Requirements Document
===========================================================================
- Support legacy and event-driven scheduling
- By legacy scheduling we mean scheduling without any trigger (see above)
-i.e. the current technique used by schedulers such as OpenStack Nova.
+ i.e. the current technique used by schedulers such as OpenStack Nova.
- By event-driven scheduling we mean scheduling with a trigger (see above).
-We do not mean that the unikernel or container that is going to run the VNF is
-already running . The instance is started and torn-down in response to traffic.
-The two step process is transparent to the user.
+ We do not mean that the unikernel or container that is going to run the VNF is
+  already running. The instance is started and torn down in response to traffic.
+  The two-step process is transparent to the user.
- More specialized higher level schedulers and orchestration systems may be
-run on top e.g. FaaS (similar to AWS Lambda) etc.
+  run on top, e.g. FaaS (similar to AWS Lambda).
+----------------------------------------------------------------------------------------+
| Serverless vs. FaaS vs. Event-Driven Terminology |
| | - Support shared storage (e.g. OpenStack |
| | Cinder, K8s volumes etc.) |
+----------------------------------------+-----------------------------------------------+
+
.. [1]
Intel EPA includes DPDK, SR-IOV, CPU and NUMA pinning, Huge Pages
etc.
.. License. http://creativecommons.org/licenses/by/4.0
.. (c) Xuan Jia (China Mobile)
-==========================
+===========================
Container4NFV Release Notes
-==========================
+===========================
-:Project: OpenRetriever, https://wiki.opnfv.org/display/openretriever
+:Project: Container4NFV, https://wiki.opnfv.org/display/OpenRetriever/Container4NFV
.. (c) Xuan Jia (China Mobile)
Scenario
-==========================
+========
k8-nosdn-nofeature-noha
--------------------------
k8-nosdn-lb-noha
--------------------------
+----------------
+
Using Joid to deploy Kubernetes on bare metal machines with load balancing enabled.
https://build.opnfv.org/ci/job/joid-k8-nosdn-lb-noha-baremetal-daily-euphrates/
YardStick test Cases
-==========================
+====================
opnfv_yardstick_tc080
---------------------------
+---------------------
+
Measure network latency between containers in k8s using ping.
https://git.opnfv.org/yardstick/tree/tests/opnfv/test_cases/opnfv_yardstick_tc080.yaml
- opnfv_yardstick_tc081
---------------------------
+opnfv_yardstick_tc081
+---------------------
+
Measure network latency between a container and a VM using ping.
https://git.opnfv.org/yardstick/tree/tests/opnfv/test_cases/opnfv_yardstick_tc081.yaml
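+
+A sketch of running one of these test cases with the Yardstick CLI, assuming
+a configured Yardstick environment and a checkout of the yardstick repository:
+
+ .. code-block:: bash
+
+      yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc081.yaml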