diff --git a/onramp/blueprints.rst b/onramp/blueprints.rst
index 146455d..8b97a65 100644
--- a/onramp/blueprints.rst
+++ b/onramp/blueprints.rst
@@ -120,22 +120,27 @@ of ``vars/main.yml``:
 .. code-block::

    upf:
-      ip_prefix: "192.168.252.0/24"
-      iface: "access"
+      access_subnet: "192.168.252.1/24"   # access subnet & gateway
+      core_subnet: "192.168.250.1/24"     # core subnet & gateway
       helm:
          chart_ref: aether/bess-upf
          values_file: "deps/5gc/roles/upf/templates/upf-5g-values.yaml"
+      default_upf:
+         ip:
+            access: "192.168.252.3"
+            core: "192.168.250.3"
+         ue_ip_pool: "172.250.0.0/16"
       additional_upfs:
-         "1":
-            ip:
-               access: "192.168.252.6/24"
-               core: "192.168.250.6/24"
-            ue_ip_pool: "172.248.0.0/16"
-         # "2":
-         #    ip:
-         #       access: "192.168.252.7/24"
-         #       core: "192.168.250.7/24"
-         #    ue_ip_pool: "172.247.0.0/16"
+         "1":
+            ip:
+               access: "192.168.252.6"
+               core: "192.168.250.6"
+            ue_ip_pool: "172.248.0.0/16"
+         # "2":
+         #    ip:
+         #       access: "192.168.252.7"
+         #       core: "192.168.250.7"
+         #    ue_ip_pool: "172.247.0.0/16"

 As shown above, one additional UPF is enabled (beyond ``upf-0`` that
 already came up as part of SD-Core), with the spec for yet another UPF
@@ -454,8 +459,8 @@ section:
 .. code-block::

    upf:
-      ip_prefix: "192.168.252.0/24"
-      iface: "access"
+      access_subnet: "192.168.252.1/24"   # access subnet & gateway
+      core_subnet: "192.168.250.1/24"     # core subnet & gateway
       mode: dpdk   # Options: af_packet or dpdk
       # If mode set to 'dpdk':
       # - make sure at least two VF devices are created out of 'data_iface'
diff --git a/onramp/directory.rst b/onramp/directory.rst
index 206fe5e..b94cf95 100644
--- a/onramp/directory.rst
+++ b/onramp/directory.rst
@@ -13,7 +13,7 @@ up to speed on the rest of the system.
 .. admonition:: Troubleshooting Hint

    Users are encouraged to join the ``#aether-onramp`` channel of the
-   `ONF Workspace `__ on Slack, where
+   `Aether Workspace `__ on Slack, where
    questions about using OnRamp to bring up Aether are asked and
    answered.
    The ``Troubleshooting`` bookmark for that channel includes
    summaries of known issues.
@@ -94,16 +94,9 @@ strategy of the original mechanism.
 OnRamp Repos
 ~~~~~~~~~~~~~~~~~~~

-The process to deploy the artifacts listed above manages the
-*Continuous Deployment (CD)* half of the CI/CD pipeline. OnRamp uses a
-different mechanism than the one the ONF ops team originally used to
-manage its multi-site deployment of Aether. The latter approach has a
-large startup cost, which has proven difficult to replicate. (It also
-locks you into deployment toolchain that may or may not be appropriate
-for your situation.)
-
-In its place, OnRamp adopts minimal Ansible tooling. This makes it
-easier to take ownership of the configuration parameters that define
+OnRamp adopts minimal Ansible tooling for the *Continuous Deployment
+(CD)* half of the CI/CD pipeline. The approach is designed to make it
+easy to take ownership of the configuration parameters that define
 your specific deployment scenario. The rest of this guide walks you
 through a step-by-step process of deploying and operating Aether on
 your own hardware. For now, we simply point you at the collection of
diff --git a/onramp/overview.rst b/onramp/overview.rst
index 1f64e5a..60b84ef 100644
--- a/onramp/overview.rst
+++ b/onramp/overview.rst
@@ -30,8 +30,8 @@ all the degrees-of-freedom Aether supports.
 Aether OnRamp is still a work in progress, but anyone interested
 in participating in that effort is encouraged to join the
-discussion on Slack in the `ONF Community Workspace
-`__. A roadmap for the work that
+discussion on Slack in the `Aether Community Workspace
+`__. A roadmap for the work that
 needs to be done can be found in the `Aether OnRamp Wiki
 `__.
diff --git a/onramp/ref.rst b/onramp/ref.rst
index c893fc7..895184d 100644
--- a/onramp/ref.rst
+++ b/onramp/ref.rst
@@ -10,7 +10,7 @@ deployments of Aether.
 Blueprint Specification
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The specification for every Aether blueprint is rooted in an Ansible
+The specification for every Aether blueprint is anchored in an Ansible
 variable file (e.g., ``vars/main-blueprint.yml``). Most blueprints
 also include a Jenkins pipeline (e.g., ``blueprint.groovy``) that
 illustrates how the blueprint is deployed and validated.
@@ -176,13 +176,13 @@ Ansible inventory file (``hosts.ini``). The following identifies the
    * - `[worker_nodes]`
      - Worker servers in Kubernetes Cluster.
    * - `[gnbsim_nodes]`
-     - Servers hosting gNBsim container(s).
+     - Servers hosting gNBsim containers.
    * - `[ueransim_nodes]`
      - Servers hosting UERANSIM process.
    * - `[oai_nodes]`
-     - Servers hosting OAI gNB (and optionally UE) container(s).
+     - Servers hosting OAI gNB (and optionally UE) containers.
    * - `[srsran_nodes]`
-     - Servers hosting srsRAN gNB (and optionally UE) container(s).
+     - Servers hosting srsRAN gNB (and optionally UE) containers.

 The `[worker_nodes]` group can be empty, but must be present. The
 other groups are blueprint-specific, and with the exception of
@@ -292,10 +292,9 @@ Network Subnets
 ~~~~~~~~~~~~~~~~~~~~~~

 OnRamp configures a set of subnets in support of a given Aether
-deployment. The following subnets are defined in ``vars/main.yml``;
-they do not typically need to be modified to deploy a blueprint.
-Not shown below, subnet ``10.76.28.0/24`` is used as an exemplar
-for the local network throughout the OnRamp documentation.
+deployment. The following subnets are defined in ``vars/main.yml``.
+With the exception of ``core.ran_subnet``, these variables typically
+do not need to be modified for an initial deployment of a blueprint.

 .. list-table::
    :widths: 20 25 50
@@ -308,18 +307,22 @@ for the local network throughout the OnRamp documentation.
      - ``aether.ran_subnet``
      - Assigned to container-based gNBs connecting to the Core.
        Other gNB implementations connect to the Core over the subnet
-       assigned to the server's physical interface (as denoted by
+       assigned to the server's physical interface (as defined by
        ``core.data_iface``).
    * - `192.168.250.0/24`
-     - ``core.default_upf.ip.core``
+     - ``core.upf.core_subnet``
      - Assigned to `core` bridge that connects UPF(s) to the Internet.
    * - `192.168.252.0/24`
-     - ``core.default_upf.ip.access``
+     - ``core.upf.access_subnet``
      - Assigned to `access` bridge that connects UPF(s) to the RAN.
    * - `172.250.0.0/16`
      - ``core.default_upf.ue_ip_pool``
-     - Assigned (by the Core) to UEs connecting to Aether.
-
-Note that when multiple UPFs are deployed—in addition to
-``core.default_upf``\ —each is assigned its own ``ip.core``,
-``ip.access``, and ``ue_ip_pool`` subnets.
+     - Assigned (by the Core) to UEs connecting to Aether. When
+       multiple UPFs are deployed—in addition to
+       ``core.default_upf``\ —each is assigned its own ``ue_ip_pool``
+       subnet.
+   * - `10.76.28.0/24`
+     - N/A
+     - Used throughout OnRamp documentation as an exemplar for the
+       local subnet on which Aether servers and radios are deployed.
+       Corresponds to the network interface defined by variable
+       ``core.data_iface``.
diff --git a/onramp/start.rst b/onramp/start.rst
index e138c74..4b027ea 100644
--- a/onramp/start.rst
+++ b/onramp/start.rst
@@ -452,7 +452,7 @@ block defines a set of parameters for ``pdusessest`` (also known as
       execInParallel: false
       startImsi: 208930100007487
       ueCount: 5
-      defaultAs: "192.168.250.1"
+      defaultAs: "{{ ping_target }}"
       perUserTimeout: 100
       plmnId:
         mcc: 208
@@ -466,7 +466,7 @@ You can edit ``ueCount`` to change the number of UEs included in the
 emulation (currently limited to 100) and you can set
 ``execInParallel`` to ``true`` to emulate those UEs connecting to the
 Core in parallel (rather than serially).
 You can also change variable
-``defaultAs: "192.168.250.1"`` to specify the target of ICMP Echo
+``defaultAs: "{{ ping_target }}"`` to specify the target of ICMP Echo
 Request packets sent by the emulated UEs. Selecting the IP address of
 a real-world server (e.g., ``8.8.8.8``) is a good test of end-to-end
 connectivity. Finally, you can change the amount of information gNBsim
diff --git a/testing/integration_tests.rst b/testing/integration_tests.rst
index 5935ce8..f761b59 100644
--- a/testing/integration_tests.rst
+++ b/testing/integration_tests.rst
@@ -16,10 +16,11 @@ for one of the :doc:`Aether Blueprints `.
 The pipelines are executed daily, with each pipeline parameterized to
 run in multiple jobs. The ``${AgentLabel}`` parameter selects the
-Ubuntu release being tested (currently ``20.04`` and ``22.04``),
-with all jobs running in AWS VMs (currently resourced as ``M7iFlex2xlarge``).
-Pipelines that exercise two-server tests (e.g., ``ueransim.groovy``, ``upf.groovy``,
-and ``gnbsim.groovy`` run in VMs that have the
-`AWS CLI `__ installed; the CLI is is used to create
-the second VM. All VMs have Ansible installed, as documented in the
-:doc:`OnRamp Guide `.
+Ubuntu release being tested (currently ``20.04`` and ``22.04``), with
+all jobs running in AWS VMs (currently resourced as
+``M7iFlex2xlarge``). Pipelines that exercise two-server tests (e.g.,
+``ueransim.groovy``, ``upf.groovy``, ``srsran.groovy``, and
+``gnbsim.groovy``) run in VMs that have the `AWS CLI
+`__ installed; the CLI is used to
+create the second VM. All VMs have Ansible installed, as documented in
+the :doc:`OnRamp Guide `.
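A note on the addressing changes above: the new ``access_subnet`` and ``core_subnet`` gateways, the per-UPF ``ip.access``/``ip.core`` addresses, and the per-UPF ``ue_ip_pool`` subnets must stay mutually consistent. The relationships implied by the diff can be sanity-checked with a short Python sketch (values copied from the hunks above; the script is illustrative only and not part of the OnRamp repo):

```python
import ipaddress

# Subnets from the updated vars/main.yml (gateway addresses carry the prefix).
access_subnet = ipaddress.ip_interface("192.168.252.1/24").network
core_subnet = ipaddress.ip_interface("192.168.250.1/24").network

# default_upf plus additional UPF "1", as configured in the diff.
upfs = {
    "default": {"access": "192.168.252.3", "core": "192.168.250.3",
                "ue_ip_pool": "172.250.0.0/16"},
    "1": {"access": "192.168.252.6", "core": "192.168.250.6",
          "ue_ip_pool": "172.248.0.0/16"},
}

for name, upf in upfs.items():
    # Each UPF's interfaces must fall inside the shared bridge subnets.
    assert ipaddress.ip_address(upf["access"]) in access_subnet, name
    assert ipaddress.ip_address(upf["core"]) in core_subnet, name

# Each UPF gets its own UE pool, so the pools must not overlap.
pools = [ipaddress.ip_network(u["ue_ip_pool"]) for u in upfs.values()]
assert not pools[0].overlaps(pools[1])
print("UPF addressing consistent")
```

Running the script simply prints ``UPF addressing consistent``; editing an address so it falls outside its bridge subnet (or reusing a UE pool) trips the corresponding assertion.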