PAPPAYA VIRTUAL NETWORKING SERVICE (PVNS)
Instances need connectivity. Besides getting an image from the Pappaya Image Repository (PIR), a network for attaching the instance is required. The Pappaya Virtual Networking Service (PVNS) is the Pappaya Cloud networking component: it creates a virtual network and attaches it to the VM. PVNS provides connectivity as a service for other Pappaya Cloud services such as Pappaya Cloud compute, and its pluggable architecture supports many popular networking vendors and technologies, making it highly extensible. Once the user is authenticated and has an image and a network available, Pappaya Hosting steps in.
PVNS connectivity as a service runs on the hypervisor. PVNS abstracts ports, networks, and subnets and makes them programmable with the help of an API. It also has a plugin architecture, making it possible to integrate open-source and proprietary vendor technologies that provide additional services on top of these abstractions.
Our PVNS has a modular architecture that can be deployed either in a centralized or a distributed way, depending on the user's needs. The service works by allowing users to create their own isolated virtual networks and then attach interfaces to them. Those networks can remain isolated or be connected to the rest of the world, depending on requirements. Connectivity between internal networks is achieved by creating virtual routers that route between those virtual networks. A virtual router can also be connected to a public network, and a floating IP address can be allocated to a user instance to provide external access.
PVNS is responsible for putting all the networking-related configuration in place; the user just needs to program PVNS via its API. Before PVNS, the networking component built into Pappaya Hosting compute was prominently used. By design, Pappaya Hosting networking has some drawbacks compared to PVNS: it offers only a limited set of topologies. One of the motivations for creating PVNS was to enable rich topologies including a multitude of networks with many features.
With PVNS, we can bring in different technology options than Pappaya Hosting networking offers. We can use VXLAN and GRE tunnels and are not limited to VLANs and flat networks only. These tunneling mechanisms are essential constructs for SDN in the data center, and they enable us to use overlay networking on our legacy physical infrastructure.
PVNS has an extensible plugin architecture that doesn't limit Pappaya Cloud contributors' design and deployment choices.
Networks, ports, and subnets are the three core resources that exist in all PVNS deployments, regardless of the extensions you enable. PVNS is responsible for managing ports: the VM has its virtual interface, you plug that interface into a port, and the VM then has connectivity.
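As a rough mental model of these three core resources and the act of plugging a VM's virtual interface into a port, consider the sketch below. This is illustrative only; the class and field names are assumptions, not the actual PVNS data model.

```python
from dataclasses import dataclass, field

@dataclass
class Subnet:
    cidr: str  # e.g. "10.0.0.0/24"

@dataclass
class Network:
    name: str
    subnets: list = field(default_factory=list)

@dataclass
class Port:
    network: Network
    ip_address: str
    device_id: str = ""  # empty until a VM interface is plugged in

    def plug(self, vm_id: str) -> None:
        """Attach a VM's virtual interface; the VM now has connectivity."""
        self.device_id = vm_id

# Build a network with one subnet, create a port on it, and plug a VM in.
net = Network("tenant-net")
net.subnets.append(Subnet("10.0.0.0/24"))
port = Port(network=net, ip_address="10.0.0.5")
port.plug("vm-1")
print(port.device_id)  # vm-1
```

The point of the sketch is the relationship: subnets belong to networks, ports belong to networks and carry an IP address, and a VM gains connectivity only once its interface is plugged into a port.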
We talk to the API service from the outside world. It accepts requests and routes them to the appropriate plugin for action. PVNS plugins and agents perform various actions, including plugging and unplugging ports, creating networks and subnets, and assigning IP addresses. These plugins and agents differ depending on the vendor and technology used in the particular Pappaya Cloud infrastructure.
PVNS uses a messaging queue to route information between the PVNS server and the various agents, and a database to store networking state for particular plugins. The plugin at the bottom can be a monolithic core plugin or the ML2 plugin, whose mechanism drivers can host multiple different technologies simultaneously.
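The server-to-agent routing over the message queue can be imagined roughly as below. This is a toy in-process sketch; the topic names and message format are assumptions, and the real transport is a deployment-specific message bus, not Python's `queue` module.

```python
import queue

# One queue per agent topic, standing in for the real message bus.
topics = {"l2-agent": queue.Queue(), "dhcp-agent": queue.Queue()}

def server_publish(topic: str, message: dict) -> None:
    """PVNS server side: route a request onto the target agent's topic."""
    topics[topic].put(message)

def agent_consume(topic: str) -> dict:
    """Agent side: pick up the next task from its own topic."""
    return topics[topic].get()

# The server asks the L2 agent to plug a port; the agent picks it up.
server_publish("l2-agent", {"action": "plug_port", "port_id": "p-1"})
task = agent_consume("l2-agent")
print(task["action"])  # plug_port
```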
Components of Pappaya Virtual Networking Service (PVNS)
PVNS is backed by a relational database that keeps state across networks and components. Along with that, a message queue is used for exchanging messages with the other PVNS agents. In a typical deployment, the user will find layer-2 (L2) agents.
But in the real world, we have multiple copies of L2, L3, and DHCP agents. L2 agents reside on the hypervisor on each compute node. There are also special agents for advanced services such as load balancing, firewalling, or acting as a VPN gateway.
Segmentation technologies supported
A local network is a network that can only be realized on a single compute node. It's isolated from other networks and nodes. Instances can communicate only if they reside on the same compute node and are on the same local network; otherwise they cannot. Because of this limitation, this type of network is only used in proof-of-concept or test environments, as it doesn't make much sense in real life.
A flat network is a network that doesn't provide any segmentation: there is no 802.1Q VLAN tagging or any other method of segmentation. It's a single broadcast domain, so any instance attached to this network will see the broadcast traffic of everything else attached. As a result, it doesn't scale well and doesn't provide sound isolation between tenants on the network.
A VLAN network uses 802.1Q VLAN tagging in the Ethernet frame header for segmentation. Instances on the same VLAN share the same broadcast domain, but unlike flat networks, you have multiple broadcast domains rather than a single one. It's more scalable and secure.
When you create a new VLAN network in PVNS, it will be assigned a VLAN ID from the range set in your PVNS configuration.
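Conceptually, the assignment works like drawing from a configured pool. The sketch below is a minimal illustration under that assumption; the allocator class and the example range 100-199 are hypothetical, not PVNS's actual implementation.

```python
class VlanAllocator:
    """Hands out VLAN IDs from a configured range, e.g. 100-199."""

    def __init__(self, start: int, end: int):
        self.available = list(range(start, end + 1))
        self.allocated = {}  # network name -> VLAN ID

    def allocate(self, network_name: str) -> int:
        if not self.available:
            raise RuntimeError("VLAN range exhausted")
        vlan_id = self.available.pop(0)
        self.allocated[network_name] = vlan_id
        return vlan_id

alloc = VlanAllocator(100, 199)
print(alloc.allocate("net-a"))  # 100
print(alloc.allocate("net-b"))  # 101
```

Note that the valid 802.1Q VLAN ID space is only 1-4094, which is one reason larger clouds move to tunnel-based segmentation such as VXLAN.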
If you want instances in different VLANs to talk to each other, you need to do inter-VLAN routing with a layer-3 router.
There is a concept of underlay and overlay technologies, software-defined networks created to make East-West communication inside your data center or cloud manageable. By East-West communication we mean server-to-server communication. VLAN-based networks are underlay networks, tied to the physical network infrastructure.
We could consider an overlay network as a network built on top of another network, which is called the underlay network. For the overlay, peer-to-peer mesh tunnels are created between the hosts in the network. VXLAN defines a MAC-in-UDP encapsulation scheme, where the original layer-2 frame gets a VXLAN header and is then placed in a UDP packet.
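The MAC-in-UDP layering can be illustrated by building the byte layout: the 8-byte VXLAN header (flags plus a 24-bit VXLAN Network Identifier, or VNI) is prepended to the original Ethernet frame, and the result becomes the payload of a UDP packet to port 4789. A simplified sketch, assuming a dummy 14-byte inner frame:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to the original L2 frame.

    The result is carried as the payload of a UDP packet; the outer
    IP/UDP headers are added by the sending host's network stack.
    """
    flags = 0x08 << 24                            # I flag set: VNI is valid
    header = struct.pack("!II", flags, vni << 8)  # 24-bit VNI + 8 reserved bits
    return header + inner_frame

frame = b"\xaa" * 14  # stand-in for an Ethernet header + payload
packet = vxlan_encapsulate(frame, vni=5001)
print(len(packet))  # 22: 8-byte VXLAN header + 14-byte inner frame
```

Because the VNI is 24 bits wide, VXLAN allows about 16 million segments, compared to 4094 usable VLAN IDs.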
Pappaya Virtual Networking Service (PVNS) Feature and Functionality
Security is embedded in PVNS, isolating different tenants' instances from each other. It would be a security risk, for example, for tenant A to have access by default to an instance in tenant B; Pappaya Cloud was designed to prevent this out of the box.
With PVNS we have a mechanism called security groups. A security group is a container for security group rules. It allows administrators and tenants to specify the type of traffic and the direction (ingress or egress) that may pass through a virtual interface port. Overlapping IP addresses are supported, so rules don't conflict in a multi-tenant scenario, and filtering IPv6 traffic is fully supported as well.
Security group rules are applied per virtual interface, so different security groups can be used for different virtual interfaces, even on the same VM. This provides flexibility for your deployment. It's worth mentioning that even when the VMs are on the same hypervisor, it's still possible to filter traffic between those VMs. When a virtual interface is created in Pappaya Cloud, it is associated with a security group. By default, all egress traffic is allowed and all ingress traffic is dropped. Importantly, rules are stateful: if traffic is allowed in one direction, the response to that traffic in the other direction is automatically allowed.
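The default behavior just described (egress allowed, ingress dropped unless a rule permits it, and stateful handling of replies) can be sketched as follows. This is a simplified conceptual model, not the actual enforcement code, which in practice lives in the hypervisor's packet-filtering layer.

```python
class SecurityGroup:
    """Default-deny on ingress, default-allow on egress, stateful replies."""

    def __init__(self):
        self.ingress_rules = []   # (protocol, port) pairs explicitly allowed in
        self.connections = set()  # established outbound flows

    def allow_ingress(self, protocol: str, port: int) -> None:
        self.ingress_rules.append((protocol, port))

    def egress(self, protocol: str, port: int) -> bool:
        self.connections.add((protocol, port))  # remember for stateful replies
        return True  # all egress traffic is allowed by default

    def ingress(self, protocol: str, port: int) -> bool:
        if (protocol, port) in self.connections:
            return True  # reply to traffic we initiated: stateful allow
        return (protocol, port) in self.ingress_rules

sg = SecurityGroup()
print(sg.ingress("tcp", 22))   # False: dropped by default
sg.allow_ingress("tcp", 22)
print(sg.ingress("tcp", 22))   # True: an explicit rule now permits it
```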
NAT is the process of modifying source or destination addresses in IP packet headers while the packet is in transit. Generally, the sender and receiver applications are not aware that the IP address is being manipulated.
Routers often implement NAT, so we will refer to the host performing NAT as the router. In Pappaya Cloud deployments, it is typically a Linux server with iptables software implementing the NAT functionality.
- The PVNS component responsible for this is the layer-3 (L3) agent. There are multiple variations of NAT; three are commonly found in Pappaya Cloud.
- Source network address translation: here we modify the sender's IP address in IP packets. Typically, we have hosts using private IP addresses as defined in IETF RFC 1918. These addresses are not publicly routable, meaning that a host on the public internet cannot send an IP packet to any of them. If those hosts need to communicate with the outside world, they need a publicly routable IP address. With the L3 agent, we can change the private source IP addresses of those hosts to publicly routable ones.
- PVNS maintains a one-to-one mapping between a private IP address and a public IP address. Because the address is translated, it is possible for a host in the outside public domain to reach the internal VM by using the publicly routable IP address. We call this publicly routable IP address the floating IP.
- Destination network address translation: the NAT router modifies the destination IP address in IP packet headers. Pappaya Cloud uses destination NAT to route packets from instances to the Pappaya Cloud metadata service; applications running inside the instances access the metadata service by making HTTP GET requests.
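The source-NAT and floating-IP translations above can be sketched as a one-to-one address mapping applied in each direction. The addresses and mapping table below are purely illustrative; the real L3 agent implements this with iptables rules, not Python.

```python
# Hypothetical one-to-one floating-IP mapping held by the L3 agent.
floating_ips = {"10.0.0.5": "203.0.113.10"}  # private -> public

def snat(packet: dict) -> dict:
    """Outbound: rewrite a private source address to its public one."""
    pkt = dict(packet)
    pkt["src"] = floating_ips.get(pkt["src"], pkt["src"])
    return pkt

def dnat(packet: dict) -> dict:
    """Inbound: rewrite a public destination back to the private address."""
    reverse = {pub: priv for priv, pub in floating_ips.items()}
    pkt = dict(packet)
    pkt["dst"] = reverse.get(pkt["dst"], pkt["dst"])
    return pkt

# An instance reaches out; its private source becomes the floating IP.
out = snat({"src": "10.0.0.5", "dst": "198.51.100.7"})
print(out["src"])  # 203.0.113.10
# A reply to the floating IP is translated back to the instance's address.
back = dnat({"src": "198.51.100.7", "dst": "203.0.113.10"})
print(back["dst"])  # 10.0.0.5
```

Because the mapping is one-to-one, the same table serves both directions, which is exactly what lets an outside host initiate traffic to the floating IP.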