
Module 1: It Helps To Network

    Building Hybrid Clouds
    In this topic you will learn how to build a hybrid cloud using GCP. Cloud VPN securely connects an on-premises network to a GCP VPC network through an IPsec VPN tunnel. Traffic travelling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway. This protects data as it travels over the public internet, which is why Cloud VPN is useful for low-volume data connections. As a managed service, Cloud VPN provides an SLA of 99.9 per cent service availability and supports site-to-site VPN. Cloud VPN only supports site-to-site IPsec VPN connectivity; it doesn't support client-to-gateway scenarios. In other words, Cloud VPN doesn't support use cases where client computers need to dial into a VPN using VPN client software. Cloud VPN supports both static routes and dynamic routes to manage traffic between VM instances and existing infrastructure. Dynamic routes are configured with Cloud Router, which we will cover briefly. IKE versions 1 and 2 are also supported.
    Cloud Interconnect provides two options for extending an on-premises network to a Google Cloud Platform VPC network: Cloud Interconnect - Dedicated, also referred to as Dedicated Interconnect, and Cloud Interconnect - Partner, also referred to as Partner Interconnect. Choosing an interconnect type will depend on connection requirements such as the connection location and capacity. Dedicated Interconnect provides direct physical connectivity between an organisation's on-premises network and the Google Cloud network edge, allowing the organisation to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public internet. 
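    As a rough sketch, the Cloud VPN setup just described can be provisioned with the gcloud CLI. The commands below create a Classic VPN gateway with a single IPsec tunnel and a static route; every name, network, region, peer address, CIDR range and shared secret here is a placeholder assumption, not a value from the course:

```shell
# Reserve a static external IP for the VPN gateway (all names are placeholders)
gcloud compute addresses create vpn-gw-ip --region=us-central1

# Create the Classic VPN gateway attached to the VPC network
gcloud compute target-vpn-gateways create on-prem-gw \
    --network=my-vpc --region=us-central1

# Classic VPN needs forwarding rules for ESP and for IKE (UDP 500/4500)
gcloud compute forwarding-rules create fr-esp --region=us-central1 \
    --ip-protocol=ESP --address=vpn-gw-ip --target-vpn-gateway=on-prem-gw
gcloud compute forwarding-rules create fr-udp500 --region=us-central1 \
    --ip-protocol=UDP --ports=500 --address=vpn-gw-ip --target-vpn-gateway=on-prem-gw
gcloud compute forwarding-rules create fr-udp4500 --region=us-central1 \
    --ip-protocol=UDP --ports=4500 --address=vpn-gw-ip --target-vpn-gateway=on-prem-gw

# Create the IPsec tunnel to the on-premises VPN device (IKE v2 shown)
gcloud compute vpn-tunnels create tunnel-1 --region=us-central1 \
    --peer-address=203.0.113.1 --ike-version=2 --shared-secret=EXAMPLE_SECRET \
    --target-vpn-gateway=on-prem-gw --local-traffic-selector=0.0.0.0/0

# Static route sending on-premises-bound traffic through the tunnel
gcloud compute routes create route-to-onprem --network=my-vpc \
    --destination-range=10.0.0.0/8 \
    --next-hop-vpn-tunnel=tunnel-1 --next-hop-vpn-tunnel-region=us-central1
```

    For dynamic routing instead of the static route, you would create a Cloud Router and associate the tunnel with it so that routes are exchanged over BGP.
    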
    If 10-gigabit-per-second or 100-gigabit-per-second connections aren't required, Partner Interconnect provides a variety of capacity options. Also, if an organisation cannot physically meet Google's network requirements in a colocation facility, it can use Partner Interconnect to connect to a variety of service providers to reach its VPC networks. Partner Interconnect provides service-provider-enabled connectivity between an on-premises network and the Google Cloud network edge, allowing an organisation to extend its private network into its cloud network. The service provider can offer solutions that minimise router requirements on the organisation's premises to supporting only an Ethernet interface to the cloud.
    Let's compare the interconnect options just considered. All of these options provide internal IP address access between resources in an on-premises network and a VPC network; the main differences are the connection capacity and the requirements for using a service. The IPsec VPN tunnels that Cloud VPN offers have a capacity of 1.5 to 3 gigabits per second per tunnel and require a VPN device on the on-premises network. The 1.5-gigabit-per-second capacity applies to traffic that traverses the public internet, and the 3-gigabit-per-second capacity applies to traffic that traverses a direct peering link; configuring multiple tunnels allows you to scale this capacity. Dedicated Interconnect has a capacity of 10 gigabits per second per link and requires you to have a connection in a Google-supported colocation facility. You can have up to eight links to achieve multiples of 10 gigabits per second, but 10 gigabits per second is the minimum capacity. 
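    Once a Dedicated Interconnect link has been provisioned in the colocation facility, traffic reaches the VPC through a VLAN attachment associated with a Cloud Router, which can be sketched with gcloud as follows. The interconnect, router, network, region and ASN below are placeholder assumptions:

```shell
# Cloud Router to exchange BGP routes over the interconnect (names are placeholders)
gcloud compute routers create ic-router \
    --network=my-vpc --region=us-central1 --asn=65010

# VLAN attachment tying the physical interconnect link to the Cloud Router
gcloud compute interconnects attachments dedicated create my-attachment \
    --region=us-central1 --router=ic-router --interconnect=my-interconnect
```
    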
    Partner Interconnect has a capacity of 50 megabits per second to 10 gigabits per second per connection, and requirements depend on the service provider. The recommendation is to start with VPN tunnels and, depending on proximity to a colocation facility and capacity requirements, switch to Dedicated Interconnect or Partner Interconnect when there is a need for enterprise-grade connections to GCP.
    Google allows an organisation to establish a direct peering connection between its business network and Google's. With this connection, it can exchange internet traffic between its network and Google's at one of Google's broad-reaching edge network locations. Direct peering with Google is done by exchanging Border Gateway Protocol (BGP) routes between Google and the peering entity. After a direct peering connection is in place, it can be used to reach all of Google's services, including the full suite of GCP products. Unlike Dedicated Interconnect, direct peering doesn't have an SLA. In order to use direct peering, an organisation needs to satisfy Google's peering requirements. If an organisation requires access to Google public infrastructure and cannot satisfy those peering requirements, it can connect through a carrier peering service provider. Carrier peering enables access to Google applications, such as G Suite, by using a service provider to obtain enterprise-grade network services that connect the organisation's infrastructure to Google. When connecting to Google through a service provider, the organisation can get connections with higher availability and lower latency using one or more links. As with direct peering, Google doesn't offer an SLA with carrier peering, but the network service provider might.
    Let's compare the peering options that you just considered. Both of these options provide public IP address access to all of Google's services; the main differences are capacity and the requirements for using a service. 
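    For the Partner Interconnect path mentioned above, the organisation creates a partner VLAN attachment, hands the resulting pairing key to the chosen service provider, and the provider completes the connection on its side. A minimal gcloud sketch, with all names, region and availability domain as placeholder assumptions:

```shell
# Partner VLAN attachment; the region and edge availability domain must
# match what the service provider supports (names are placeholders)
gcloud compute interconnects attachments partner create partner-attachment \
    --region=us-central1 --router=ic-router \
    --edge-availability-domain=availability-domain-1

# Describe the attachment to retrieve the pairing key to give the provider
gcloud compute interconnects attachments describe partner-attachment \
    --region=us-central1
```
    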
    Direct peering has a capacity of 10 gigabits per second per link and requires you to have a connection in a GCP edge point of presence. Carrier peering capacity and requirements vary depending on the service provider that you work with.