Pass Cisco CCIE Enterprise Certification Exams in First Attempt Easily
Latest Cisco CCIE Enterprise Certification Exam Dumps, Practice Test Questions
Accurate & Verified Answers As Experienced in the Actual Test!
- Premium File: 576 Questions & Answers (Last Update: Dec 10, 2024)
- Training Course: 196 Lectures
- Study Guide: 636 Pages
Cisco CCIE Enterprise Certification Practice Test Questions, Cisco CCIE Enterprise Exam Dumps
Want to prepare using Cisco CCIE Enterprise certification exam dumps? 100% actual Cisco CCIE Enterprise practice test questions and answers, study guide, and training course from Exam-Labs provide a complete solution to help you pass. Cisco CCIE Enterprise exam dumps questions and answers in VCE format make it convenient to experience the actual test before you take the real exam. Pass with Cisco CCIE Enterprise certification practice test questions and answers in Exam-Labs VCE files.
Architecture
6. QoS
In section one, item A, we also need to discuss quality of service. Here we are going to look at the design aspect, because we should first understand the core QoS concepts and only then how they are applied in the newer designs. The basic principles will not change: how we do classification, marking, policing, shaping, scheduling, and so on remains the same. What does change is the way we implement these on the devices in SD-WAN. I have already uploaded a complete course on SD-WAN, where you can go and see how QoS is actually used inside SD-WAN. Now, let's focus on the fundamental components of quality of service. Here you can see that we have three important building blocks, starting with classification and marking.
You classify the traffic and mark it. Then we have policing, which is a strict rule applied to the traffic: if a certain limit or threshold is breached, the device starts dropping packets. And then we have scheduling, including queuing and dropping, which means you give the traffic some buffer in case of congestion — you don't want to drop the traffic immediately, so you buffer it instead. On top you can see different classes of traffic: a voice class, a video class, telepresence, and data. All these types of traffic have different requirements. Some traffic is UDP-based, some is TCP-based; some traffic needs low latency, some can tolerate delay, et cetera. At the bottom you can see that voice traffic is UDP with priority, but latency should be less than 150 milliseconds; otherwise the voice is of no use. You can think of it as if each voice packet carries a 150-millisecond deadline: if it is not delivered within that time, it is effectively lost. You can also see that jitter should be less than 30 milliseconds, and loss should be less than 1%. Now, why are we discussing this? Because the first fundamental of QoS is classification and marking, so when you classify the traffic you need to understand the loss, latency, and jitter requirements for the different traffic types, including which protocols they use. You can see that voice, video, and telepresence use UDP, meaning they are very much real-time traffic.
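As a rough illustration of how classification and marking are typically expressed in Cisco IOS with the Modular QoS CLI (MQC) — the class names and the ACL here are hypothetical, not from the slide:

```
! Hypothetical sketch: classify voice by DSCP, mark video at the edge.
class-map match-any VOICE
 match dscp ef                  ! RTP voice already marked EF by the phones
class-map match-any VIDEO
 match access-group name VIDEO-ACL
!
policy-map MARKING
 class VIDEO
  set dscp af41                 ! mark interactive video
 class class-default
  set dscp default              ! everything else best effort
!
interface GigabitEthernet0/1
 service-policy input MARKING   ! classify/mark at the trust boundary
```

The input direction matters here: you classify and mark as traffic enters the trust boundary, so every downstream hop can act on the DSCP value.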
But you want trustworthiness and security as well, and that's why the business traffic, or business data, uses the TCP protocol. That's why at the bottom you can see mission-critical applications — maybe SAP, maybe some database, et cetera. Now, the bottom line shown here really matters when you're creating the QoS policy: if certain SLAs specified at the bottom are breached, the policy can make decisions, such as dynamically changing the route. This particular capability has been optimized in the SD-WAN solution. In the traditional approach, this is very much configuration-based: the device has to understand the protocol, you have to write the configuration accordingly, and then it takes action, correct? So the first step, classifying the traffic, can be done with several methods in QoS, and here you can see one example. You create a policy map; because voice is high-priority traffic, you have a class for voice, and you give priority to voice and priority to video. Then you have critical data, to which you give 15% of the bandwidth, and for that class you use random-detect, DSCP-based. Again, there are different drop methodologies: you can use RED (random early detection) or you can use tail drop. You can also see a data class with a bandwidth percentage of 19%, an example of scavenger traffic to which you give a very low bandwidth percentage, a network-critical class to which you give, say, 3%, and so on. The bottom line is that you need to write this configuration for all the devices inside your enterprise infrastructure, and then you have to configure them manually, one by one.
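The policy described above can be sketched in MQC roughly like this; the class names and percentages are illustrative and not the slide's literal configuration:

```
! Illustrative LLQ/CBWFQ sketch following the description above.
policy-map WAN-QUEUING
 class VOICE
  priority percent 10            ! strict-priority (LLQ) for voice
 class VIDEO
  priority percent 23            ! priority for video as well
 class CRITICAL-DATA
  bandwidth percent 15           ! guaranteed share for critical data
  random-detect dscp-based       ! WRED instead of tail drop
 class SCAVENGER
  bandwidth percent 1            ! scavenger gets a minimal share
 class class-default
  bandwidth percent 25
  random-detect                  ! congestion avoidance for best effort
```

Note the two drop behaviors side by side: `random-detect` drops probabilistically before the queue fills (RED/WRED), while a class without it simply tail-drops once its queue is full.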
Assume you have the same type of hardware in 1,000 locations: you would have to write this configuration, or at the very least copy and paste it, onto all of those thousand devices. Rather than doing this by hand, in SD-WAN, once the template is created, it can be pushed from the management plane to all devices, correct? And here you can see what this queuing looks like. The important thing is that you need to do the classification, and then you have to map the classes to queues.
As a result, classification, or class-to-queue mapping, will be step number one. On top you can see this is for a low-latency queue, or a class-based weighted fair queue, where you are adding the voice, video, and so on. Now, we have a nice example here that clearly shows traffic shaping. What does the traffic look like without shaping, and what does it look like once you apply shaping? Once you apply shaping, the shaper typically delays excess traffic, smoothing bursts and preventing unnecessary drops. If the traffic is policed instead, the policer drops the excess traffic directly. Again, we have another example. Suppose this is your configuration: obviously you have one bandwidth pipe, and generally we apply this policy to the WAN-facing interface. Assume you have contracted a speed of 300 Mbps, correct? So you have a one-gig interface, but as per your agreement you have 300 Mbps. Now, how are you going to use those classes, and how are you going to allocate that 300 Mbps among them, with priority, with scheduling, and with weighting? You can see that here in the example. And again, we make this hierarchical: a child policy is called inside the parent, and inside the parent we have the shaping and some buffer limits. So this is the way we can go and apply QoS. Again, we have a nice example here: you have your enterprise devices, you have your MPLS devices, and you know what type of policy you want to apply. These examples you are seeing are the standard ones that Cisco recommends. If you want to learn more, you can visit the Cisco QoS deployment guides; there is a wealth of information available on Cisco's site. But this is something like a standard format.
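The parent/child (hierarchical) shaping idea can be sketched like this, assuming a 1 Gbps interface contracted at 300 Mbps; the policy names are illustrative:

```
! Child policy: scheduling within the shaped rate.
policy-map CHILD-QUEUING
 class VOICE
  priority percent 10
 class CRITICAL-DATA
  bandwidth percent 40
!
! Parent policy: shape everything to the contracted 300 Mbps,
! then hand the traffic to the child policy for scheduling.
policy-map PARENT-SHAPER
 class class-default
  shape average 300000000        ! 300 Mbps, in bits per second
  service-policy CHILD-QUEUING   ! child nested under the parent
!
interface GigabitEthernet0/0
 description WAN-facing interface
 service-policy output PARENT-SHAPER
```

Without the parent shaper, the child queuing policy would never engage, because the physical 1 Gbps interface never congests at 300 Mbps of offered traffic; the shaper creates the artificial congestion point at the contracted rate.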
So all companies have this type of existing network where they want to apply QoS. Now, you cannot configure QoS on the ISP side yourself, so whatever QoS the ISP offers, you want to match the same thing on your side as well. Say you go to the WAN edge configuration and look at the outbound policies: LLQ or class-based weighted fair queuing, remarking of real-time traffic, and so on; then go to the bottom and check the service-provider side. So what about outbound, and what exactly is inbound? Basically, my outbound will be the inbound for the service provider, correct? We manage the enterprise network, and then, according to our QoS classes, we can consult the service provider and tell them, "Okay, you can map these DSCP policies from your side as well." As long as we have a common agreement and a common configuration, things will be better. There is a major change to this design strategy in SD-WAN now: to reduce cost — because you know that MPLS cost is too high — instead of using 100 Mbps of MPLS, customers are using one-gig internet. So you will see, even here and in the upcoming sections, I will give you some design-related points that you can compare later in your study: what the existing design means, what the existing hybrid design means (Internet plus MPLS), and then, when we move to the next generation, which is a different type of network, what design possibilities we have. And, once again, you can argue that, okay, MPLS has this advantage, the internet has that disadvantage, and so on.
But nowadays the big enterprises are using dual internet connections, primary and secondary, both one-gig or maybe two-gig internet interfaces, and then they are load-balancing the traffic. All right? Next, the GRE and IPsec QoS considerations. On top you can see a normal packet without tunneling. Obviously, we know that the IP header carries the Type of Service (ToS) field. Now, if you're using QoS with GRE, you have your inner header, your GRE header, and your outer header, and you can see that the ToS value is properly copied from the inside to the outside. Likewise with IPsec encapsulation, where we have an inner header and an outer header in IPsec tunnel mode: you can see that we are retaining the QoS marking. All right, so these are examples related to QoS. This is again very high level, but as per our curriculum and exam, we should know these variations: what we have in the existing network and what we are going to have in the upcoming SD-WAN QoS. All right? So let's stop here.
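On Cisco routers, the ToS/DSCP byte is copied to the outer header by default for GRE and IPsec tunnels; when an egress policy needs to classify on the original (pre-encapsulation) header, `qos pre-classify` is the usual knob. A hedged sketch, with hypothetical addresses and a hypothetical policy name:

```
! Illustrative tunnel sketch: DSCP is copied to the outer header by
! default; qos pre-classify lets the egress policy see the inner packet.
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
 qos pre-classify                ! classify on the original (inner) packet
!
interface GigabitEthernet0/0
 service-policy output WAN-EDGE-POLICY   ! hypothetical egress QoS policy
```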
7. LAN & WAN Design Options
This session is quite interesting because we are going to discuss the WAN and LAN architecture designs. We started this course studying design principles: what exists today, what is coming, what SD-WAN or DNA is, et cetera, as per the scope of the exam. Now, I have one dedicated slide; I'll come back to that particular blank slide in two or three minutes. Let me quickly walk you through two or three slides first. We know that there is drastic growth in demand on the network, which means so many things are trying to connect to it: we have IoT, we have so many handheld devices, we are managing different types of mobility features and mobility devices, we are managing everything. Maybe we integrate the cloud with our systems; we have many different applications, and we want to provide visibility into those applications. Managing all these devices makes the complexity very high. If you go directly to the cloud, your risk increases, so you should provide security from the inside, the outside, and so on. The bottom line is that demand will grow, because you will have to integrate different types of devices into your networking infrastructure; due to that complexity, manageability will be a challenge, and you may have holes in your security. So everything should be integrated and linked. And what will be the solution? Instead of using a monolithic architecture, the existing method of managing IT infrastructure, you can investigate SDN, software-defined networking. What SDN provides, at least at this point in time, is that you can focus on the application. The true capability of SDN is that we can now construct a policy-based infrastructure. Now, what does that mean? For that reason, I have this particular slide.
So I'm going to draw one very nice comparison between all the technologies we have related to software-defined networking. This will give you a nice idea of the different types of complexity we have in the existing network and where we are actually moving. For example, we have the LAN, we have the WAN, and we know that we have the data center as well. Now, for all these devices: if you're connecting locally, obviously you have the LAN; if you have multiple ISPs, you can go and connect to the WAN; and then somewhere you may have your data center, right? This is something like your existing network, where you have branches and you're using existing methodologies to manage your IT infrastructure. Now, what is the difference when I use a different approach to manage the LAN, WAN, and data center? For the LAN, we have something called DNA, or SD-Access. For the WAN, we have Cisco SD-WAN (Viptela), and for the data center, ACI. Now we know that all these infrastructures, all these components, have a control plane, a management plane, a data plane, and a policy plane; somewhere you will find an orchestration plane as well. So who is the controller for DNA? What is the management plane for DNA, the data plane, the policy plane, et cetera? This is what you have to study, and then you can compare the different technologies. For SD-Access, the control plane is again LISP; you will see that it is a protocol, and the border routers participate in providing the control-plane information. For SD-WAN, we have OMP, the Overlay Management Protocol, and the controller that manages it is vSmart. For ACI, we know that we have a spine-and-leaf architecture, and we know that we have the COOP protocol, and maybe MP-BGP, to manage the control-plane information. Correct? What about the management planes? For the management plane of DNA, we have DNA Center, which evolved from APIC-EM, the Enterprise Module. ACI is also using an APIC.
But APIC-EM and the ACI APIC are 100% different: one is for your DNA, which at this point in time we can understand, and the other is there for your data center manageability.
Then what about the data planes for SD-Access and SD-WAN? For SD-Access, you are going to use VXLAN. For SD-WAN, your data plane consists of IPsec tunnels, and at the edge you are using vEdge or cEdge devices — Viptela routers or Cisco IOS-XE routers. For ACI, you have the leaf-and-spine structure: your leaves will be Nexus devices, and the spines are also Nexus devices; you can check the reference or data sheets for that. In ACI, Cisco is using iVXLAN, an intelligent VXLAN, because the header has a field for contract information, and possibly policy information — some extra bits are there in that VXLAN format. Then there's the policy plane: who makes the policy? ISE, which we integrate with DNA; so ISE is the policy plane for SD-Access. For SD-WAN, vSmart is the policy plane, and for ACI, the APIC is the policy plane, because there you go and build the policy and then push it to the different types of devices. This is the reference, and looking at it you may think, "Oh my god, my existing network is simpler than this stuff." So now you have to learn and understand DNA. And if you are doing your data center study, you have to learn ACI and a lot of programming as well. Okay, so this is the overview we have for all the upcoming technology. In this course you will learn at least SD-WAN, DNA, and the WLCs. I haven't shown you where the WLCs fit in the picture; I will show that later in the DNA section. Again, this is the DNA framework: it is constantly learning the network, and then you can take decisions. We'll discuss this framework more and more in the upcoming sessions, but at least you can see that you can manage the entire network from a cloud management system.
Then you have full automation capability. You have the analytics engine inside DNA Center, which is a big UCS-based appliance. You have full visibility, because the Catalyst devices we use as the hardware for DNA are very smart and intelligent; they have a programmable chip inside, and because of that, the telemetry feature is there by default. So we have full visualization of all those events and everything happening across the layers; we can visualize the network. Then we have full programmability, for both the physical and the virtual infrastructure. Everything is integrated with the security system: we know we have the policy plane, which is nothing but ISE, and we can push the policy from it to the DNA devices. So the principles are open, programmable, and API-driven, and obviously everything we can do with the CLI or GUI, we can do with the API as well. Now, you can see in the diagram that you have DNA Center — that's your management plane. You have SD-Access as well: DNA is the term, and SD-Access is the technology actually used. Then you have the hardware, the Catalyst 9K, and then you have the security features — you have Encrypted Traffic Analytics, which is a big thing. Actually, DNA is a big thing, and SD-WAN is also big, because now whenever we study, we have to rethink or redo our study. And if you try to compare what we have done in the existing network with what we are doing here in DNA or SD-WAN, there's actually not much scope for comparison. We should treat this as a new thing we are learning and start from scratch. All right, so let's just stop here. The next section is interesting: that is our core existing technology, which I'm going to discuss in campus design, and other topics are there as well.
8. Multilayer Campus Design Part 01
In this section, we are going to discuss the technology that we know, the multilayer campus design, and obviously in the next section we'll discuss more about the upcoming technologies. We know that we have our core layer, distribution layer, and access layer. Again, I have slides for all of these layers, so we can go over them, but everything connects up to the core, and then the core decides how and where to send the summarized routes, and so on. Here you can see the core is connected with the devices at the distribution layer, and then you have the access layer; you also have the WAN and the Internet, meaning you may have a hybrid topology where you're connecting to the WAN, to the Internet, and to your data center as well. Again, this is one of the campus designs, and maybe at this point in time 80% to 90% of campus designs are like this, only because now we are slowly moving to DNA.
The adoption rate at this point in time for DNA, or SD-Access, is not that huge, but we are expecting that in the coming one to two years, or maybe two to three years, you'll see much more adoption of SD-Access. Okay, so this is what you have: the core, distribution, and access layers. And again, if you break down this diagram, you'll find that you should have structure, modularity, and hierarchy; that you have the core, distribution, and access layers; and that you know where you're going to connect your MPLS circuits, where you're going to connect your internet, where you're going to connect your IP telephony services, et cetera. Okay, so let's quickly check what all these layers are offering and what exactly they are responsible for. You can see the diagram on the left and right here: you have the boxes and what they look like — the core, then distribution, and then access. We have dedicated slides for all these layers, so it's better to check them one by one. So what's the use of the access layer? What does the access layer do? You have Layer 2 and Layer 3 features. You can use high availability, security, QoS, IP multicast, and so on. The access layer is the QoS trust boundary, and you have broadcast and multicast boundaries there as well, with IGMP snooping; if you want, you can suppress those requests. You can go and use various optimization methods for loop prevention. If you are using Layer 2 or Layer 3 designs, you can use a routing protocol like EIGRP or OSPF. For high availability and link redundancy, we can use PAgP, but very few people use Cisco's proprietary Port Aggregation Protocol; generally we use LACP, the standards-based Link Aggregation Control Protocol. You can also use UDLD, Flex Links, et cetera. Okay, so this is where you have your trust boundary, where you want to protect your network.
Obviously, you want to protect your network everywhere, but this is the place where the endpoints come and connect. So at this particular level you want higher security: you want segmentation, MAC security, BPDU filters, BPDU guard, et cetera. Okay, next we have the distribution layer. The distribution layer is the second level of backbone we have, where you want to apply more mature policies and high-availability features. That's why we have high availability, load balancing, and QoS there. We do the aggregation of all the access switches at the distribution level, meaning you have a smaller number of devices in the distribution layer compared to the access layer.
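The access-layer protections mentioned above can be sketched on a host-facing port roughly like this; the port and VLAN numbers are hypothetical:

```
! Hypothetical access-port hardening sketch for the trust boundary.
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 10
 switchport port-security                 ! limit learned MAC addresses
 switchport port-security maximum 2       ! e.g. one PC behind one IP phone
 spanning-tree portfast                   ! host port, skip STP listening/learning
 spanning-tree bpduguard enable           ! err-disable if a switch is plugged in
```

The combination is deliberate: PortFast speeds up endpoint connectivity, and BPDU guard ensures that if someone plugs a switch into that "host" port, the port is shut down instead of disturbing the spanning tree.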
Even in the access layer you can use 2K-series or 3K-series switches; in the distribution layer you can use 4K or maybe 6K. Now, Cisco is asking everyone to use the 9K model of switch, so you can choose the recommended or supported models. In terms of hardware capability, distribution switches can do route summarization, fast convergence, and first-hop redundancy with HSRP and GLBP, for which we will have a separate section in upcoming sessions. About the core layer: obviously, that is the core, the backbone. You want high availability and scalability for the core layer; it is the aggregation point for all the routes. And again, if you refer to this particular diagram, you can understand that everything converges toward the core layer, and then from the core layer you can distribute those routes to the other locations as well. Here you can see in this diagram what it looks like: you have Layer 2 access, you may have routed access, you may have virtual switching, et cetera. Then you have the distribution level, with everything coming to the core and then going to one of the branches, the data center, or other branches. So this is the network we are very familiar with. And again, with respect to design, you'll find that there are variations we need to consider. Obviously, the hardware for all of these devices is evolving; there is evolution, and there are upgrades to the software for all these devices as well. How much virtualization and hardware capacity these devices support depends on how much throughput I, as a customer or user, want from this infrastructure, and the hardware can be chosen accordingly. And it's recommended that even if you have this classic network and you want to upgrade the switches in it, you use Cisco 9K-series switches. That way, if you want to migrate to DNA in the future, you'll already have DNA-supporting switches. That's a strategy.
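HSRP gets its own section later; as a minimal sketch of the idea on one of the two distribution switches (addresses and group numbers illustrative):

```
! Minimal HSRP sketch on a distribution switch.
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1        ! virtual gateway IP shared by the pair
 standby 10 priority 110        ! higher priority wins the active role
 standby 10 preempt             ! reclaim the active role after recovery
```

The peer switch would carry its own real address (say 10.1.10.3) with a lower priority, while every endpoint in VLAN 10 points its default gateway at the virtual 10.1.10.1.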
That is what all the companies are doing now: they are ordering the new hardware rather than the legacy hardware, so that if they want to migrate in the future, they can use that strategy.
Again, from the design perspective, here you can see that you may have a Layer 3 distribution interconnection or a Layer 2 distribution interconnection, meaning you may want to stretch the Layer 2 boundary up to the distribution layer. The red-colored wire that you see here is Layer 3, and the blue-colored connection is the Layer 2 domain. With Layer 2 access, obviously at the bottom, in the access layer, you have the VLANs; you want inter-VLAN routing, you have STP by default, you have HSRP or GLBP, you want some EtherChannel, you have some BPDU protection, and so on, for all the types of loop prevention. Some companies use Layer 2 boundaries up to the distribution layer. So here you can see the difference; let me quickly show you. With Layer 3, this red-colored link between the distribution switches uses a routing protocol to forward the traffic. In fact, if you look at any of the upcoming networks or SDN solutions, you'll notice that we use routing even to carry Layer 2. What does that mean? For example, if you check Cisco ACI, the application-centric infrastructure, where you have leaf and spine, the switches are connected behind the scenes over a routed fabric that transports Layer 2, meaning MAC reachability is distributed much like routes. So you can see that routing scales better than what we traditionally do with switching, for high-availability and scalability reasons. And that is again one of your design aspects. Suppose you have two different fabrics, a stretched fabric: you don't want to extend Layer 2 — although you can — because routing is always preferred, with Layer 2 carried on top of a routed tunnel if needed. In short, routing is preferred rather than stretching Layer 2. The Layer 2 design is also a supported model.
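The routed-access idea above can be sketched on an access-switch uplink like this; interface numbers and addressing are hypothetical:

```
! Illustrative routed-access uplink: the access-to-distribution link is
! a Layer 3 point-to-point, so OSPF converges instead of STP blocking.
interface GigabitEthernet1/0/24
 no switchport                  ! make the uplink a routed port
 ip address 10.0.0.1 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 network 10.0.0.0 0.0.0.3 area 0
```

With both uplinks configured this way, equal-cost routing uses both links simultaneously, which is exactly the 50%-unused-link problem of blocking STP topologies going away.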
That is not to say we recommend it: the Layer 2 design is an older, though still supported, model. On top of it you have, for example, the capability of VSS, the Virtual Switching System, which brings parallel enhancements to the network. Suppose you have two switches at the distribution layer.
If we connect them, obviously you know that as per their priorities and BPDU exchanges, they will elect the root switch, the secondary switch, et cetera, and block the redundant links. In this case, we can say that you don't want those triangles in the network; instead, you want a straight line. So you collapse these two switches into one logical unit, and then the bottom switches also see just one switch. The diagram changes accordingly: physically there are two switches connected back to back, but logically they appear as one. That's the overall idea if you're using this type of technology. And that's the evolution you have: if you look at the data center evolution, you'll find that it started with the 6500, then moved to the Nexus 7K, the Nexus 9K, and now the Nexus 9K with ACI. So we started with a standard network, then moved to, say, VSS, then vPC, then leaf and spine, and finally leaf-and-spine ACI. Okay, so the evolution depends on which type of network or design you support. You will find this type of network, and you'll find that type as well; both are supported in the existing model we have at this point in time. So you can have these devices combined into the same logical unit as the distribution devices: you have, for example, one control channel and two data channels, and the links to these devices function as a Multichassis EtherChannel (MEC), since you're eliminating the Layer 2 loop and ending up with a straight line in the network. All right, this session is actually becoming longer; let me stop here, and the next session will continue from this point.
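From the access switch's point of view, the MEC described above is just an ordinary EtherChannel whose two members land on the two VSS chassis; a minimal, hypothetical sketch:

```
! On the access switch: the two uplinks (one to each VSS member)
! form a single Multichassis EtherChannel, so STP sees one link.
interface range GigabitEthernet1/0/23 - 24
 channel-group 10 mode active   ! LACP toward the VSS pair
!
interface Port-channel10
 switchport mode trunk
```

Because the VSS pair presents itself as one logical switch, neither uplink is blocked by spanning tree, and both carry traffic.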
9. Multilayer Campus Design Part 02
Let us continue from the previous section. We have seen that we have options: we can use Layer 3 on the distribution links, or we can use Layer 2 on the distribution links, and if we are using Layer 2, we have the option of a virtual switching system, where we want to reduce the number of loops in the network. Now you can see the convergence time for RSTP or PVST+ compared with the routing protocols OSPF and EIGRP. And that's the point we are considering: when we use routing protocols, their convergence is faster, and they don't need blocking interfaces — if you use Layer 2 interfaces, STP runs automatically and 50% of the links sit unutilized. With routing, those issues are not there. As a result, the routed design will always be preferred over the switched designs, which we will discuss in the following session. But if you see this diagram, let me try to explain a few things. Here you can see that you have a border router, labeled BR, and you have edge devices; maybe they can do both switching and routing. Now, these edge devices, and the access devices you see here, sit near the endpoints. These devices are not learning the entire routing table or the entire topology table — not everything — but they know what is connected to them, and then they send that information to the MS/MR, the map server and map resolver.
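The map server/map resolver idea described here comes from LISP, the SD-Access control plane. As a rough, hedged sketch of classic IOS LISP configuration — the prefixes, addresses, site name, and key are all hypothetical:

```
! On an edge device (xTR): register only the local prefix with the
! MS/MR instead of learning the whole topology.
router lisp
 database-mapping 10.1.0.0/16 192.0.2.1 priority 1 weight 100
 ipv4 itr map-resolver 198.51.100.1   ! where to send lookups
 ipv4 itr
 ipv4 etr map-server 198.51.100.1 key SECRET-KEY
 ipv4 etr
!
! On the map server/resolver (the "DNS" of the fabric):
router lisp
 site BRANCH-1
  authentication-key SECRET-KEY
  eid-prefix 10.1.0.0/16
 ipv4 map-server
 ipv4 map-resolver
```

The design payoff is the one stated above: edge devices keep only local state and query the central mapping system on demand, exactly like a host querying DNS.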
So we have this resolver, the service to which the edge devices point, and they send all their reachability information to it, like a global routing table. You can think of it like having one DNS, where that DNS holds the information of where I am and where I have to go; all that information is held by the fabric. Then you have the access points, and you have the WLC, through which all the access points and their control information are managed; we are going to discuss this as well. So we have the underlay architecture and infrastructure, we have the overlay architecture and infrastructure, and the underlay and overlay working in conjunction make up SD-Access. Here you can see that the IT challenges and new services include mobility services, scalability, flexibility, and a programmable infrastructure — those are the things you'll find inside DNA, or SD-Access. We have the controller, we have the control plane, and we have discussed the policy plane, which is ISE, the Identity Services engine. With the help of the control plane and the policy plane — that is, DNA Center plus the policy plane — we manage the DNA fabric. So you have the endpoints, and the current thinking is that this offloads the load we have in the existing enterprise campus network. These are the new innovations compared with the existing campus network. And to see the existing network, you can go back to the MPLS WAN Technology Design Guide, a nice document from Cisco.
Now let me go here and highlight certain things. Here you can see that you have your core, then your campus area network, where you have the distribution switches. You may have one building with one switch and another building with another switch, where you have connected the access points; you may have one floor with a much larger number of users. Then you have the endpoints: IP phones, wireless access points, roaming users, et cetera. So you have the access layer endpoints; maybe you have IoT devices in the existing network as well. Then you have the distribution, and then you have the core switch. Now here you can see your core, of which one portion is going towards the WAN, and maybe this is my data center or main location — one portion is shown going to a data center. There you have the Nexus switches to which the core switches are connected, then the Nexus 5K, then MDS, your storage network, your application servers, et cetera. So you have your data center. You can also see CE-to-PE connections: suppose these are your CE (customer edge) devices; you have the PE here, and somewhere in the background you have the MPLS backbone with the P devices — PE and P devices. Here also you can see that your core switch is connected to the core router, which then terminates to the WAN. You can see that you have the VPN endpoints for the remote users. Or suppose you are using DIA or DCA — direct internet access or direct cloud access — then your traffic is going via the firewall to the internet, where you have an IPsec tunnel for the remote users, or for those users who are connecting over IPsec.
So these are either teleworkers, mobile workers, or remote users. Maybe a few of the branches are using both the Internet and the WAN, meaning they are a hybrid type of branch, or maybe we have a dedicated MPLS type of branch. That is the reason everything is shown in this particular diagram; this is one of the validated campus network designs. In this particular curriculum you will not study the data center portion, but you should know everything that is shown in this diagram, and that is the overall goal with the CCNP Enterprise track: you should know DNA and SD-WAN, but you should know the core as well. Again, the same thing here: you can see the core and distribution, and then you are going towards dual MPLS or a dual carrier; maybe you are going towards a Layer 2 WAN; maybe you have some Internet, you are creating an IPsec tunnel, and then you are going over the Internet. Okay? So all the summary routes are collected at the WAN distribution layer, and then they go to the core. Now, this slide is also very important, and it is equally important for SD-WAN as well. Whenever you are doing a migration or a new SD-WAN deployment, you have to classify the sites. You may have sites marked, for example, Gold or Silver; it is just a name for a type of branch. "Gold" means you are using, for example, dual devices and dual ISPs: that is your high-availability network. "Silver" may mean you have only one device but two ISP links: either two MPLS circuits, or one MPLS and one Internet circuit, correct? And "Bronze" means you have one device and one connection. One device, one connection.
So as per the type of branch, you can categorize the branches, and then you can create the templates. Then you push the templates to create the configuration, and then you manage them. For that, you should understand what types of remote branches will be there. Now, identifying your remote branch is one part of this. The other thing you should know is what is behind it. Correct? So once you know the WAN part, you should know the LAN part, and together that becomes the enterprise network. Correct? For that reason you may have multiple options, but I am going to show you a few examples. Maybe you have a shared LAN; maybe you have a Layer 3 connection to the switches that are behind these edge devices. Or maybe you are using some sort of highly available and redundant design, such as a port channel to a multi-chassis switch stack; this is actually the recommended way. And if you are connecting to the Internet, obviously you have to put the firewall in between as well. Correct. So these are the design options. This is the last recording for this topic, and it is actually very important: these sessions, slides, and recordings are the foundational elements of how we will design, what is in the existing network, and what will come with DNA and SD-WAN.
10. 1.1.b High availability techniques such as redundancy, FHRP, and SSO
Within this item, we must comprehend and discuss high availability techniques such as FHRP and SSO. So let's discuss each one by one. FHRP is nothing but First Hop Redundancy Protocol: what is it, how does it work, and what flavors do we have? You can see that we have HSRP, which is the Hot Standby Router Protocol. Then we have VRRP, the Virtual Router Redundancy Protocol, and then we have GLBP, the Gateway Load Balancing Protocol. Now, VRRP is the industry standard, while HSRP and GLBP are Cisco proprietary. Mostly, we are using VRRP. VRRP and HSRP are very similar, but GLBP is different, and we will discuss how the Gateway Load Balancing Protocol differs. Basically, what is happening in all these first hop redundancy protocols? Suppose you have two routers, or maybe switches, and these devices have a common interface going towards the LAN, and suppose you have one PC here. Now this PC could point its gateway at either device. But the common theme here is that we are not giving the gateway as, for example, .1 or .2 of this particular subnet for this PC; rather, we are giving the gateway as a virtual IP, and that virtual IP is attached to both physical devices.
So what is the benefit? The benefit is that, for example, router A is working as the active member, and router B is working as the standby. That means packets are forwarded in router A's direction. And suppose the link going towards the WAN, or the device itself, goes down; the gateway role will automatically move away from that direction. Okay? So that is the mechanism, some sort of high-availability mechanism: if one of the devices is down, or if a link that you are tracking is down, the traffic can go in the other direction.
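As a rough sketch of this active/standby setup on Cisco IOS (the group number, VLAN, and IP addresses here are illustrative assumptions, not taken from the course):

```
! Router A - intended active gateway (higher priority wins)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1        ! virtual IP that hosts use as their default gateway
 standby 10 priority 200
 standby 10 preempt             ! reclaim the active role after recovery

! Router B - standby gateway (default priority is 100)
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 preempt
```

The PCs on VLAN 10 are configured with 10.1.10.1 as their gateway; which physical router answers for that address is decided by the HSRP election.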
Again, if this device comes back online and you have preemption configured, traffic will return to flowing in this direction. Now, a difference to note about HSRP, VRRP, and GLBP: with HSRP and VRRP, one of the members is always active and the other works as a standby, so we are only utilizing 50% of the resources. Can we do better? Yes, to an extent. For example, suppose I have four VLANs: 10, 20, 30, and 40. With priority, higher is better, so for VLANs 10 and 20 you can set priority 200 on router A. Suppose this is router A and this is router B; A is for active and B is for standby, and A and B are the names of the routers. So for VLANs 10 and 20, router A is the active member, and for VLANs 30 and 40, router B is the active member. That means both devices are actively forwarding traffic, and obviously you have a different virtual IP for each of these subnets. This way, in the case of HSRP and VRRP, you can achieve an active/active type of load balancing per VLAN. GLBP is a bit different in that it does the load balancing by default.
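A minimal sketch of this per-VLAN split on Cisco IOS (VLANs, priorities, and addressing are illustrative assumptions): router A is made active for VLAN 10 and standby for VLAN 30, and router B is configured as the mirror image.

```
! Router A: active for VLAN 10, standby for VLAN 30
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 200        ! higher than B -> A is active here
 standby 10 preempt
interface Vlan30
 ip address 10.1.30.2 255.255.255.0
 standby 30 ip 10.1.30.1
 standby 30 priority 100        ! lower than B -> A is standby here
 standby 30 preempt

! Router B: the mirror image - priority 100 for group 10, 200 for group 30
```

With both routers forwarding (each for half the VLANs), neither box sits completely idle.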
So let me show you this in the diagram. You can see that at the distribution layer you have the first hop redundancy protocol; you can go and use a first hop redundancy protocol to provide a resilient default gateway. These are technologies that provide sub-second convergence: if the active member fails, forwarding moves to the standby or backup. Use VRRP if you need multi-vendor interoperability; VRRP is nothing more than the industry standard. GLBP facilitates uplink load balancing; the name itself says Gateway Load Balancing Protocol. Preemption is there to avoid black-holing of traffic: preemption means a device becomes the primary, or active, member again if it goes down and then comes back up. Now you can see how they function. These devices have their own IP and their own MAC; then they have a virtual IP and a virtual MAC. The virtual IP is shared, so you can think of it as a ghost router in between that is glued to these two devices. So when the traffic is coming, if this one is active, it forwards the traffic in this direction as per the priority, and here you can see the example with the IPs, the MACs, the gateways, the virtual IP, and the ARP entry.
This is the traffic path formed by our active and backup. If the active goes down, traffic will obviously be rerouted in the other direction, as we have discussed; that mechanism is essentially the same in HSRP and VRRP. But if you go and check the VRRP methodology used in SD-WAN, you will find there is a customized VRRP inside the SD-WAN edge, where we are tracking OMP and tracking prefixes: track OMP, track prefix. Here, instead, if you want to track an interface, you have that option, and you can track the interfaces as well. What I am saying is: suppose you have a WAN link and you want to track that interface; you can track it. Say router A has priority 200 and router B has priority 150. If the tracked link goes down, the tracking will automatically decrement the priority; for example, a decrement of 100 changes A's priority from 200 to 100, which is below 150, so traffic will flow in the other direction. So that is the tracking option we have. And again, there is the preemption option: if the device goes down, the traffic goes the other way, and when it comes back up, it flows from the primary member again. The same thing we have in VRRP with the active members, so I am not going to discuss it separately here; VRRP is open to all vendors, it supports multi-vendor environments, and the rest of the features are nearly identical. Next, we have GLBP, so let's quickly discuss GLBP. GLBP does a sort of gateway load balancing. Here you can see the IP, the MAC, the virtual IP, and the virtual MACs: you have one virtual IP here, but two different virtual MACs. So what will happen is that some traffic will go in one direction and some traffic will go in the other; that is why you can see traffic flow in both directions. I have not seen much of the customer base using this methodology, due to some internal load-balancing issues related to MAC addresses, but it is there.
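The interface-tracking behavior described above can be sketched on Cisco IOS as follows (the tracked interface, group number, and decrement value are illustrative assumptions):

```
! Router A: track the WAN uplink and decrement HSRP priority if it fails
track 1 interface GigabitEthernet0/1 line-protocol

interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 200
 standby 10 preempt
 standby 10 track 1 decrement 100   ! 200 - 100 = 100, below the peer's 150
```

With the peer at priority 150 (and preemption enabled there too), a failure of Gi0/1 on router A drops A's effective priority to 100 and the peer takes over as active.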
We have the option of using GLBP as well, so we do not have to split the VLANs into groups as we discussed earlier; the routers do the load balancing at their own level, based on MAC addresses. Here you can see that some traffic is going via B and some traffic is going via A: one device is the Active Virtual Gateway (AVG), and the forwarding members are Active Virtual Forwarders (AVFs). Okay, so with this MAC-based load-balancing methodology, instead of creating VLAN-based load balancing where some traffic goes via one gateway and some via the other, the pair works as a whole unit, as an active load balancer. All right, so let's just stop here.
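A minimal GLBP sketch on Cisco IOS (group number, addressing, and the round-robin method are illustrative assumptions): the AVG answers ARP requests for the virtual IP and hands out the virtual MAC of a different AVF each time, which is how traffic ends up split across both routers.

```
! Router A (higher priority -> elected AVG)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 glbp 10 ip 10.1.10.1
 glbp 10 priority 200
 glbp 10 preempt
 glbp 10 load-balancing round-robin   ! rotate virtual MACs across ARP replies

! Router B (AVF; also a backup AVG)
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 glbp 10 ip 10.1.10.1
 glbp 10 preempt
```

Other load-balancing methods (weighted, host-dependent) exist; round-robin is shown here only as one example.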
11. 1.2 Analyze design principles of a WLAN deployment
Let us continue from the previous section and discuss the other high availability features that we have. So we have SSO, that is Stateful Switchover, and we will see how it helps provide high availability in the campus architecture design we are discussing. Here you can see that we have two different types of network design: a Layer 2 plus Layer 3 topology, and a multi-tier, fully Layer 3 topology. The blue lines are Layer 2 interfaces, and the red connectivity lines are Layer 3 interfaces. In one diagram you have L2 plus L3 links; in the other diagram, every single link is L3. Whether it is an L2-plus-L3 topology or a full L3 topology, we have the option of doing SSO, a stateful switchover. Now what does it mean, and why is this important? You have two use cases for this: either you have a big Catalyst chassis where you have active and standby supervisor engines, or you have a stack of switches.
Now suppose you want to upgrade the image, but you don't want to take a maintenance window: you don't want to tear down or stop the production traffic while you upgrade the image. That is one of the use cases we have with SSO; we can go and achieve this target with the help of SSO. Now, when we are using SSO, there is one problem. Suppose you are doing an IOS upgrade and you have the SSO feature enabled, with an active supervisor engine and a standby supervisor engine. The problem here is that the switchover alone will not be stateful for routing. To do this stateful switchover, you should have a common database between the active and standby supervisors. In the diagram you can see clearly that the active and standby supervisor engines have not synced their routing database. So, particularly with respect to the routing database, they are not in a state of stateful switchover until you go and enable one more feature, which is nothing but NSF, Non-Stop Forwarding. You enable it per routing protocol, under protocols such as OSPF, BGP, and so on.
And you have to enable the feature called Non-Stop Forwarding, NSF. For that, you should go and check the data sheet to see which particular hardware version or model supports NSF and which does not. Okay, apart from that, we have this SSO: we can go and use SSO with NSF, and then the devices will do a stateful switchover. That is one very important point. Another consideration is if you are using ISSU, In-Service Software Upgrade. In that case, you should also know which particular supervisor engine, which particular image, and which hardware platform you want to upgrade. If you want to do a major upgrade, for example across a major software version, Cisco TAC recommends that you do not do an in-service or inline upgrade; rather, take a complete maintenance window, take complete downtime, and then do the major upgrade. For the smaller, quicker upgrades, we can go and use SSO with ISSU, and it is quite handy. Okay, so that is the feature we have with SSO. All right, so let's stop here.
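The SSO-plus-NSF combination described above can be sketched on a dual-supervisor Cisco IOS chassis roughly as follows (exact command availability varies by platform and software train, so treat this as an assumption to verify against the data sheet):

```
! Enable stateful switchover between the supervisors
redundancy
 mode sso

! Enable NSF per routing protocol so forwarding continues during switchover
router ospf 1
 nsf                        ! Cisco NSF; some platforms also support "nsf ietf"

router bgp 65000
 bgp graceful-restart       ! BGP's equivalent of NSF
```

Without the per-protocol NSF/graceful-restart configuration, the standby supervisor takes over but the routing adjacencies are rebuilt from scratch, so the switchover is not stateful for routing.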
So when you are preparing, you need Cisco CCIE Enterprise certification exam dumps, practice test questions and answers, a study guide and a complete training course to study. Open them in Avanset VCE Player and study in a real exam environment. Cisco CCIE Enterprise exam practice test questions in VCE format are updated and checked by experts, so you can download Cisco CCIE Enterprise certification exam dumps in VCE format.
Cisco CCIE Enterprise Certification Exam Dumps, Cisco CCIE Enterprise Certification Practice Test Questions and Answers
Do you have questions about our Cisco CCIE Enterprise certification practice test questions and answers or any of our products? If you are not clear about our Cisco CCIE Enterprise certification exam dumps, you can read the FAQ below.
Purchase Cisco CCIE Enterprise Certification Training Products Individually
Tabitha_DJ
Oct 26, 2024, 11:09 AM
I took the lab exam two days ago, and it was quite extensive. The dumps really helped me a lot, especially with the lectures I bought. It was a surprise, but many questions in my test were from the practice questions. For example, there were a lot of items about the configuration of VRF/GRE/EEM and CPP. So, if you’re taking the lab exam, be prepared and do lots of practice. It helped me drastically during my preparation process.
Bethani
Oct 8, 2024, 11:09 AM
I just lost my job because of the pandemic, and now I need to get a certification to have a better chance. I’m thinking about going for the CCIE Enterprise certification exams, because I have some of the required skills. Therefore, I decided to visit this website just like my colleague recommended me. They said that Exam-Labs has genuine prep materials and always update their content. I went for the premium bundle, because it has various tools in one package. I’m hoping to get a high score, so this should help me.
Hilda_Timm
Sep 11, 2024, 11:08 AM
I passed the ENCOR exam on Wednesday, and it was really hard. I am really grateful for the work Exam-Labs does, because the dumps I used helped me a lot. I saw many similar questions in my exam, and that’s how I was able to answer most of the questions. There were also some new items that I didn’t know how to answer, but I got the passing score, and I’m glad about it.