Pass Microsoft Azure AI AI-102 Exam in First Attempt Easily
Latest Microsoft Azure AI AI-102 Practice Test Questions, Azure AI Exam Dumps
Accurate & Verified Answers As Experienced in the Actual Test!
Check our Last Week Results!
- Premium File: 263 Questions & Answers (Last Update: Nov 11, 2024)
- Training Course: 74 Lectures
- Study Guide: 741 Pages
Download Free Microsoft Azure AI AI-102 Exam Dumps, Azure AI Practice Test
File Name | Size | Downloads
---|---|---
microsoft | 3 MB | 1005
microsoft | 2.7 MB | 1114
microsoft | 1.5 MB | 1122
microsoft | 983.6 KB | 1195
microsoft | 814.3 KB | 1282
microsoft | 792 KB | 1347
Free VCE files for the Microsoft Azure AI AI-102 certification practice test are uploaded by real users who have taken the exam recently. Download the latest AI-102 Designing and Implementing a Microsoft Azure AI Solution certification exam practice test questions and answers, and sign up for free on Exam-Labs.
Microsoft Azure AI AI-102 Practice Test Questions, Microsoft Azure AI AI-102 Exam dumps
Plan and Manage an Azure Cognitive Services Solution
1. Overview of Cognitive Services
So let's get into it. The question that we're going to start off with is: What are Azure Cognitive Services? Well, first off, the word cognitive is a very interesting English word. If you go to the dictionary, cognitive basically means the conscious intellectual activity of the brain, relating to sight, sound, and decision-making in particular. And so Azure Cognitive Services are a set of pretrained machine learning models that have been built and trained by Microsoft that you can use in your own programming. As we saw in a video in the last section, it's literally one line of code to call out to a Cognitive Services API and get a result. And so you can enhance your own applications using the various services. You do not need to build and manage your own model. So unlike Machine Learning Studio or automated machine learning and those types of applications within Azure, you don't have to collect the data, upload the data, train the model, analyse the results, and choose the winning algorithm. Microsoft Azure is taking care of all those steps, and they're just providing you the API that you can use in your production applications. The Cognitive Services API covers four major topic areas. So, while these are individual services, they are the broad categories of services. We have vision services, which include all image and computer vision analysis; we'll talk about that in a second. There are speech services, of course: speech synthesis and speech recognition. There are language services, which cover natural language processing and related activities. And there are decision services. Those are the four major categories of cognitive services. Now let's talk first about the vision services. The basic premise is that vision services analyse the content of images and videos. And so you have an image that you upload to Azure. You can either provide it as a stream or point to a URL of the image, and you will get back the computer's analysis of the contents of that image. For instance, it can recognise every object in the image, whether it's a shoe, a table, a chair, a bicycle, a phone, or a person. All of those types of objects can be recognised, and it will even identify where in the image these objects are. Basically, that is the Computer Vision service. You can also, with the Custom Vision service, train the ML model with your own images. Like I said, you don't have to train this with machine learning training. But if you do have images and you do know the tags for the contents of those images, you can actually give Microsoft Azure an easier time recognising future images. And there are a lot of APIs centred on faces: recognising what a human face looks like, identifying people for who they are or matching them to other photos, recognising emotions such as happy or sad, estimating attributes such as age, and even attempts at gender recognition for these faces. So there's a lot of power in the facial APIs. The second major topic is speech services. So like I said, speech-to-text is called speech recognition. It's able to take natural human speech in several languages and translate it into text that can then be searched or used in other services. In reverse, it can take text and turn it into speech, and that's called speech synthesis. Azure has various types of speech synthesis. Some of them are more obviously computer-generated speech, and some of them really do attempt to mimic natural human speech patterns, so you would have a harder time recognising them as being computer generated.
Finally, there's a preview service for speaker recognition. And while Azure exams typically do not cover preview services, recognising the speaker would be useful in a conference or meeting if you wanted to know that Bob said this, Sally said that, Joe said this, and so on. The third major section is language services. Now this falls into natural language understanding and chatbots. So language understanding services are basically listening to either human speech or taking in text input and being able to extract the intention of the person asking a question or making a statement, such that the computer can then take an action based on what it thinks the person is trying to get the bot to do. QnA Maker will create a knowledge base so that you can ask a question and receive an answer in a chatbot. And we just saw in the last section an example of text analytics, where you can detect sentiment, extract key phrases, et cetera, from the text you pass in. And finally, there's pure translation. Machine learning translation has improved greatly over the years, and whereas in the past it would do just word-to-word literal translation, modern machine-learning-based translators are much better at detecting nuance and understanding the context of what you're saying. There are currently 90 supported languages to translate text between. And the final section is decision services. A lot of that is made up of content moderation. So whether something is a racy or adult image or a gory image can be detected by the moderator service, without human intervention. Anomaly detection is also pretty cool, where it will recognise a pattern in data and then detect when the measurements fall outside the pattern. So we can see there are a lot of Azure cognitive services being offered. This course does cover the ones that are on the exam, so not all of these are covered by the exam. But yeah, we're going to get through this. In this first section we're talking in general about Azure Cognitive Services: what they are, what they do, how to get started, etc.
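To make the "one line of code" point concrete, here's a minimal sketch in Python using the azure-ai-textanalytics package. The endpoint, key, and document text are placeholders of my own, not values from the lecture; you would substitute your own resource's details.

```python
# pip install azure-ai-textanalytics
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholders: substitute your own Cognitive Services endpoint and key.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# One call to a pretrained model -- no data collection or training needed.
result = client.detect_language(documents=["Bonjour tout le monde"])[0]
print(result.primary_language.name)  # e.g. "French"
```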
2. Cognitive Services for a Vision Solution
Now, the first set of cognitive services we'll talk about are under the category of vision services. The big one here is the Computer Vision service, which can analyse the content of images and video and return descriptions, tags, and locations, essentially providing an analysis of image content. Now, you can also train this machine learning algorithm using your own images. And so if you have specific needs that are related to your own applications, you can build a training set of data and come up with your own Custom Vision service. There's also a very interesting application for faces. Now, obviously, we live in a world where privacy and security are sensitive topics, and we want to be very careful when we're doing face detection and recognising people in images. But, for example, if you're cataloguing your own photographs and want to know where your aunt Mary appears, that's obviously a personal thing that you're not sharing and that you're not using to violate other people's rights. So you'll find that the face detection service is going to be able to recognise celebrities, or you can train it to recognise people that you know for your own purposes. Getting data out of forms, or invoices, is extremely important. You can just scan in all of the forms that are submitted, and the AI basically recognises the form, knows what the label is and what the data is, and can do optical character recognition to pull out the data and put that into a database. That can be very useful. And of course, being able to search your video library and find all the relevant videos is going to be extremely powerful. Now, we're going to get into the details of these APIs later in this course. So there's a section of this course for each of these major topics of the exam. For instance, this is the Computer Vision API in action. You can see here that we fed it a photo of three people sitting and chatting in front of a computer on a sofa. And the Computer Vision API is able to detect, for instance, footwear, and we can see this person's shoe in the bottom left corner. It detected a person. It detected a laptop, which is a type of computer. It detected seating, a second and third person, another type of seating, and a table. Now, they have superimposed the rectangles here, but the Computer Vision service is able to accurately detect a lot of the relevant objects in this photo. And if you've got applications for that, you can certainly use that in your own applications. Now, there are a lot more features than just that, right? So being able to use this to detect objects is one thing. Computer vision can also detect text in these images. So if you've got street signs, numbers on the houses, licence plates, and things like that, it can basically extract that text, and you can use that, obviously. We just saw tag detection, but it can also detect the text from an image, flag any kind of adult or racy content, recognise faces and even estimate the ages and genders of those faces, and, again, be trained to detect objects. And finally, something that's going to be useful in general is being able to block things out. It doesn't necessarily have to recognise what a thing is, just the boundaries of things.
So, for example, if you have a self-driving car, the car will be concerned with cars parked on the side of the road, pedestrians crossing the street, signs, and cross traffic, and it needs to understand how things move in general. And so that's based on spatial analysis. Now, I'm not sure you can develop your own self-driving car software based on this. But if you think about what a car needs to understand in order to propel itself down the street, this is the category of spatial analysis. So it doesn't necessarily need to recognise footwear, and it doesn't even really need to recognise whether something is a person versus a bike. It just needs to recognise that it's a solid object, that it's moving, and that it should avoid it. Right? So that's the concept of spatial analysis. And that is the overall concept of the vision services we can see here. As I said, later in this course we do go into a lot more detail relating to the exam, in terms of the code, the function calls, the API calls, et cetera. That will be in the section that's coming up. For now, we're still talking in general about how these services fit within your own solution.
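As a preview of the code we'll see later, here's a rough sketch of object detection with the Python Computer Vision SDK (azure-cognitiveservices-vision-computervision). The endpoint, key, and image URL are placeholders, not values from the lecture.

```python
# pip install azure-cognitiveservices-vision-computervision
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.computervision import ComputerVisionClient

# Placeholders: use your own resource's endpoint and key.
client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

# Detect objects (person, laptop, footwear, ...) and their bounding boxes,
# like the rectangles superimposed on the sofa photo described above.
detection = client.detect_objects("https://example.com/photo.jpg")
for obj in detection.objects:
    r = obj.rectangle
    print(f"{obj.object_property} at x={r.x}, y={r.y}, w={r.w}, h={r.h} "
          f"({obj.confidence:.0%} confidence)")
```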
3. Cognitive Services for a Language Analysis Solution
So the second cognitive service that we concern ourselves with is called language analysis. Now, language analysis covers a couple of different things that are in some ways related and in some ways different. Obviously, pure translation between languages is one of those key features. So, if you want to translate some text from English into one of the 90 supported languages, you can use this service. Now, you might think, "Oh, translation, that's pretty basic. Computers have been doing translation for 20 years or more." But machine learning and AI translation have developed a lot in the last five years and are continuing to get better and better. Computers are really understanding the context and the sentiment behind words. They're understanding which words do not get translated, et cetera. So you're going to basically find that translation is just getting better and better, and by using machine learning, they're able to understand some of the nuances of language. Maybe one of the more interesting aspects is being able to understand human text and extract all of the key information. So if we look at the Text Analytics service, we have on the left here an example of a review. Somebody went to a steakhouse and left a lengthy review, and the computer here is going to try to understand whether it's a good review or a bad review and extract the relevant pieces of information. So on the right, as you can see here, it took this text and determined with 100% confidence that it's English. It is extracting the key phrases from the text: dinner party, steakhouse, name, sirloin, owner, kitchen. Those are key phrases. It's overlooked a lot of the irrelevant bits. Okay, then it's determined whether it's a good review or a bad review. So it's an 86% positive review and a 14% negative review. It breaks this down sentence by sentence: 99% positive, 100% positive, all the way down till we get a neutral statement, and then we get a negative statement. "The only complaint I have is that the food didn't come fast enough," and that was determined to be 100% a negative statement. Now, maybe as a human, you might say, well, phrasing it as "the only complaint" means it's not as bad as it could be, but it's been determined this is a negative statement. And the last statement was positive. Then it extracted New York City and the Contoso Steakhouse. There are dates involved, events involved, people involved, and phone numbers. It's extracted all of the relevant entities within this thing, including anything that's personal or identifying information. And then it can create text that is basically linked. So if you want to turn regular text into linked text, which is going to go to Wikipedia links, then that's something you can do as well. So this is the power of the AI: to basically completely understand what this person has said, and you can then filter this under your good reviews and your bad reviews. And computers can handle this. So we saw translation and text analytics. There is also some chatbot-related stuff: obviously, understanding natural human language and being able to converse the way a human would converse. So you'll notice when you go to a search engine like Bing, you can enter any search phrase, and Bing will do its best to deliver you relevant results. And it does that by trying to understand your intention, right? So there is some order to the million ways that you can say, "What is the best soda?" And it can pretty much deliver the same results no matter how you word it,
not just based on keyword hits, but by understanding what it is that you want. So we're going to have a section of this course, again related to the exam requirements, about halfway through, where we start talking about natural language processing solutions. And we can see text analytics, speech translation, and LUIS, which is Language Understanding. You'll notice that the Immersive Reader stuff is not on the exam, and it's not going to be covered by this course.
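To ground the steakhouse walkthrough, here's a hedged sketch of sentiment analysis and key phrase extraction with the Python azure-ai-textanalytics SDK. The review text, endpoint, and key are stand-ins of my own, not the actual example from the slide.

```python
# pip install azure-ai-textanalytics
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# A stand-in review, loosely modelled on the lecture's example.
review = ("We had a wonderful dinner party at the steakhouse. The only "
          "complaint I have is that the food didn't come fast enough.")

# Overall and per-sentence sentiment, like the 86%/14% breakdown above.
doc = client.analyze_sentiment(documents=[review])[0]
print(doc.sentiment, doc.confidence_scores)
for sentence in doc.sentences:
    print(f"  {sentence.sentiment}: {sentence.text}")

# Key phrases ("dinner party", "steakhouse", ...) from the same text.
phrases = client.extract_key_phrases(documents=[review])[0]
print(phrases.key_phrases)
```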
4. Cognitive Services for a Decision Support Solution
So the third major category of cognitive services is called decision support. Now, interestingly, this is the only point in this exam where decision support is mentioned. So I will say this is a relatively minor part of the exam. If we go back over to the documentation, we can see decision support encompasses anomaly detection, content moderation, a preview feature called Metrics Advisor, and Personalizer. Now, Anomaly Detector is actually pretty interesting. Anomaly Detector is about detecting patterns in data and, basically, being able to predict the future based on what you see as a pattern. So, if you have a device that is being tracked and has this type of pattern, Anomaly Detector can predict that pattern into the future. And then when you get data points that fall outside of the prediction, which is this light blue area, those are anomalies. And this is something you can create an alert for; you can send an SMS text; you can trigger some type of programme to run; et cetera. So the ability to predict a pattern in this type of data, which is probably not easy to predict in a conventional manner, is kind of important, but it's again not covered by the exam. The other thing mentioned is content moderation. But in the case of this course and in the case of this exam, content moderation falls under the appropriate text or image service. So if we go back to the requirements of the exam, we will see that moderating content in images is part of computer vision. And if we scroll down to the Video Indexer part of this course, we can see that moderating content in video is also part of the exam. So those are parts of the exam; text moderation is not in it. And there is a review tool that comes with Azure: it can basically raise moderation alerts, and then you can have a human review them and approve or deny. That tool exists, but it is not part of this exam either. So, yeah, decision support was not given a lot of coverage on this exam. I just want to make you aware of it. And content moderation is dealt with in each of the individual sections.
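Although it isn't covered by the exam, here's a rough sketch of the pattern the lecture describes, calling the Anomaly Detector REST API (v1.0) directly. The endpoint, key, and the twelve-point series are all hypothetical.

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

# Hypothetical daily readings; the final value breaks the established pattern.
values = [10, 11, 10, 12, 11, 10, 11, 12, 10, 11, 10, 48]
series = [{"timestamp": f"2024-01-{day:02d}T00:00:00Z", "value": v}
          for day, v in enumerate(values, start=1)]

# Batch detection over the whole series (the API needs at least 12 points).
resp = requests.post(
    f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"series": series, "granularity": "daily"},
)
resp.raise_for_status()
print(resp.json()["isAnomaly"])  # one boolean per point; the spike should flag
```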
5. Cognitive Services for a Speech Solution
So the last cognitive service to be discussed in this section of the course is the Speech service. Now, if we look at the documentation for that, we can see that it's centred around speech synthesis, which is text to speech, and the reverse, which is transcribing speech into text. It's also fair to say that speech services are often integrated with other services. So you could start with a speech-to-text service and then move on to a text analysis service to understand the components, or a natural language processing service to extract the meaning and intent. We can also do speech translation, which is the real-time translation of spoken English into another language, similar to what a United Nations translator might do. So those are the speech services. Now, again, as I said earlier, the speech services have gotten a lot better over the years. For instance, we can demonstrate this: I can start speaking naturally, and the Azure speech-to-text application will understand me fairly well. It also properly capitalises the start of a sentence and uses punctuation at the end. So there is a lot of potential here. And it's not just English that's supported. You can see many different languages, including several styles of Arabic, several styles of the Chinese language, many different accents of English, etc. The same is true for the text-to-speech service. It's really interesting to see all of the variety that they have in terms of the number of different voices, the languages, and all of the different styles from formal to casual, et cetera. As you can see, text-to-speech has come a long way from the early days of computing. You can now choose either a clearly synthesised voice or a realistic-sounding voice. And you can even choose the style of speech, from formal to friendly. Many different languages are also supported. Can you tell the difference between a Canadian accent and an American one?
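For a taste of both directions, here's a minimal sketch with the Python Speech SDK (azure-cognitiveservices-speech). The key and region are placeholders, and it assumes a working default microphone and speaker.

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

# Placeholders: your Speech (or Cognitive Services) key and its region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="eastus")

# Speech recognition: transcribe one utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)  # capitalised and punctuated, as described above

# Speech synthesis: the reverse direction, played on the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Can you tell I am computer generated?").get()
```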
Create a Cognitive Services resource
1. Cognitive Services API Overview
So what we're going to do in this section is create a cognitive services resource. We can't really talk about security until we have a cognitive services resource to work with. So we'll do that first, switching over to the Azure Portal. If you do a search for "cognitive" in the Azure Marketplace, and here I am filtering on the Microsoft publisher, we can see a number of results. There are 22 results. Now, Cognitive Services, like I said, is an umbrella service that covers all of your Cognitive Services APIs. And so you're able to create a Cognitive Services resource, and from there, it will give you some keys and the URL, and you can use those to call all of the Cognitive Services APIs. And we'll do that in a second. But you'll notice that there are also separate services for the individual APIs. So, if you want to create a Computer Vision service, you can do so without first creating the umbrella, which is Cognitive Services. The URL for this would be different, as well as the keys, but it is limited to those Computer Vision APIs. You can even see the Face service, which is a subset of computer vision. And so there's even more nesting of these APIs. So effectively, you can get the Face service, which is a subset of Computer Vision, which is a subset of Cognitive Services. We can see that Custom Vision is there as well. LUIS (Language Understanding), QnA Maker, the decision services, Anomaly Detector, Speech services: they all have their own separate applications within Azure. And you can certainly go ahead and create yourself a Face API. But in this course, we're generally going to be working with the Cognitive Services API, and that gives us the full range of all of these services that we can use. So in the next video, we're going to click on Cognitive Services, and we're going to go through the process of creating a Cognitive Services resource. But do understand that you have a choice of these different resources, and they do have limitations that the Cognitive Services umbrella does not.
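To illustrate the umbrella idea, here's a sketch showing one multi-service key and endpoint calling two different APIs over REST. The resource name, key, and image URL are placeholders of my own.

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}  # placeholder key

# Computer Vision: tag an image -- one path under the shared endpoint.
vision = requests.post(
    f"{endpoint}/vision/v3.2/tag",
    headers=headers,
    json={"url": "https://example.com/photo.jpg"},
)
print(vision.json())

# Text Analytics: detect language -- same key and endpoint, different path.
text = requests.post(
    f"{endpoint}/text/analytics/v3.1/languages",
    headers=headers,
    json={"documents": [{"id": "1", "text": "Hello world"}]},
)
print(text.json())
```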
2. Create a Cognitive Services Account
Alright, so let's go ahead and create ourselves our first Cognitive Services account. You can enter the term "cognitive" in Search, and you will come to this type of result. Click on "Cognitive Services" and then "Create." Now, as with any Azure resource, you do need a location to store this in, in terms of a resource group. And so I'm going to call this AZ-COG-Services, something like that. Resource groups themselves do not have to be uniquely named across all of Azure; the name is simply for your account in that specific region. So this is the name of the group. The importance of the region is that not every cognitive service is available in every region. So if you do have specific needs for specific services, you may want to go and see which regions are supported, and we'll look at regional availability in an upcoming video. I'm going to leave this in the Eastern United States; it's one of the most popular regions. We do have to give the service a name. Now, this name is going to go into the fully qualified URL of the service, and so you're going to want to choose something that is unique across all of Azure. Now, pricing for Cognitive Services can get a little tricky. If we click on this link that says "View full pricing details," we can see how it works. So Cognitive Services is the umbrella API, and then there are all of these other APIs that are underneath it. If you just sign up for the Cognitive Services account, then you're basically signed up for that type of product, whether it's Computer Vision's S1 tier, the Content Moderator's S0 tier, or the Face service. Only the vision and language services are covered by this; QnA Maker, Speech, and Custom Vision have their own APIs, and they are not part of Cognitive Services. So we can look at the pricing on this as being fairly inexpensive. It's a dollar for 1,000 transactions, which means it's about one penny for ten transactions, or one-tenth of a penny per transaction. You can get volume discounts as you go up to a million, 10 million, or 100 million transactions per month. In terms of how pricing is grouped, getting the tags from an image and getting the colour of the image are one group of APIs. The recognition of text, content moderation, and celebrity recognition are the same pricing but a separate counter, I guess. And as we go down this list, we can see the pricing is one dollar per 1,000 transactions. Fairly standard. Going over to the language services will show different prices for different services. Now, this is where it gets confusing, because you can sign up for the Computer Vision service as a standalone service, and then you go down here and you realise there's a free tier of the Computer Vision service. So there's not a free tier of the Cognitive Services umbrella, but there is a free tier if you're using the Computer Vision service: up to 20 transactions per minute for free. Then you get the S1 level of the Computer Vision service, and the pricing is pretty much the same as Cognitive Services: one dollar to $1.50 per 1,000 transactions. Spatial analysis is free during the preview. So we'll see that the pricing is a little bit different for each. If we go into speech services, there's also a free tier, and you get five audio hours per month of standard transcription, et cetera. We can see the number of free characters or minutes per month for the different services. Then you're going to start paying for it, and it's a dollar per hour for speech to text, $4 per million characters for text to speech, et cetera.
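As a quick sanity check on the arithmetic above, here's a trivial Python sketch estimating monthly cost at a pay-as-you-go rate. The rate and volumes are illustrative assumptions, not quoted prices, and it ignores volume discounts and free-tier allowances.

```python
# Illustrative assumption: $1.00 per 1,000 transactions (S1-style pricing).
RATE_PER_1000 = 1.00

def monthly_cost(transactions: int, rate_per_1000: float = RATE_PER_1000) -> float:
    """Estimated cost in dollars, before any volume discounts."""
    return transactions / 1000 * rate_per_1000

print(monthly_cost(10))       # 0.01 -> one penny for ten transactions
print(monthly_cost(50_000))   # 50.0 -> $50 for 50,000 transactions per month
```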
So, take a look at the various pricing options, and you may decide to use the individual APIs. If you're interested in the free tier for a development project or a testing project, you might be able to use these services for free. I've used the Translator service because it does a couple of million characters of text from language to language for free in a month, and that's enough for my uses. So there is definitely a reason to use the individual APIs instead of the overall Cognitive Services account, which does not have a free tier. Still, if you're using an Azure free account or a company account or something, you can certainly sign up for the S0 tier of Cognitive Services. Now, notice that you do have to certify that there are no police-related activities. So, for instance, with facial recognition, it can be very sensitive to use this in association with police-related activities. You're also agreeing that your data can then be used by Bing, which I'm not sure how great that is; essentially, if you're going to train these Azure services using your images, well, Bing can also benefit from that as well. Tags are pretty standard; we won't create them for now. And if we just go to the review screen, then we're agreeing to get a standard non-free Azure Cognitive Services account, and we can then take it from there. So when we come back, this will all be created, and we can go into the account and look at how the keys work, networking, security, etc. Notice that we did not get asked to add this as part of any virtual network or to set any of the privacy and security settings. So we'll get into that when it comes up.
Microsoft Azure AI AI-102 Exam Dumps, Microsoft Azure AI AI-102 Practice Test Questions and Answers
Do you have questions about our AI-102 Designing and Implementing a Microsoft Azure AI Solution practice test questions and answers or any of our products? If you are not clear about our Microsoft Azure AI AI-102 exam practice test questions, you can read the FAQ below.
Purchase Microsoft Azure AI AI-102 Exam Training Products Individually