Question 1 of 60
1 point(s)
You want to make use of Azure Stream Analytics. The Stream Analytics instance will be receiving data from IoT-enabled devices. You need to send the data to Cosmos DB.
Which of the following would you need to set in Azure Stream Analytics?
Question 2 of 60
1 point(s)
You want to make use of Azure Stream Analytics. The Stream Analytics instance will be receiving data from IoT-enabled devices. You need to send the data to Cosmos DB.
Which of the following needs to be created in Cosmos DB beforehand?
Question 3 of 60
1 point(s)
A company wants to design a data processing system. Data would be ingested via Kafka streams into Azure Data Lake Storage. The data needs to be processed by an Apache Spark-based analytics service.
The company decides to use Azure SQL Data Warehouse as the analytics service.
Would this fulfill the requirement?
Question 4 of 60
1 point(s)
A company wants to design a data processing system. Data would be ingested via Kafka streams into Azure Data Lake Storage. The data needs to be processed by an Apache Spark-based analytics service.
The company decides to use Azure Databricks as the analytics service.
Would this fulfill the requirement?
Question 5 of 60
1 point(s)
A company wants to design a data processing system. Data would be ingested via Kafka streams into Azure Data Lake Storage. The data needs to be processed by an Apache Spark-based analytics service.
The company decides to use Azure Stream Analytics as the analytics service.
Would this fulfill the requirement?
Question 6 of 60
1 point(s)
A company wants to design a data processing system. Data would be ingested via Kafka streams into Azure Data Lake Storage. The data needs to be processed by an Apache Spark-based analytics service.
The company decides to use Azure Analysis Services as the analytics service.
Would this fulfill the requirement?
Question 7 of 60
1 point(s)
A company wants to deploy a Cosmos DB account. The data within the account will be used by data engineers situated across the world. You need to ensure that data engineers worldwide can read the data with the least amount of latency. You also need to ensure that costs are minimized. Which of the following would you implement for this requirement?
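For reference, low-latency reads against a multi-region Cosmos DB account are typically paired with a preferred-locations list on the client, so each engineer reads from the nearest replica. Below is a minimal sketch assuming the azure-cosmos (v4) Python SDK; the account URL, key, database, container, and region names are all placeholders.

```python
# Minimal sketch: route reads to the nearest Cosmos DB replica.
# Assumes a multi-region account; all names and keys below are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",
    credential="<account-key>",
    preferred_locations=["West Europe", "Southeast Asia"],  # nearest region first
)
container = client.get_database_client("appdb").get_container_client("items")
# A point read is served from the closest available region in the list above
item = container.read_item(item="item-1", partition_key="item-1")
```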
Question 8 of 60
1 point(s)
You have an Azure Data Lake Storage Gen 2 account. You have to grant permissions to a specific application for a limited time period. Which of the following can you use for this requirement?
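As context for time-limited access, a shared access signature carries its own expiry. A minimal sketch, assuming the azure-storage-blob Python SDK (ADLS Gen2 accounts also expose the Blob endpoint); the account, container, and key values are placeholders.

```python
# Minimal sketch: a SAS token that expires after two hours.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas = generate_container_sas(
    account_name="datalakeacct",          # placeholder account
    container_name="raw",                 # placeholder container
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=2),  # limited time period
)
print(f"https://datalakeacct.blob.core.windows.net/raw?{sas}")
```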
Question 9 of 60
1 point(s)
A company is planning to set up an Azure SQL database. The database contains tables and columns that hold sensitive data. The company wants to have a solution in place that meets the following requirements.
• Ensure the database is encrypted at rest.
• Ensure that when sensitive data is accessed from the columns, it is encrypted in transit.
Which of the following would you use for this requirement?
“Ensure that when sensitive data is accessed from the columns, it is encrypted in transit”
Question 10 of 60
1 point(s)
You have an Azure SQL database. You need to provide an Azure AD group read access to the database. Which of the following would you use to provide access?
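For reference, group access is usually granted with a contained database user created from the directory. A minimal sketch using pyodbc over an Azure AD admin connection; the server, database, and group names are placeholders.

```python
# Minimal sketch: give an Azure AD group read access via a contained user.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver.database.windows.net;Database=appdb;"
    "Authentication=ActiveDirectoryInteractive;",
    autocommit=True,
)
cur = conn.cursor()
cur.execute("CREATE USER [DataReaders] FROM EXTERNAL PROVIDER;")   # AAD group
cur.execute("ALTER ROLE db_datareader ADD MEMBER [DataReaders];")  # read access
```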
Question 11 of 60
1 point(s)
You need to design a solution that would use Azure Functions. The function would be used to process data that is uploaded to Azure Blob storage. You have to ensure that the following requirements are met.
• The solution must have support for 1 million blobs
• The solution must scale automatically
• Costs must be minimized
Which of the following would you recommend for this requirement?
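One pattern commonly recommended for very large blob counts is an Event Grid-triggered function rather than a polling blob trigger. A minimal sketch, assuming the Azure Functions v2 Python programming model; the function name and event handling are placeholders.

```python
# Minimal sketch: react to BlobCreated events pushed by Event Grid,
# avoiding container polling at the scale of ~1 million blobs.
import azure.functions as func

app = func.FunctionApp()

@app.event_grid_trigger(arg_name="event")
def process_blob(event: func.EventGridEvent) -> None:
    blob_url = event.get_json().get("url")  # URL of the newly created blob
    # ... download and process the blob here ...
```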
Question 12 of 60
1 point(s)
A company wants to design a solution that would support the ingestion and analysis of log files in real time. Which of the following would you implement for this requirement? Choose 2 answers from the options given below.
Question 13 of 60
1 point(s)
A company is planning to design a solution in Azure. The solution would be based on the Kappa architecture as shown below.
Which of the following could be used for Layer 2?
Question 14 of 60
1 point(s)
A company wants to make use of an Azure Databricks interactive cluster. The cluster would be configured for auto-termination. The company wants to ensure that the cluster configuration is retained indefinitely after the cluster is terminated. The company also wants to ensure that costs are minimized when implementing the solution. Which of the following would you implement for this requirement?
Question 15 of 60
1 point(s)
A company wants to use an Azure Data Lake Storage account to store CSV files. These files will be organized into department-wise folders. The company wants to ensure that access is configured in such a way that users will only see files in their respective department folders.
They decide to disable hierarchical namespace and use access control lists.
Would this fulfill the requirement?
Question 16 of 60
1 point(s)
A company wants to use an Azure Data Lake Storage account to store CSV files. These files will be organized into department-wise folders. The company wants to ensure that access is configured in such a way that users will only see files in their respective department folders.
They decide to enable hierarchical namespace and use RBAC.
Would this fulfill the requirement?
Question 17 of 60
1 point(s)
A company wants to use an Azure Data Lake Storage account to store CSV files. These files will be organized into department-wise folders. The company wants to ensure that access is configured in such a way that users will only see files in their respective department folders.
They decide to disable the hierarchical namespace and use RBAC.
Would this fulfill the requirement?
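For context on these three variants: POSIX-style access control lists require the hierarchical namespace to be enabled. A minimal sketch of folder-level ACLs, assuming the azure-storage-file-datalake SDK; the account, file system, folder, and group object ID are placeholders.

```python
# Minimal sketch: restrict a department folder with a POSIX ACL
# (only meaningful when hierarchical namespace is enabled).
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://datalakeacct.dfs.core.windows.net",
    credential="<account-key>",
)
finance_dir = service.get_file_system_client("data").get_directory_client("finance")
# Grant read/execute on the finance folder to the finance AAD group only
finance_dir.set_access_control(acl="group:<finance-group-object-id>:r-x")
```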
Question 18 of 60
1 point(s)
A company wants to implement a big data store. Below are the key requirements for the data store.
• It should have support for a hierarchical file system
• It should be optimized for parallel analytic workloads.
• It should provide unlimited account sizes.
Which of the following would you implement for this requirement?
Question 19 of 60
1 point(s)
A company wants to implement an Azure Cosmos DB database that would support data storage for vertices and edges. Which of the following would you use as the underlying Cosmos DB API?
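Vertices and edges describe a graph model, which Cosmos DB serves through its Gremlin API. A minimal sketch using the gremlinpython driver, following the usual Cosmos DB quickstart shape; the account, database, graph name, and key are placeholders.

```python
# Minimal sketch: store a vertex in a Cosmos DB Gremlin (graph) container.
from gremlin_python.driver import client, serializer

g = client.Client(
    "wss://<account>.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/graphdb/colls/relationships",   # /dbs/<db>/colls/<graph>
    password="<account-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)
g.submit("g.addV('person').property('id','p1').property('pk','p1')").all().result()
print(g.submit("g.V().count()").all().result())
```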
Question 20 of 60
1 point(s)
A company wants to implement a data store that would meet the following requirements.
• Be able to receive thousands of files per minute
• The files would be in different file formats – JSON, text, and CSV
• The files would eventually be processed, transformed, and loaded into an Azure SQL Data Warehouse
Which of the following would you use as the underlying data store?
Question 21 of 60
1 point(s)
A company wants to migrate data from an on-premises MongoDB instance to Azure Cosmos DB – MongoDB API. During the testing phase, they discovered that the migration process is taking too long. Which of the following can they implement to reduce the migration time? Choose 2 answers from the options given below.
Question 22 of 60
1 point(s)
A company wants to deploy a set of databases using the Azure SQL Database service. They want to organize the databases into separate groups based on database usage. They also want the ability to define the maximum limit on the resources that would be available to each group. Which of the following could be recommended to fulfill this requirement?
Question 23 of 60
1 point(s)
A company wants to create an Azure storage account. Below are the requirements for the objects in the storage account.
• Storage costs should be minimized
• The storage account will be used to hold objects which are infrequently accessed
• The data in the storage account will be stored for at least 30 days
• Data availability must be guaranteed at an SLA of 99%
Which of the following could be used as the underlying storage tier?
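For reference, the access tier can be set per blob at upload time. A minimal sketch assuming the azure-storage-blob SDK; the account, container, and file names are placeholders, and the tier value is the standard tier name passed as a string.

```python
# Minimal sketch: upload straight into an infrequent-access tier.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://archiveacct.blob.core.windows.net",
    credential="<account-key>",
)
blob = service.get_blob_client(container="reports", blob="2023/jan.csv")
with open("jan.csv", "rb") as data:
    blob.upload_blob(data, standard_blob_tier="Cool")  # infrequently accessed data
```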
Question 24 of 60
1 point(s)
A company wants to start using the Azure Databricks service. They want to ensure that the Databricks clusters remain available even during a regional Azure datacenter outage. Which of the following could be used as the redundancy type to fulfill this requirement?
Question 25 of 60
1 point(s)
A company wants to use the Azure SQL database service. Business apps will be accessing the database. The application data must be available in the event of a region-wide outage. Below are the other key requirements.
• Data must be available in the secondary region if the primary region goes down
• The storage and compute layers for the SQL database must be integrated and replicated together
Which of the following would you use as the Service tier for the database?
Question 26 of 60
1 point(s)
A company wants to use the Azure SQL database service. Business apps will be accessing the database. The application data must be available in the event of a region-wide outage. Below are the other key requirements.
• Data must be available in the secondary region if the primary region goes down
• The storage and compute layers for the SQL database must be integrated and replicated together
Which of the following would you use as the redundancy type?
Question 27 of 60
1 point(s)
A company is planning to use the Azure SQL Data Warehouse service. Data would be uploaded to the data warehouse every week. Every time the data is uploaded, checks would be made to ensure that the data is not corrupted. If the data is corrupted, the uploaded data has to be removed. The upload process and data corruption check process must not impact the processes running against the warehouse.
The company decides to configure database-level auditing and set a retention period as part of the implementation process.
Would this meet the requirement?
Question 28 of 60
1 point(s)
A company is planning to use the Azure SQL Data Warehouse service. Data would be uploaded to the data warehouse every week. Every time the data is uploaded, checks would be made to ensure that the data is not corrupted. If the data is corrupted, the uploaded data has to be removed. The upload process and data corruption check process must not impact the processes running against the warehouse.
The company decides to create user-defined restore points before the data is uploaded, and then delete the restore points after the data corruption checks are complete.
Would this meet the requirement?
Question 29 of 60
1 point(s)
A company is planning to use the Azure SQL Data Warehouse service. Data would be uploaded to the data warehouse every week. Every time the data is uploaded, checks would be made to ensure that the data is not corrupted. If the data is corrupted, the uploaded data has to be removed. The upload process and data corruption check process must not impact the processes running against the warehouse.
The company decides to configure transactions and then perform a rollback if data corruption is detected.
Would this meet the requirement?
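To make the transactional option concrete, here is a rough pyodbc sketch in which the upload and the corruption check share one explicit transaction; the table names and the corruption_checks_pass helper are hypothetical.

```python
# Rough sketch: load, validate, then commit or roll back in one transaction.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=mydw.database.windows.net;Database=dw;UID=loader;PWD=<password>;",
    autocommit=False,  # statements join one transaction until commit/rollback
)
cur = conn.cursor()
try:
    cur.execute("INSERT INTO dbo.Sales SELECT * FROM ext.WeeklySales;")
    if not corruption_checks_pass(cur):   # hypothetical validation helper
        raise ValueError("corrupted upload")
    conn.commit()
except Exception:
    conn.rollback()  # removes the uploaded rows
    raise
```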
Question 30 of 60
1 point(s)
A company wants to engineer a solution. The solution would have the following requirements.
• Ingest data from an on-premises SQL Server
• Create pipelines that can integrate data and also run notebooks
• Be able to develop notebooks that can be used to transform data
• Be able to load the data into a massively parallel processing data store for analysis
Which of the following would you use as the service to integrate the on-premises data into the cloud?
Question 31 of 60
1 point(s)
A company wants to engineer a solution. The solution would have the following requirements.
• Ingest data from an on-premises SQL Server
• Create pipelines that can integrate data and also run notebooks
• Be able to develop notebooks that can be used to transform data
• Be able to load the data into a massively parallel processing data store for analysis
Which of the following would you use as the service to develop notebooks to transform the data?
Question 32 of 60
1 point(s)
A company wants to engineer a solution. The solution would have the following requirements.
• Ingest data from an on-premises SQL Server
• Create pipelines that can integrate data and also run notebooks
• Be able to develop notebooks that can be used to transform data
• Be able to load the data into a massively parallel processing data store for analysis
Which of the following would you use as the service to run notebooks? Select 2 options.
Question 33 of 60
1 point(s)
A company wants to engineer a solution. The solution would have the following requirements.
• Ingest data from an on-premises SQL Server
• Create pipelines that can integrate data and also run notebooks
• Be able to develop notebooks that can be used to transform data
• Be able to load the data into a massively parallel processing data store for analysis
Which of the following would you use as the service to load the data?
Question 34 of 60
1 point(s)
A company wants to engineer a solution. The solution would have the following requirements.
• Ingest data from an on-premises SQL Server
• Create pipelines that can integrate data and also run notebooks
• Be able to develop notebooks that can be used to transform data
• Be able to load the data into a massively parallel processing data store for analysis
Which of the following would you use as the service to store the transformed data?
Question 35 of 60
1 point(s)
Your company currently has a solution in place. This solution consists of streaming data being sent to Azure Event Hubs. The data is then stored in Azure Blob storage. The data contains social media posts.
You have to count the number of times the keyword IPSpecialist is mentioned in each post every 30 seconds. The data then needs to be available to Microsoft Power BI in near real-time.
You have to implement the new requirement for the solution.
You decide to use Azure Databricks to create a Scala notebook. You then create a structured streaming job that connects to the event hub and counts the keyword mentions in each post. The counts are written to a Delta table, and the data is then consumed in Power BI by using DirectQuery mode.
Would this fulfill the requirement?
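A rough PySpark sketch of the structured streaming approach described above, assuming a Databricks cluster with the azure-event-hubs-spark connector attached; the connection string, keyword handling, and Delta paths are placeholders.

```python
# Rough sketch: count keyword mentions per 30-second window, write to Delta.
# Runs in a Databricks notebook where `spark` and `sc` already exist.
from pyspark.sql.functions import col, size, split, window

conn_str = "<event-hubs-connection-string>"
eh_conf = {
    "eventhubs.connectionString":
        sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn_str)
}

posts = (spark.readStream.format("eventhubs").options(**eh_conf).load()
         .selectExpr("CAST(body AS STRING) AS post", "enqueuedTime"))

# Occurrences of the keyword per post, aggregated into 30-second windows
mentions = (posts
            .withColumn("mentions", size(split(col("post"), "IPSpecialist")) - 1)
            .groupBy(window(col("enqueuedTime"), "30 seconds"))
            .sum("mentions"))

(mentions.writeStream.format("delta")
 .outputMode("complete")
 .option("checkpointLocation", "/delta/keywords/_checkpoints")
 .start("/delta/keyword_counts"))
```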
Question 36 of 60
1 point(s)
Your company currently has a solution in place. This solution consists of streaming data being sent to Azure Event Hubs. The data is then stored in Azure Blob storage. The data contains social media posts.
You have to count the number of times the keyword IPSpecialist is mentioned in each post every 30 seconds. The data then needs to be available to Microsoft Power BI in near real-time.
You have to implement the new requirement for the solution.
You decide to create an Azure Stream Analytics job. The job would use Azure Event Hubs as the input stream, count the keywords, and send the results to an Azure SQL database. The data is then consumed in Power BI by using DirectQuery mode.
Would this fulfill the requirement?
Question 37 of 60
1 point(s)
Your company currently has a solution in place. This solution consists of streaming data being sent to Azure Event Hubs. The data is then stored in Azure Blob storage. The data contains social media posts.
You have to count the number of times the keyword IPSpecialist is mentioned in each post every 30 seconds. The data then needs to be available to Microsoft Power BI in near real-time.
You have to implement the new requirement for the solution.
You plan to use Azure Data Factory and an event trigger to detect when new blobs are added to the storage account. You then filter the data in Azure Data Factory and send it to an Azure SQL database. The data is then consumed in Power BI by using DirectQuery mode.
Would this fulfill the requirement?
Question 38 of 60
1 point(s)
Case Study
Overview
A company is responsible for designing a new data engineering solution. The solution would be used by a media company that has offices in the following locations:
• New York
• Manchester
• Singapore
• Melbourne
Current Environment
• The current solution stores millions of images on a physical server that is located in the New York office
• Around 2 TB of images are added every day
• Currently, the images are not organized properly
• This makes it difficult to search for images
• The images need to have object and color tags generated
• The tags are stored in a document database that is queried by using SQL
• The New York office also has a Microsoft SQL Server database that stores customer data
Proposed Environment
• All of the images and any customer data need to be transferred to Azure
• On-premises servers need to be decommissioned
• A proper analytical processing solution must be in place for customer-related data
• There should be a proper image object and color tagging solution in place
• All expenses must be minimized
• The tagging data must be uploaded from the New York office location
• Tagging data must be replicated to regions where other offices are located
• The customer data must be analyzed using Spark clusters
• The cluster should allow for parallel processing of data
• Power BI must be used to visualize transformed customer data
• There should be a facility to back up data if disaster recovery is required
• All the data in the cloud must be encrypted at rest and in transit
• Images must be replicated globally
You have to choose the right service for storing image tagging data. Which of the following should be used to fulfill this requirement?
Question 39 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to ensure that the following requirement is met.
“A proper analytical processing solution must be in place for customer-related data.”
Which of the following would you use for this requirement?
Question 40 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to meet the storage requirements for the image tagging data. Which of the following would you configure for the data store in the Manchester location?
Question 41 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to meet the storage requirements for the image tagging data. Which of the following would you configure for the data store in the Singapore location?
Question 42 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to meet the storage requirements for the image tagging data. Which of the following would you configure for the data store in the Melbourne location?
Question 43 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You have to ensure that the security requirements are met for the tagging data. Which of the following would you implement for this requirement?
Question 44 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You have to ensure that the security requirements are met for the customer data. Which of the following would you implement for this requirement?
Question 45 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to comply with the following requirement for the customer data.
“There should be a facility to back up data if disaster recovery is required.”
Which of the following would you implement for this requirement?
Question 46 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to decide on a storage solution for the images. Which of the following would you choose for this requirement?
Question 47 of 60
1 point(s)
Case Study (refer to the case study in Question 38)
You need to allow users from the on-premises network to access the Azure SQL database. Which of the following would you configure for this requirement?
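For reference, server-level firewall rules can be managed from the master database. A minimal sketch using pyodbc and the sp_set_firewall_rule procedure; the server name, credentials, and the 203.0.113.0/24 documentation range stand in for the office's real public IPs.

```python
# Minimal sketch: open the server-level firewall for the on-premises range.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=mediaserver.database.windows.net;Database=master;"
    "UID=sqladmin;PWD=<password>;",
    autocommit=True,
)
conn.cursor().execute(
    "EXEC sp_set_firewall_rule @name = N'OnPremOffice', "
    "@start_ip_address = '203.0.113.0', @end_ip_address = '203.0.113.255';"
)
```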
Question 48 of 60
1 point(s)
A company wants to set up a set of data stores on Azure. Each data store has different requirements.
• Datastore1: This data store must be able to store JSON-related data. It must also have the ability to replicate data to multiple regions.
• Datastore2: This would behave as an OLTP store.
• Datastore3: On this data store, one should be able to run queries across petabytes of data.
• Datastore4: This store should be able to ingest large numbers of images per day.
Which of the following technologies would you use for Datastore1?
Question 49 of 60
1 point(s)
A company wants to set up a set of data stores on Azure. Each data store has different requirements.
• Datastore1: This data store must be able to store JSON-related data. It must also have the ability to replicate data to multiple regions.
• Datastore2: This would behave as an OLTP store.
• Datastore3: On this data store, one should be able to run queries across petabytes of data.
• Datastore4: This store should be able to ingest large numbers of images per day.
Which of the following technologies would you use for Datastore2?
Question 50 of 60
1 point(s)
A company wants to set up a set of data stores on Azure. Each data store has different requirements.
• Datastore1: This data store must be able to store JSON-related data. It must also have the ability to replicate data to multiple regions.
• Datastore2: This would behave as an OLTP store.
• Datastore3: On this data store, one should be able to run queries across petabytes of data.
• Datastore4: This store should be able to ingest large numbers of images per day.
Which of the following technologies would you use for Datastore3?
Question 51 of 60
1 point(s)
A company wants to set up a set of data stores on Azure. Each data store has different requirements.
• Datastore1: This data store must be able to store JSON-related data. It must also have the ability to replicate data to multiple regions.
• Datastore2: This would behave as an OLTP store.
• Datastore3: On this data store, one should be able to run queries across petabytes of data.
• Datastore4: This store should be able to ingest large numbers of images per day.
Which of the following technologies would you use for Datastore4?
Question 52 of 60
1 point(s)
A company plans to use the Azure Databricks service. They want to create persistent clusters that would support auto-scaling for analytical processes.
The company decides to create a Standard cluster.
Would this fulfill the requirement?
Question 53 of 60
1 point(s)
A company plans to use the Azure Databricks service. They want to create persistent clusters that would support auto-scaling for analytical processes.
The company decides to create a High Concurrency cluster.
Would this fulfill the requirement?
Question 54 of 60
1 point(s)
A company plans to use the Azure Databricks service. They want to create persistent clusters that would support auto-scaling for analytical processes.
The company decides to create a Premium cluster.
Would this fulfill the requirement?
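For context on the cluster-mode options above, autoscaling is a property of the cluster definition itself. A minimal sketch against the Databricks Clusters REST API (POST /api/2.0/clusters/create); the workspace URL, token, runtime version, and node type are placeholders.

```python
# Minimal sketch: create a cluster that autoscales between 2 and 8 workers.
import requests

resp = requests.post(
    "https://<workspace>.azuredatabricks.net/api/2.0/clusters/create",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={
        "cluster_name": "analytics",
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "autoscale": {"min_workers": 2, "max_workers": 8},
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```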
Question 55 of 60
1 point(s)
A company is designing a complete end-to-end solution for data analytics.
The overall architecture is given below.
• Azure Event Hubs would be used to ingest data from multiple devices.
• The data needs to be processed by Service A and sent to a relational store service, Service B.
• Every month, an ETL service (Service C) needs to run and store the output data in a columnar data store hosted by Service D.
Which of the following would you use as Service A?
Question 56 of 60
1 point(s)
A company is designing a complete end-to-end solution for data analytics.
The overall architecture is given below.
• Azure Event Hubs would be used to ingest data from multiple devices.
• The data needs to be processed by Service A and sent to a relational store service, Service B.
• Every month, an ETL service (Service C) needs to run and store the output data in a columnar data store hosted by Service D.
Which of the following would you use as Service B?
Question 57 of 60
1 point(s)
A company is designing a complete end-to-end solution for data analytics.
The overall architecture is given below.
• Azure Event Hubs would be used to ingest data from multiple devices.
• The data needs to be processed by Service A and sent to a relational store service, Service B.
• Every month, an ETL service (Service C) needs to run and store the output data in a columnar data store hosted by Service D.
Which of the following would you use as Service C?
Question 58 of 60
1 point(s)
A company is designing a complete end-to-end solution for data analytics.
The overall architecture is given below.
• Azure Event Hubs would be used to ingest data from multiple devices.
• The data needs to be processed by Service A and sent to a relational store service, Service B.
• Every month, an ETL service (Service C) needs to run and store the output data in a columnar data store hosted by Service D.
Which of the following would you use as Service D?
Question 59 of 60
1 point(s)
A company is planning to set up an Azure SQL database. The database contains a table that will store sensitive Personally Identifiable Information (PII) data. The company wants the ability to track and store all the queries that are executed against the PII data. The company database administrator decides to add classifications to the columns that contain sensitive data. Auditing is also turned on for the database.
Would this fulfill the requirement?
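To make the classification half of this approach concrete, sensitivity labels are attached with T-SQL. A minimal pyodbc sketch; the server, table, and column names are placeholders, and auditing would still be enabled separately on the database or server.

```python
# Minimal sketch: label a PII column so audit logs record access to it.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver.database.windows.net;Database=appdb;"
    "UID=sqladmin;PWD=<password>;",
    autocommit=True,
)
conn.cursor().execute(
    "ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email "
    "WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');"
)
```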
Question 60 of 60
1 point(s)
A company is planning to set up an Azure SQL database. The database contains a table that will store sensitive Personally Identifiable Information (PII) data. The company wants the ability to track and store all the queries that are executed against the PII data. The company database administrator decides to create a SELECT trigger on the table in the database. This trigger will write data to a new table in the database. A stored procedure would then be executed to look up column classifications and perform joins.
Would this fulfill the requirement?