Question 1 of 60
You want to make use of Azure Stream Analytics. The Stream Analytics instance will be receiving data from IoT-enabled devices. You need to send the data to Cosmos DB.
Which of the following would you need to set in Azure Stream Analytics?
Question 2 of 60
You want to make use of Azure Stream Analytics. The Stream Analytics instance will be receiving data from IoT-enabled devices. You need to send the data to Cosmos DB.
Which of the following needs to be created in Cosmos DB beforehand?
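As background for this question, note that a Stream Analytics output for Cosmos DB writes into an existing database and container, so these are typically provisioned up front. Below is a minimal sketch using the azure-cosmos Python SDK; the account URL, key, and the database, container, and partition key names are placeholder assumptions.

from azure.cosmos import CosmosClient, PartitionKey

# Connect to an existing Cosmos DB (SQL API) account
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<account-key>")

# Create the database and container that the Stream Analytics output will target
database = client.create_database_if_not_exists(id="iotdb")
container = database.create_container_if_not_exists(
    id="telemetry",
    partition_key=PartitionKey(path="/deviceId"),  # partition key chosen for illustration
)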
Question 3 of 60
A company wants to design a data processing system. Data would be ingested via Kafka streams into Azure Data Lake Storage. The data needs to be processed by an Apache Spark-based analytics service.
The company decides to use Azure SQL Data Warehouse as the analytics service.
Would this fulfill the requirement?
Question 4 of 60
(Refer to the scenario in Question 3.)
The company decides to use Azure Databricks as the analytics service.
Would this fulfill the requirement?
Question 5 of 60
(Refer to the scenario in Question 3.)
The company decides to use Azure Stream Analytics as the analytics service.
Would this fulfill the requirement?
Question 6 of 60
(Refer to the scenario in Question 3.)
The company decides to use Azure Analysis Services as the analytics service.
Would this fulfill the requirement?
Question 7 of 60
A company wants to deploy a Cosmos DB account. The data within the account will be used by data engineers situated across the world. You need to ensure that data engineers worldwide can access the data for a read operation with the least amount of latency. You also need to ensure that costs are minimized. Which of the following would you implement for this requirement?
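As background, a multi-region Cosmos DB account lets client applications read from the nearest replica. The sketch below uses the azure-cosmos Python SDK and assumes its preferred_locations keyword; the endpoint, key, and region names are placeholders.

from azure.cosmos import CosmosClient

# Reads are served from the first available region in this preference list,
# so engineers in each office would list their nearest region first.
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    credential="<account-key>",
    preferred_locations=["Southeast Asia", "West Europe", "East US"],
)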
Question 8 of 60
You have an Azure Data Lake Storage Gen 2 account. You have to grant permissions to a specific application for a limited time period. Which of the following can you use for this requirement?
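As background, a shared access signature (SAS) grants time-bounded access to storage without sharing the account keys. Below is a minimal sketch with the azure-storage-blob Python SDK; the account, container, and key values are placeholders, and it assumes the ADLS Gen2 account is addressed through its Blob endpoint.

from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

# SAS token valid for two hours, granting read/list access only
sas_token = generate_container_sas(
    account_name="mydatalake",
    container_name="raw",
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=2),
)
print(sas_token)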
Question 9 of 60
A company is planning to set up an Azure SQL database. The database contains tables and columns that hold sensitive data. The company wants to have a solution in place that would accomplish the following requirements.
• Ensure the database is encrypted at rest.
• Ensure that when sensitive data is accessed from the columns, it is encrypted in transit.
Which of the following would you use for the following requirement?
“Ensure that when sensitive data is accessed from the columns, it is encrypted in transit”
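As background, client-side column encryption (Always Encrypted) keeps sensitive column values encrypted in transit, with decryption happening only in the client driver. A minimal sketch using pyodbc and the Microsoft ODBC driver's ColumnEncryption setting is shown below; the server, database, credentials, and table are placeholders.

import pyodbc

# ColumnEncryption=Enabled makes the driver transparently encrypt and
# decrypt values for columns protected with Always Encrypted
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver.database.windows.net;Database=mydb;"
    "Uid=appuser;Pwd=<password>;"
    "Encrypt=yes;ColumnEncryption=Enabled;"
)
row = conn.cursor().execute("SELECT TOP 1 SSN FROM dbo.Customers").fetchone()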
Question 10 of 60
You have an Azure SQL database. You need to provide an Azure AD group read access to the database. Which of the following would you use to provide access?
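As background, read access for an Azure AD group is commonly granted by creating a contained database user for the group and adding it to the db_datareader role. A minimal T-SQL sketch executed through pyodbc follows; the connection string and group name are placeholders.

import pyodbc

conn = pyodbc.connect("<azure-ad-admin-connection-string>", autocommit=True)
cur = conn.cursor()
# Create a contained user mapped to the Azure AD group, then grant read access
cur.execute("CREATE USER [DataReadersGroup] FROM EXTERNAL PROVIDER;")
cur.execute("ALTER ROLE db_datareader ADD MEMBER [DataReadersGroup];")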
Question 11 of 60
You need to design a solution that would use Azure Functions. The function would be used to process data that is uploaded to Azure Blob storage. You have to ensure that the following requirements are met.
• The solution must have support for 1 million blobs
• The solution must scale automatically
• Costs must be minimized
Which of the following would you recommend for this requirement?
Question 12 of 60
A company wants to design a solution that would support the ingestion and analysis of log files in real-time. Which of the following would you implement for this requirement? Choose 2 answers from the options given below.
Question 13 of 60
A company is planning to design a solution in Azure. The solution would be based on the Kappa architecture as shown below.
Which of the following could be used for Layer 2?
Question 14 of 60
A company wants to make use of an Azure Databricks interactive cluster. The cluster would be configured for auto-termination. The company wants to ensure that the cluster configuration is retained indefinitely after the cluster is terminated. The company also wants to ensure that costs are minimized when implementing the solution. Which of the following would you implement for this requirement?
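As background, Databricks retains the configuration of terminated clusters only for a limited period unless a cluster is pinned. The sketch below calls the Clusters REST API with the requests library; the workspace URL, token, and cluster ID are placeholders.

import requests

workspace = "https://adb-1234567890123456.7.azuredatabricks.net"
headers = {"Authorization": "Bearer <personal-access-token>"}

# Pinning keeps the cluster configuration after the cluster terminates
resp = requests.post(
    f"{workspace}/api/2.0/clusters/pin",
    headers=headers,
    json={"cluster_id": "<cluster-id>"},
)
resp.raise_for_status()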
Question 15 of 60
A company wants to use an Azure Data Lake Storage account to store CSV files. These files will be organized into department-wise folders. The company wants to ensure that data is configured in such a way that users will only see the files in their respective department folders.
They decide to disable hierarchical namespace and use access control lists.
Would this fulfill the requirement?
Question 16 of 60
(Refer to the scenario in Question 15.)
They decide to enable hierarchical namespace and use RBAC.
Would this fulfill the requirement?
Question 17 of 60
(Refer to the scenario in Question 15.)
They decide to disable the hierarchical namespace and use RBAC.
Would this fulfill the requirement?
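As background for the three variants above, POSIX-style access control lists in Data Lake Storage Gen2 are available only when the hierarchical namespace is enabled, whereas RBAC roles apply at the account or container scope rather than per folder. Below is a sketch of granting folder-level access with the azure-storage-file-datalake Python SDK; the account, file system, directory, and Azure AD object ID are placeholders.

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential="<account-key>",
)
directory = service.get_file_system_client("departments").get_directory_client("finance")

# Merge in an ACL entry giving one user read/execute on the finance folder
directory.update_access_control_recursive(acl="user:<object-id>:r-x")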
Question 18 of 60
A company wants to implement a big data store. Below are the key requirements for the data store.
• It should have support for a hierarchical file system
• It should be optimized for parallel analytic workloads.
• It should provide unlimited account sizes.
Which of the following would you implement for this requirement?
Question 19 of 60
A company wants to implement an Azure Cosmos DB database that would support data storage for vertices and edges. Which of the following would you use as the underlying Cosmos DB API?
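As background, vertices and edges describe a graph data model. Below is a minimal sketch using the gremlinpython driver against a Cosmos DB Gremlin endpoint; the account, database, graph, key, and the partition key property 'pk' are placeholder assumptions.

from gremlin_python.driver import client, serializer

g = client.Client(
    "wss://myaccount.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/graphdb/colls/people",
    password="<account-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)
# Add two vertices and an edge between them
g.submit("g.addV('person').property('id', 'alice').property('pk', 'p1')").all().result()
g.submit("g.addV('person').property('id', 'bob').property('pk', 'p1')").all().result()
g.submit("g.V('alice').addE('knows').to(g.V('bob'))").all().result()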
Question 20 of 60
A company wants to implement a data store that would meet the following requirements.
• Be able to receive thousands of files per minute
• The files would be in different file formats – JSON, text, and CSV
• The files would eventually be processed, transformed, and loaded into an Azure SQL data warehouse
Which of the following would you use as the underlying data store?
Question 21 of 60
A company wants to migrate data from an on-premises MongoDB instance to Azure Cosmos DB (MongoDB API). During the testing phase, they discovered that the migration process is taking too much time. Which of the following can they implement to reduce the migration time? Choose 2 answers from the options given below.
Question 22 of 60
A company wants to deploy a set of databases using the Azure SQL Database service. They want to organize the databases into separate groups based on database usage. They also want to have the ability to define the maximum limit on the resources that would be available for each group. Which of the following could be recommended to fulfill this requirement?
Question 23 of 60
A company wants to create an Azure storage account. Below are the requirements for the objects in the storage account.
• Storage costs should be minimized
• The storage account will be used to hold objects which are infrequently accessed
• The data in the storage account will be stored for at least 30 days
• Data availability must be guaranteed at an SLA of 99%
Which of the following could be used as the underlying storage tier?
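As background, blob access tiers can be set per object (or as the account default), and the cool tier targets infrequently accessed data kept for at least 30 days. A minimal sketch with the azure-storage-blob Python SDK follows; the account, container, and file names are placeholders.

from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://mystorage.blob.core.windows.net",
    container_name="archive",
    blob_name="report.csv",
    credential="<account-key>",
)
with open("report.csv", "rb") as data:
    # Land the object directly in the cool tier
    blob.upload_blob(data, standard_blob_tier="Cool")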
Question 24 of 60
A company wants to start using the Azure Databricks service. They want to ensure that the Databricks clusters remain available even at the time of regional Azure datacenter outages. Which of the following could be used as the redundancy type to fulfill this requirement?
Question 25 of 60
A company wants to use the Azure SQL database service. Business apps will be accessing the database. The application data must be available in the event of a region-wide outage. Below are the other key requirements.
• Data must be available in the secondary region if the primary region goes down
• The storage and compute layers for the SQL database must be integrated and replicated together
Which of the following would you use as the Service tier for the database?
Question 26 of 60
(Refer to the scenario in Question 25.)
Which of the following would you use as the redundancy type?
Question 27 of 60
A company is planning to use the Azure SQL data warehouse service. Data would be uploaded to the data warehouse every week. Every time the data is uploaded, checks would be made to ensure that the data is not corrupted. If the data is corrupted, the uploaded data has to be removed. The upload process and data corruption check process must not impact the processes running against the warehouse.
The company decides to configure database-level auditing and set a retention period as part of the implementation process.
Would this meet the requirement?
Question 28 of 60
(Refer to the scenario in Question 27.)
The company decides to create user-defined restore points before the data is uploaded, and then delete the restore points after the data corruption checks are complete.
Would this meet the requirement?
Question 29 of 60
(Refer to the scenario in Question 27.)
The company decides to configure transactions and then perform a rollback if data corruption is detected.
Would this meet the requirement?
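As background, the sketch below shows the transaction-and-rollback pattern with pyodbc; the connection string, tables, and the corruption check itself are placeholders. Note that a long-running open transaction can hold locks, which is worth weighing against the requirement that other warehouse processes remain unaffected.

import pyodbc

conn = pyodbc.connect("<warehouse-connection-string>", autocommit=False)
cur = conn.cursor()
try:
    cur.execute("INSERT INTO dbo.Sales SELECT * FROM ext.WeeklyUpload;")
    # Placeholder corruption check: reject the batch if any row fails validation
    bad_rows = cur.execute(
        "SELECT COUNT(*) FROM dbo.Sales WHERE Amount IS NULL;"
    ).fetchone()[0]
    if bad_rows:
        conn.rollback()  # discard the corrupted upload
    else:
        conn.commit()
finally:
    conn.close()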
Question 30 of 60
A company wants to engineer a solution. The solution would have the following requirements.
• Ingest data from an on-premises SQL Server instance
• Create pipelines that can integrate data and also run notebooks
• Be able to develop notebooks that can be used to transform data
• Be able to load the data into a massively parallel processing (MPP) data store for analysis
Which of the following would you use as the service to integrate the on-premise data onto the cloud?
Question 31 of 60
(Refer to the requirements in Question 30.)
Which of the following would you use as the service to develop notebooks to transform the data?
Question 32 of 60
(Refer to the requirements in Question 30.)
Which of the following would you use as the service to run notebooks? Select 2 options.
Question 33 of 60
(Refer to the requirements in Question 30.)
Which of the following would you use as the service to load the data?
Question 34 of 60
(Refer to the requirements in Question 30.)
Which of the following would you use as the service to store the transformed data?
Question 35 of 60
Your company currently has a solution in place. This solution consists of streaming data being sent to Azure Event Hubs. The data is then stored in Azure Blob storage. The data contains social media posts.
You have to count the number of times the keyword IPSpecialist is mentioned in each post every 30 seconds. The data then needs to be available to Microsoft Power BI in near real-time.
You have to implement the new requirement for the solution.
You decide to use Azure Databricks to create a Scala notebook. You then create a structured streaming job to connect to the event hub. This would count the number of keywords in the posts. The count is then written to a Delta table. You then consume the data in Power BI by using DirectQuery mode.
Would this fulfill the requirement?
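As background, below is a sketch of the structured streaming logic described above, written in PySpark for illustration (the scenario uses a Scala notebook, but the logic is the same) and assumed to run in a Databricks notebook where spark is predefined. The Event Hubs connection options, keyword matching, and paths are placeholder assumptions, and the azure-event-hubs-spark connector is assumed to be attached to the cluster.

from pyspark.sql import functions as F

stream = (spark.readStream
          .format("eventhubs")
          .options(**{"eventhubs.connectionString": "<encrypted-connection-string>"})
          .load())

posts = stream.select(
    F.col("body").cast("string").alias("post"),
    F.col("enqueuedTime").alias("ts"),
)

# Count keyword occurrences per post, aggregated over 30-second windows
counts = (posts
          .withColumn("mentions",
                      F.size(F.split(F.lower(F.col("post")), "ipspecialist")) - 1)
          .groupBy(F.window("ts", "30 seconds"))
          .agg(F.sum("mentions").alias("mention_count")))

(counts.writeStream
       .format("delta")
       .outputMode("complete")
       .option("checkpointLocation", "/delta/checkpoints/keywords")
       .start("/delta/keyword_counts"))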
Question 36 of 60
(Refer to the scenario in Question 35.)
You decide to create an Azure Stream Analytics job. This would use Azure Event Hubs as the input stream. This would count the keywords and send the data to an Azure SQL Database. The data is then consumed in Power BI by using DirectQuery mode.
Would this fulfill the requirement?
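As background, a sketch of the kind of Stream Analytics query such a job might use is shown below as a Python string constant; the input and output names, timestamp field, column names, and the keyword filter are placeholder assumptions.

# Tumbling windows split the stream into contiguous, non-overlapping
# 30-second intervals; the count is emitted once per window.
ASA_QUERY = """
SELECT
    System.Timestamp AS WindowEnd,
    COUNT(*) AS KeywordMentions
INTO sqloutput
FROM socialhub TIMESTAMP BY EventEnqueuedUtcTime
WHERE PostText LIKE '%IPSpecialist%'
GROUP BY TumblingWindow(second, 30)
"""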
Question 37 of 60
(Refer to the scenario in Question 35.)
You plan to use Azure Data Factory and an event trigger to detect when new blobs are added to the storage account. You then filter the data in Azure Data Factory and send the data to an Azure SQL Database. The data is then consumed in Power BI by using DirectQuery mode.
Would this fulfill the requirement?
Question 38 of 60
Case Study
Overview
A company is responsible for designing a new data engineering solution. The solution would be used by a media company that has offices in the following locations.
• New York
• Manchester
• Singapore
• Melbourne
Current Environment
• The current solution stores millions of images on a physical server that is located in the New York office
• Around 2 TB of images are added every day
• Currently the images are not being organized properly
• It becomes difficult to search for images
• The images need to have object and color tags generated
• The tags are stored in a document database that is queried by using SQL
• The New York office also has a Microsoft SQL Server database that stores customer data
Proposed Environment
• All of the images and any customer data need to be transferred to Azure
• On-premises servers need to be decommissioned
• A proper analytical processing solution must be in place for customer-related data
• There should be a proper image object and color tagging solution in place
• All expenses must be minimized
• The tagging data must be uploaded from the New York office location
• Tagging data must be replicated to the regions where the other offices are located
• The customer data must be analyzed using Spark clusters
• The cluster should allow for parallel processing of data
• Power BI must be used to visualize transformed customer data
• There should be a facility to back up data if disaster recovery is required
• All the data in the cloud must be encrypted at rest and in transit
• Images must be replicated globally
You have to choose the right service for storing image tagging data. Which of the following should be used to fulfill this requirement?
Question 39 of 60
(Refer to the case study in Question 38.)
You need to ensure that the following requirement is met.
“A proper analytical processing solution must be in place for customer-related data.”
Which of the following would you use for this requirement?
Question 40 of 60
(Refer to the case study in Question 38.)
You need to meet the storage requirements for the image tagging data. Which of the following would you configure for the data store in the Manchester location?
Question 41 of 60
(Refer to the case study in Question 38.)
You need to meet the storage requirements for the image tagging data. Which of the following would you configure for the data store in the Singapore location?
Question 42 of 60
(Refer to the case study in Question 38.)
You need to meet the storage requirements for the image tagging data. Which of the following would you configure for the data store in the Melbourne location?
Question 43 of 60
(Refer to the case study in Question 38.)
You have to ensure that the security requirements are met for the tagging data. Which of the following would you implement for this requirement?
Question 44 of 60
(Refer to the case study in Question 38.)
You have to ensure that the security requirements are met for the customer data. Which of the following would you implement for this requirement?
Question 45 of 60
(Refer to the case study in Question 38.)
You need to comply with the following requirement for the customer data.
“There should be a facility to back up data if disaster recovery is required.”
Which of the following would you implement for this requirement?
Question 46 of 60
(Refer to the case study in Question 38.)
You need to decide on a storage solution for the images. Which of the following would you choose for this requirement?
Question 47 of 60
(Refer to the case study in Question 38.)
You need to allow users from the on-premise network to access the Azure SQL database. Which of the following would you set for this requirement?
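As background, access from an on-premises network to an Azure SQL logical server is commonly opened with a server-level firewall rule covering the office's public IP range. A minimal sketch using the sp_set_firewall_rule procedure in the master database via pyodbc follows; the rule name and IP range are placeholders.

import pyodbc

# Connect to the logical server's master database as a server admin
conn = pyodbc.connect("<master-db-connection-string>", autocommit=True)
conn.cursor().execute(
    "EXEC sp_set_firewall_rule N'NewYorkOffice', '203.0.113.0', '203.0.113.255';"
)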
Question 48 of 60
A company wants to set up a set of data stores on Azure. Each data store has different requirements.
• Datastore1: This data store must be able to store JSON data. It must also have the ability to replicate data to multiple regions.
• Datastore2: This would behave as an OLTP store.
• Datastore3: On this data store, one should be able to run queries across petabytes of data.
• Datastore4: This store should be able to ingest a large number of images per day.
Which of the following technologies would you use for Datastore1?
Question 49 of 60
(Refer to the data store requirements in Question 48.)
Which of the following technologies would you use for Datastore2?
Question 50 of 60
(Refer to the data store requirements in Question 48.)
Which of the following technologies would you use for Datastore3?
Question 51 of 60
(Refer to the data store requirements in Question 48.)
Which of the following technologies would you use for Datastore4?
Question 52 of 60
A company plans to use the Azure Databricks service. They want to create persistent clusters that would support auto-scaling for analytical processes.
The company decides to create a Standard cluster.
Would this fulfill the requirement?
Question 53 of 60
(Refer to the scenario in Question 52.)
The company decides to create a High concurrency cluster.
Would this fulfill the requirement?
Question 54 of 60
(Refer to the scenario in Question 52.)
The company decides to create a Premium cluster.
Would this fulfill the requirement?
Question 55 of 60
A company is designing a complete end-to-end solution for data analytics.
The overall architecture is given below.
• Azure Event Hubs would be used to ingest data from multiple devices.
• The data needs to be processed by Service A and sent to a relational store service, Service B.
• Every month, an ETL service (Service C) needs to run and store the output data in a columnar data store hosted by Service D.
Which of the following would you use as Service A?
Question 56 of 60
(Refer to the architecture in Question 55.)
Which of the following would you use as Service B?
Question 57 of 60
(Refer to the architecture in Question 55.)
Which of the following would you use as Service C?
Question 58 of 60
(Refer to the architecture in Question 55.)
Which of the following would you use as Service D?
Question 59 of 60
A company is planning to set up an Azure SQL database. The database contains a table that will store sensitive Personally Identifiable Information (PII) data. The company wants to have the ability to track and store all the queries that are executed against the PII data. The company's database administrator decides to add classifications to the columns that contain sensitive data. Auditing is also turned on for the database.
Would this fulfill the requirement?
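As background, column classifications in Azure SQL Database are applied with the ADD SENSITIVITY CLASSIFICATION statement, and auditing can then record which classified columns each audited query touched. A minimal T-SQL sketch executed via pyodbc follows; the table, column, and label names are placeholders.

import pyodbc

conn = pyodbc.connect("<database-connection-string>", autocommit=True)
conn.cursor().execute(
    "ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email "
    "WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');"
)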
Question 60 of 60
(Refer to the scenario in Question 59.) This time, the company's database administrator decides to create a SELECT trigger on the table in the database. This trigger will write data to a new table in the database. A stored procedure would then be executed to look up column classifications and perform joins.
Would this fulfill the requirement?