A company is planning to create an Azure SQL database to support a mission-critical application. The application needs to be highly available and not have any performance degradation during maintenance windows. Which of the following technologies can be used to implement this solution? Choose 3 answers from the options given below.
A company has an Azure SQL data warehouse. They want to use PolyBase to retrieve data from an Azure Blob storage account and ingest it into the Azure SQL data warehouse. The files are stored in Parquet format. The data needs to be loaded into a table called ipslab_sales. Which of the following actions need to be performed to implement this requirement? Choose 4 answers from the options given below.
A company wants to create an in-memory batch processing solution. They want to provision an HDInsight cluster for the batch processing solution. You need to complete the PowerShell snippet below to implement the solution:

    [Area 1] -Name $ipslabClusterName -Context $defaultStorageContext
    $ipslabClusterSizeInNodes = "4"
    $ipslabClusterVersion = "3.6"
    $ipslabClusterType = [Area 2]
    $ipslabClusterOS = "Linux"
    [Area 3] -ResourceGroupName "ipslab-rg" `
        -ClusterName $ipslabClusterName `
        -Location $location `
        -ClusterSizeInNodes $ipslabClusterSizeInNodes `
        -ClusterType $ipslabClusterType `
        -OSType $ipslabClusterOS `
        -Version $ipslabClusterVersion `
        -HttpCredential $ipslabHttpCredential `
        -DefaultStorageAccountName "$defaultStorageAccountName.blob.core.windows.net" `
        -DefaultStorageAccountKey $defaultStorageAccountKey `
        -DefaultStorageContainer $ipslabClusterName `
        -SshCredential $ipslabSshCredentials

Which of the following would go into Area 1?
You have to deploy resources on Azure HDInsight for a batch processing job. The batch processing must run daily and must scale to minimize costs. You must also be able to monitor cluster performance. You need to decide on a tool that will monitor the clusters and provide suggestions on how to scale. You decide to download the Azure HDInsight cluster logs by using Azure PowerShell. Would this fulfill the requirement?
A company has a storage account named ipslabstore2020. They want to ensure that they can recover a blob object if it was deleted in the last 10 days. Which of the following would they implement for this requirement?
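(Illustrative sketch, not part of the question: blob soft delete is the feature usually associated with this kind of recovery window. A minimal Az PowerShell example, assuming a hypothetical resource group named ipslab-rg.)

    # Get the context of the existing storage account (resource group name is an assumption)
    $account = Get-AzStorageAccount -ResourceGroupName "ipslab-rg" -Name "ipslabstore2020"

    # Enable blob soft delete with a 10-day retention window so that a deleted
    # blob can still be undeleted for up to 10 days
    Enable-AzStorageDeleteRetentionPolicy -RetentionDays 10 -Context $account.Context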
Your team has created a new Azure Data Factory environment. You have to analyze the pipeline executions. Trends need to be identified in execution duration over the past 30 days. You need to create a solution that would ensure that data can be queried from Azure Log Analytics. Which of the following would you choose as the Log type when setting up the diagnostic setting for Azure Data Factory?
Your team has created a new Azure Data Factory environment. You have to analyze the pipeline executions. Trends need to be identified in execution duration over the past 30 days. You need to create a solution that would ensure that data can be queried from Azure Log Analytics. Which of the following would you use as the storage location when setting up the diagnostic setting for Azure Data Factory?
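(Illustrative sketch, not part of the question: one way to route Data Factory run logs to a Log Analytics workspace with Az PowerShell. The factory, workspace, and resource group names below are assumptions.)

    # Look up the resource IDs of the data factory and the Log Analytics workspace
    $adf = Get-AzResource -ResourceGroupName "ipslab-rg" -Name "ipslab-adf" `
        -ResourceType "Microsoft.DataFactory/factories"
    $workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "ipslab-rg" -Name "ipslab-logs"

    # Send pipeline, activity, and trigger run logs to the workspace so that the
    # last 30 days of execution history can be queried in Azure Log Analytics
    Set-AzDiagnosticSetting -Name "adf-to-loganalytics" `
        -ResourceId $adf.ResourceId `
        -WorkspaceId $workspace.ResourceId `
        -Category "PipelineRuns","ActivityRuns","TriggerRuns" `
        -Enabled $true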
You have to develop a solution that will make use of Azure Stream Analytics. The solution will perform data streaming and will also need a reference data store. Which of the following could be used as the input type for the reference data store?
You have to develop a solution using Azure Stream Analytics. The stream will be used to receive Twitter data from Azure Event Hubs. The output would be sent to an Azure Blob storage account. The key requirement is to output the number of tweets during the last 3 minutes every 3 minutes. Each tweet must be counted only once. Which of the following would you use as the windowing function?
A company currently has an Azure SQL database. The company wants to create an offline exported copy of the database. This is so that users can work with the data offline when they do not have any Internet connection on their laptops. Which of the following are ways that can be used to create the exported copy? Choose 3 answers from the options given below.
A company has an Azure Databricks workspace. The workspace will contain three types of workloads. • One workload for data engineers that would make use of Python and SQL. • One workload for jobs that would run notebooks that would make use of Python, Spark, Scala, and SQL. • One workload that data scientists would use to perform ad hoc analysis in Scala and R. The following standards need to be adhered to in the different Databricks environments. • The data engineers need to share a cluster. • The cluster that runs jobs would be triggered via a request. The data scientists and data engineers would provide packaged notebooks that would need to be deployed to the cluster. • There are three data scientists currently. Every data scientist has to be assigned their own cluster. The cluster needs to terminate automatically after 120 minutes of inactivity. You have to create new Databricks clusters for the workloads. You decide to create a Standard cluster for each data scientist, a Standard cluster for the data engineers, and a High Concurrency cluster for the jobs. Would this implementation fulfill the requirement?
A company has an Azure Databricks workspace. The workspace will contain three types of workloads. • One workload for data engineers that would make use of Python and SQL. • One workload for jobs that would run notebooks that would make use of Python, Spark, Scala, and SQL. • One workload that data scientists would use to perform ad hoc analysis in Scala and R. The following standards need to be adhered to in the different Databricks environments. • The data engineers need to share a cluster. • The cluster that runs jobs would be triggered via a request. The data scientists and data engineers would provide packaged notebooks that would need to be deployed to the cluster. • There are three data scientists currently. Every data scientist has to be assigned their own cluster. The cluster needs to terminate automatically after 120 minutes of inactivity. You have to create new Databricks clusters for the workloads. You decide to create a High Concurrency cluster for each data scientist, a High Concurrency cluster for the data engineers, and a Standard cluster for the jobs. Would this implementation fulfill the requirement?
A company has an Azure Databricks workspace. The workspace will contain three types of workloads. • One workload for data engineers that would make use of Python and SQL. • One workload for jobs that would run notebooks that would make use of Python, Spark, Scala, and SQL. • One workload that data scientists would use to perform ad hoc analysis in Scala and R. The following standards need to be adhered to in the different Databricks environments. • The data engineers need to share a cluster. • The cluster that runs jobs would be triggered via a request. The data scientists and data engineers would provide packaged notebooks that would need to be deployed to the cluster. • There are three data scientists currently. Every data scientist has to be assigned their own cluster. The cluster needs to terminate automatically after 120 minutes of inactivity. You have to create new Databricks clusters for the workloads. You decide to create a Standard cluster for each data scientist, a High Concurrency cluster for the data engineers, and a Standard cluster for the jobs. Would this implementation fulfill the requirement?
You have to develop a solution which would perform the following activities. • Ingest Twitter-based data into Azure. • Give the ability to visualize real-time Twitter data. Which of the following would you use to implement this solution? Choose 3 answers from the options given below.
A company wants to pull data from an on-premises SQL Server and migrate the data to Azure Blob storage. The company is planning to use Azure Data Factory. Which of the following are steps that would be required to implement this solution? Pick 3 options from the given list.
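(Illustrative sketch, not part of the question: copying from an on-premises SQL Server normally requires a self-hosted integration runtime registered in the data factory. The resource group, factory, and runtime names below are assumptions.)

    # Register a self-hosted integration runtime in an existing data factory
    Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "ipslab-rg" `
        -DataFactoryName "ipslab-adf" `
        -Name "onprem-ir" `
        -Type SelfHosted `
        -Description "Runtime that can reach the on-premises SQL Server"

    # Retrieve the key used to register the runtime software installed on an
    # on-premises machine that has network access to the SQL Server
    Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName "ipslab-rg" `
        -DataFactoryName "ipslab-adf" `
        -Name "onprem-ir"

The linked services for the SQL Server source and the Blob storage sink, and the copy pipeline itself, would then reference this runtime.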
A company wants to integrate their on-premises Microsoft SQL Server data with Azure SQL Database. Here the data must be transformed incrementally. Which of the following can be used to configure a pipeline to copy the data?
Your company is currently using Azure Stream Analytics to monitor devices. The company is now planning to deploy more devices, and all of these devices need to be monitored via the same Azure Stream Analytics instance. You have to ensure that there are enough processing resources to handle the load of the additional devices. Which of the following metrics for the Stream Analytics job should you track for this requirement?
A company wants to migrate a set of on-premises Microsoft SQL Server databases to Azure. They want to migrate the databases as a simple Lift and shift process by using backup and restore processes. Which of the following would they use in Azure to host the SQL databases?
You have to design a Hadoop Distributed File System architecture. You are going to be using Microsoft Azure Data Lake as the data storage repository. You have to ensure that the data repository has a resilient data schema. Which of the following would you use to provide data access to clients?
You have to design a Hadoop Distributed File System architecture. You are going to be using Microsoft Azure Data Lake as the data storage repository. You have to ensure that the data repository has a resilient data schema. Which of the following would be used to run operations on files and directories on the file system?
You have to design a Hadoop Distributed File System architecture. You are going to be using Microsoft Azure Data Lake as the data storage repository. You have to ensure that the data repository has a resilient data schema. Which of the following is used to perform block creation, deletion, and replication?
A company wants to make use of Azure SQL Database with Elastic Pools. They have different customers who will have their own database in the pool. Each customer database has its own peak usage during different periods of the year. You need to consider the best way to implement Azure SQL Database elastic pools to minimize costs. Which of the following is an option you would need to consider when configuring elastic pools?
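(Illustrative sketch, not part of the question: an elastic pool is sized for the combined peak of the databases rather than the sum of their individual peaks. The names and DTU values below are assumptions.)

    # Create a Standard elastic pool with a shared DTU budget and per-database limits
    New-AzSqlElasticPool -ResourceGroupName "ipslab-rg" `
        -ServerName "ipslabserver" `
        -ElasticPoolName "customer-pool" `
        -Edition "Standard" `
        -Dtu 200 `
        -DatabaseDtuMin 0 `
        -DatabaseDtuMax 50

    # Move an existing customer database into the pool
    Set-AzSqlDatabase -ResourceGroupName "ipslab-rg" `
        -ServerName "ipslabserver" `
        -DatabaseName "customer1db" `
        -ElasticPoolName "customer-pool"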
A company needs to configure data synchronization between their on-premises Microsoft SQL Server database and an Azure SQL database. The synchronization process must include the following: • Be able to perform an initial data synchronization to the Azure SQL Database with minimal downtime. • Be able to perform bi-directional synchronization after the initial synchronization is complete. Which of the following would you consider as the synchronization solution?
A company has on-premises Microsoft SQL Server databases at several locations. The company wants to integrate the data in the databases with Microsoft Power BI and Microsoft Azure Logic Apps. You need to implement a solution that would avoid any single point of failure during the connection and transfer of data to the cloud. Latency must also be minimized. The transfer of data between the on-premises databases and Microsoft Azure must be secure. Which of the following would you implement for this requirement?
You need to migrate data from an Azure Blob storage account to an Azure SQL Data warehouse. Which of the following actions do you need to implement for this requirement? Choose 4 answers from the options given below.
You have an Azure storage account named ipsstore4000. Below are the Diagnostic settings configured for the storage account. How long will the logging data be retained for?
Your company has an Azure Data Lake storage account. They want to implement role-based access control (RBAC) so that project members can manage the Azure Data Lake Storage resources. Which of the following actions should you perform for this requirement? Choose 3 answers from the options given below.
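(Illustrative sketch, not part of the question: an RBAC assignment is made at the scope of the Data Lake Storage account. The group name, subscription ID, account name, and chosen built-in role below are placeholders.)

    # Resolve the Azure AD group that contains the project members (name is an assumption)
    $group = Get-AzADGroup -DisplayName "ProjectMembers"

    # Scope of the Data Lake Storage account (subscription ID and names are placeholders)
    $scope = "/subscriptions/<subscription-id>/resourceGroups/ipslab-rg" +
             "/providers/Microsoft.Storage/storageAccounts/ipslabdatalake"

    # Grant a built-in role on that scope so project members can manage the resource
    New-AzRoleAssignment -ObjectId $group.Id `
        -RoleDefinitionName "Contributor" `
        -Scope $scope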
A company has an Azure SQL Database and an Azure Blob storage account. They want data to be encrypted at rest on both systems. The company should be able to use its own key. Which of the following would they use to configure security for the Azure SQL Database?
A company has an Azure SQL Database and an Azure Blob storage account. They want data to be encrypted at rest on both systems. The company should be able to use its own key. Which of the following would they use to configure security for the Azure Blob storage account?
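(Illustrative sketch, not part of the question: both services can be pointed at a customer-managed key held in Azure Key Vault. The vault, key, server, and storage account names below are assumptions.)

    # Customer-managed key stored in Key Vault
    $key = Get-AzKeyVaultKey -VaultName "ipslab-kv" -Name "ipslab-byok"

    # SQL side: register the key on the logical server and make it the TDE protector
    Add-AzSqlServerKeyVaultKey -ResourceGroupName "ipslab-rg" -ServerName "ipslabserver" `
        -KeyId $key.Key.Kid
    Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "ipslab-rg" `
        -ServerName "ipslabserver" -Type AzureKeyVault -KeyId $key.Key.Kid

    # Storage side: switch the account's encryption from Microsoft-managed keys
    # to the same customer-managed key
    Set-AzStorageAccount -ResourceGroupName "ipslab-rg" -Name "ipslabstore" `
        -KeyvaultEncryption -KeyName $key.Name -KeyVersion $key.Version `
        -KeyVaultUri "https://ipslab-kv.vault.azure.net"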
A company has a set of Azure SQL Databases. They want to ensure that their IT Security team is informed when any security-related operation occurs on the database. You need to configure Azure Monitor while ensuring administrative efforts are reduced. Which of the following actions would you perform for this requirement? Choose 3 answers from the options given below.
You need to deploy a Microsoft Azure Stream Analytics job for an IoT-based solution. The solution must minimize latency. The solution must also minimize the bandwidth usage between the job and the IoT device. Which of the following actions must you perform for this requirement? Choose 4 answers from the options given below.
Your company has 2 Azure SQL Databases named ipsdb1 and ipsdb2. Access needs to be configured for these databases from the following nodes: • A workstation which has an IP address of 5.78.99.4 • A set of IP addresses in the range of 5.78.99.6 – 5.78.99.10 The access needs to be set based on the following permissions: • Connections to both of the databases must be allowed from the workstation • The specified IP address range must be allowed to connect to the database ipsdb1 and not ipsdb2 • The Web services in Azure must be able to connect to the database ipsdb1 and not ipsdb2 Which of the following must be set for this requirement? Choose 3 answers from the options given below.
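(Illustrative sketch, not part of the question: server-level firewall rules apply to every database on the logical server, while database-level rules, created with T-SQL, apply to a single database. The server name and credentials below are placeholders.)

    # Server-level rule: the workstation may connect to every database on the server
    New-AzSqlServerFirewallRule -ResourceGroupName "ipslab-rg" -ServerName "ipslabserver" `
        -FirewallRuleName "workstation" -StartIpAddress "5.78.99.4" -EndIpAddress "5.78.99.4"

    # Database-level rules affect only ipsdb1: open the address range and the
    # Azure services range (0.0.0.0) without exposing ipsdb2
    Invoke-Sqlcmd -ServerInstance "ipslabserver.database.windows.net" -Database "ipsdb1" `
        -Username "sqladmin" -Password "<password>" `
        -Query "EXECUTE sp_set_database_firewall_rule N'ipRange', '5.78.99.6', '5.78.99.10';
                EXECUTE sp_set_database_firewall_rule N'AllowAzure', '0.0.0.0', '0.0.0.0';"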
A company is using an Azure SQL Data Warehouse Gen2. Users are complaining that performance is slow when they run commonly used queries. They do not report such issues for infrequently used queries. Which of the following should they monitor to find out the source of the performance issues?
A company has implemented a real-time data analysis solution. This solution is making use of Azure Event Hub to ingest the data. The data is then sent to the Azure Stream Analytics cloud job. The cloud job has been configured to use 100 Streaming Units. Which of the following two actions can be performed to optimize the Azure Stream Analytics job’s performance?
Case Study A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service products to create a new data pipeline process. They have the following requirements: Data Ingestion • This layer must provide access to multiple sources. • This layer must provide the ability to orchestrate a workflow. • It must also provide the capability to run SQL Server Integration Service packages. Storage • The storage layer must be optimized for Big Data workloads. • It must provide encryption of data at rest. • There must be no size constraints. Prepare and Train • This layer must provide a fully managed interactive workspace for exploration and visualization. • Here you should be able to program in R, SQL or Scala. • It must provide seamless user authentication with Azure Active Directory. Model and Service • This layer must provide support for SQL language. • It must implement native columnar storage. Which of the following should be used as a technology for the “Data Ingestion” layer?
Case Study A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service products to create a new data pipeline process. They have the following requirements: Data Ingestion • This layer must provide access to multiple sources. • This layer must provide the ability to orchestrate a workflow. • It must also provide the capability to run SQL Server Integration Service packages. Storage • The storage layer must be optimized for Big Data workloads. • It must provide encryption of data at rest. • There must be no size constraints. Prepare and Train • This layer must provide a fully managed interactive workspace for exploration and visualization. • Here you should be able to program in R, SQL, or Scala. • It must provide seamless user authentication with Azure Active Directory. Model and Service • This layer must provide support for SQL language. • It must implement native columnar storage. Which of the following should be used as a technology for the “Storage” layer?
Case Study A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service products to create a new data pipeline process. They have the following requirements: Data Ingestion • This layer must provide access to multiple sources. • This layer must provide the ability to orchestrate a workflow. • It must also provide the capability to run SQL Server Integration Service packages. Storage • The storage layer must be optimized for Big Data workloads. • It must provide encryption of data at rest. • There must be no size constraints. Prepare and Train • This layer must provide a fully managed interactive workspace for exploration and visualization. • Here you should be able to program in R, SQL, or Scala. • It must provide seamless user authentication with Azure Active Directory. Model and Service • This layer must provide support for SQL language. • It must implement native columnar storage. Which of the following should be used as a technology for the “Prepare and Train” layer?
Case Study A company wants to use a set of services on Azure. They want to make use of Platform-as-a-service products to create a new data pipeline process. They have the following requirements: Data Ingestion • This layer must provide access to multiple sources. • This layer must provide the ability to orchestrate a workflow. • It must also provide the capability to run SQL Server Integration Service packages. Storage • The storage layer must be optimized for Big Data workloads. • It must provide encryption of data at rest. • There must be no size constraints. Prepare and Train • This layer must provide a fully managed interactive workspace for exploration and visualization. • Here you should be able to program in R, SQL, or Scala. • It must provide seamless user authentication with Azure Active Directory. Model and Service • This layer must provide support for SQL language. • It must implement native columnar storage. Which of the following should be used as a technology for the “Model and Service” layer?
Your company has an Azure Cosmos DB account that makes use of the SQL API. You have to ensure that all stale data is deleted from the database automatically. Which of the following feature would you use for this requirement?
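(Illustrative sketch, not part of the question: Cosmos DB containers support a default time-to-live, after which documents are removed automatically. The account, database, and container names, as well as the use of the -TtlInSeconds parameter here, are assumptions.)

    # Set a default TTL of 30 days on a SQL API container so that stale documents
    # are deleted automatically once they pass that age
    Update-AzCosmosDBSqlContainer -ResourceGroupName "ipslab-rg" `
        -AccountName "ipslabcosmos" `
        -DatabaseName "salesdb" `
        -Name "orders" `
        -TtlInSeconds 2592000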
A company wants to make use of Azure Data Lake Gen 2 storage account. This would be used to store Big Data related to an application. The company wants to implement logging. They decide to create an Azure Automation runbook that would be used to copy events. Would this fulfill the requirement?
A company wants to make use of Azure Data Lake Gen 2 storage account. This would be used to store Big Data related to an application. The company wants to implement logging. They decide to use the information that is stored in Azure Active Directory reports. Would this fulfill the requirement?
A company wants to make use of Azure Data Lake Gen 2 storage account. This would be used to store Big Data related to an application. The company wants to implement logging. They decide to configure Azure Data Lake Storage diagnostics to store the logs and metric data in a storage account. Would this fulfill the requirement?
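(Illustrative sketch, not part of the question: one way diagnostics are commonly wired up, sending the account's logs and metrics to a second storage account through a diagnostic setting. All resource names are assumptions, and the blobServices sub-resource path is the part that targets Blob/Data Lake traffic.)

    # The Data Lake Storage Gen2 account and the account that will hold the logs
    $dataLake = Get-AzStorageAccount -ResourceGroupName "ipslab-rg" -Name "ipslabdatalake"
    $logStore = Get-AzStorageAccount -ResourceGroupName "ipslab-rg" -Name "ipslablogstore"

    # Archive logs and metrics for the blob endpoint of the Data Lake account
    Set-AzDiagnosticSetting -Name "datalake-diagnostics" `
        -ResourceId ($dataLake.Id + "/blobServices/default") `
        -StorageAccountId $logStore.Id `
        -Enabled $true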
Case Study Overview Ipslabs is an online training provider. Current Environment The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners and external distributors.
Below is the list of applications, tiers and their individual requirements.
Below are the current requirements of the company:
• The databases in Tier 3 and Tiers 6 to 8 must use database density on the same server and Elastic pools in a cost-effective manner.
• The applications must have access to data from internal and external sources whilst ensuring the data is encrypted at rest and in transit.
• The databases in Tier 3 and Tiers 6 to 8 must have a recovery strategy in case the server goes offline.
• The Tier 1 applications must have their databases stored on the Premium P2 tier.
• The Tier 1 applications must have their databases stored on the Standard S4 tier.
• Data will be migrated from the on-premises databases to Azure SQL Databases using Azure Data Factory. The pipeline must support continued data movement and migration.
• The application access for Tier 7 and 8 must be restricted to the database only.
• For Tier 4 and Tier 5 databases, the backup strategy must include the following: a transactional log backup every hour, a differential backup every day, and a full backup every week.
• Backup strategies must be in place for all standalone Azure SQL databases using methods available with Azure SQL databases.
• The Tier 1 database must implement the following data masking logic: for data type ipslabA – mask 4 or fewer string data type characters; for data type ipslabB – expose the first letter and mask the domain; for data type ipslabC – mask everything except characters at the beginning and the end.
• All certificates and keys are internally managed in on-premises data stores.
• For Tier 2 databases, if there are any conflicts during the data transfer from on-premises, preference should be given to the on-premises data.
• Monitoring must be set up on every database.
• Applications with Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to IT data engineers.
• An Azure SQL Data Warehouse would be used to gather data from multiple internal and external databases.
• The Azure SQL Data Warehouse must be optimized to use data from its cache.
• The below metrics must be available when it comes to the cache: Metric ipslabA – low cache hit %, high cache usage %; Metric ipslabB – low cache hit %, low cache usage %; Metric ipslabC – high cache hit %, high cache usage %.
• The reporting data for external partners must be stored in Azure storage. The data should be made available during regular business hours in connecting regions.
• The reporting for Tier 9 needs to be moved to Event Hubs.
• The reporting for Tier 10 needs to be moved to Azure Blobs.
The following issues have been identified in the setup:
• The external partners have control over the data formats, types, and schemas.
• For external clients, the queries cannot be changed or optimized.
• The database development staff are familiar with the T-SQL language.
• Because of the size and amount of data, some applications and reporting features are not performing at SLA levels.
Which of the following can be used to process and query the ingested data for the Tier 9 data?
Case Study
Overview
Ipslabs is an online training provider.
Current Environment
The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners and external distributors.
Below are the current requirements of the company
The Azure Data Factory instance must meet the requirements to move the data from the on-premises SQL Servers to Azure. Which of the following would you use as the integration runtime?
Case study
The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners, and external distributors.
The data for the external applications needs to be encrypted at rest. You decide to implement the following steps.
Would these steps fulfill the requirement?
The following issues have been identified in the setup
Which of the following should you use as the masking function for Data type IpslabA?
The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners and external distributors.
Which of the following should you use as the masking function for Data type IpslabB?
Which of the following should you use as the masking function for Data type IpslabC?
You need to implement the following requirement as per the case study.
“The Application access for Tier 7 and 8 must be restricted to the database only”.
Which of the following steps would you implement for this requirement? Choose 3 answers from the options given below.
Case Study Overview Ipslabs is an online training provider. Current Environment The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners and external distributors. Below is the list of applications, tiers and their individual requirements.
Below are the current requirements of the company:
• The databases in Tier 3 and Tiers 6 to 8 must use database density on the same server and Elastic pools in a cost-effective manner.
• The applications must have access to data from internal and external sources whilst ensuring the data is encrypted at rest and in transit.
• The databases in Tier 3 and Tiers 6 to 8 must have a recovery strategy in case the server goes offline.
• The Tier 1 applications must have their databases stored on the Premium P2 tier.
• The Tier 1 applications must have their databases stored on the Standard S4 tier.
• Data will be migrated from the on-premises databases to Azure SQL Databases using Azure Data Factory. The pipeline must support continued data movement and migration.
• The application access for Tier 7 and 8 must be restricted to the database only.
• For Tier 4 and Tier 5 databases, the backup strategy must include the following: a transactional log backup every hour, a differential backup every day, and a full backup every week.
• Backup strategies must be in place for all standalone Azure SQL databases using methods available with Azure SQL databases.
• The Tier 1 database must implement the following data masking logic: for data type ipslabA – mask 4 or fewer string data type characters; for data type ipslabB – expose the first letter and mask the domain; for data type ipslabC – mask everything except characters at the beginning and the end.
• All certificates and keys are internally managed in on-premises data stores.
• For Tier 2 databases, if there are any conflicts during the data transfer from on-premises, preference should be given to the on-premises data.
• Monitoring must be set up on every database.
• Applications with Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to IT data engineers.
• An Azure SQL Data Warehouse would be used to gather data from multiple internal and external databases.
• The Azure SQL Data Warehouse must be optimized to use data from its cache.
• The below metrics must be available when it comes to the cache: Metric ipslabA – low cache hit %, high cache usage %; Metric ipslabB – low cache hit %, low cache usage %; Metric ipslabC – high cache hit %, high cache usage %.
• The reporting data for external partners must be stored in Azure storage. The data should be made available during regular business hours in connecting regions.
• The reporting for Tier 9 needs to be moved to Event Hubs.
• The reporting for Tier 10 needs to be moved to Azure Blobs.
The following issues have been identified in the setup:
• The external partners have control over the data formats, types, and schemas.
• For external clients, the queries cannot be changed or optimized.
• The database development staff are familiar with the T-SQL language.
• Because of the size and amount of data, some applications and reporting features are not performing at SLA levels.
You have to implement logging for monitoring the data warehousing solution. Which of the following would you log?
You need to fulfill the below requirement of the case study.
“Applications with Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to IT data engineers.”
Which of the following would you implement for this requirement?
You have to access Azure Blob Storage from Azure Databricks using secrets stored in a key vault. You already have the storage account, the blob container, and the Azure key vault in place. You decide to implement the following steps. • Add the secret to the storage container. • Create a Databricks workspace and add the access keys. • Access the blob container from Azure Databricks. Would these steps fulfill the requirement?
You have to access Azure Blob Storage from Azure Databricks using secrets stored in a key vault. You already have the storage account, the blob container, and the Azure key vault in place. You decide to implement the following steps. • Add the secret to the key vault. • Create a Databricks workspace and add the secret scope. • Access the blob container from Azure Databricks. Would these steps fulfill the requirement?
You have to access Azure Blob Storage from Azure Databricks using secrets stored in a key vault. You already have the storage account, the blob container, and the Azure key vault in place. You decide to implement the following steps. • Add the secret to the key vault. • Create a Databricks workspace and add the access keys. • Access the blob container from Azure Databricks. Would these steps fulfill the requirement?
A company has created an Azure Data Lake Gen 2 storage account. They want to ingest data into the storage account from various data sources. Which of the following can they use to ingest data from a relational data store?
A company has created an Azure Data Lake Gen 2 storage account. They want to ingest data into the storage account from various data sources. Which of the following can they use to ingest data from a local workstation?
A company has created an Azure Data Lake Gen 2 storage account. They want to ingest data into the storage account from various data sources. Which of the following can they use to ingest data from log data stored on web servers?