Module 2 - Performance Testing Sample Question - PART 2A

2.1 Typical Metrics Collected in Performance Testing

PTFL-2.1.1 (K2) Understand the typical metrics collected in performance testing

Q1: Accurate measurements and the metrics which are derived from those measurements are essential for defining the goals of performance testing and for evaluating the results of performance testing.

Select One Option

A: True

B: False

A: True

Q2: Performance testing should not be undertaken without first understanding which measurements and metrics are needed.

Select One Option

A: True

B: False

A: True

Q3: Which of the following project risks applies if this advice is ignored?

Select One Option

A: It is unknown if the levels of performance are acceptable to meet operational objectives

B: Testing will not be correct

C: Developer will not know what to do

D: The static and dynamic testing will not be complete

A: It is unknown if the levels of performance are acceptable to meet operational objectives.

Related Answer:

Accurate measurements and the metrics which are derived from those measurements are essential for defining the goals of performance testing and for evaluating the results of performance testing. Performance testing should not be undertaken without first understanding which measurements and metrics are needed. The following project risks apply if this advice is ignored:

A: It is unknown if the levels of performance are acceptable to meet operational objectives

B: The performance requirements are not defined in measurable terms

C: It may not be possible to identify trends that may predict lower levels of performance

D: The actual results of a performance test cannot be evaluated by comparing them to a baseline set of performance measures that define acceptable and/or unacceptable performance

E: Performance test results are evaluated based on the subjective opinion of one or more people

F: The results provided by a performance test tool are not understood
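A baseline-driven evaluation of the kind risks D and E describe can be sketched in a few lines. This is an illustrative example, not part of the syllabus; the metric names and threshold values are invented for the sketch.

```python
# Hypothetical baseline of acceptable performance measures (values are
# invented for illustration).
BASELINE = {
    "avg_response_time_ms": 250.0,  # measured value must stay at or below this
    "error_rate_pct": 1.0,          # measured value must stay at or below this
    "throughput_tps": 100.0,        # measured value must stay at or above this
}

# Metrics where a higher measured value is better.
HIGHER_IS_BETTER = {"throughput_tps"}

def evaluate(measured):
    """Compare one test run against the baseline; True means acceptable."""
    verdict = {}
    for name, threshold in BASELINE.items():
        value = measured[name]
        if name in HIGHER_IS_BETTER:
            verdict[name] = value >= threshold
        else:
            verdict[name] = value <= threshold
    return verdict

run = {"avg_response_time_ms": 310.0, "error_rate_pct": 0.4, "throughput_tps": 120.0}
verdict = evaluate(run)
# avg_response_time_ms fails (310 > 250); the other two metrics pass.
```

Because the thresholds are explicit, the pass/fail decision is objective rather than a matter of opinion.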

Q4: Which of the following project risks applies if this advice is ignored?

Select One Option

A: Developer will not know what to do

B: Testing will not be correct

C: The performance requirements are not defined in measurable terms

D: The static and dynamic testing will not be complete

C: The performance requirements are not defined in measurable terms

Related Answer: see the full list of project risks under Q3.

Q5: Which of the following project risks applies if this advice is ignored?

Select One Option

A: Developer will not know what to do

B: Testing will not be correct

C: The static and dynamic testing will not be complete

D: It may not be possible to identify trends that may predict lower levels of performance

D: It may not be possible to identify trends that may predict lower levels of performance

Related Answer: see the full list of project risks under Q3.

Q6: Which of the following project risks applies if this advice is ignored?

Select One Option

A: The actual results of a performance test cannot be evaluated by comparing them to a baseline set of performance measures that define acceptable and/or unacceptable performance

B: Testing will not be correct

C: The static and dynamic testing will not be complete

D: Developer will not know what to do

A: The actual results of a performance test cannot be evaluated by comparing them to a baseline set of performance measures that define acceptable and/or unacceptable performance

Related Answer: see the full list of project risks under Q3.

Q7: Which of the following project risks applies if this advice is ignored?

Select One Option

A: The static and dynamic testing will not be complete

B: Testing will not be correct

C: Performance test results are evaluated based on the subjective opinion of one or more people

D: Developer will not know what to do

C: Performance test results are evaluated based on the subjective opinion of one or more people

Related Answer: see the full list of project risks under Q3.

Q8: Which of the following project risks applies if this advice is ignored?

Select One Option

A: The static and dynamic testing will not be complete

B: The results provided by a performance test tool are not understood

C: Testing will not be correct

D: Developer will not know what to do

B: The results provided by a performance test tool are not understood

Related Answer: see the full list of project risks under Q3.

Q9: The metrics collected in a specific performance test will vary based on which of the following?

Select One Option

A: Test analysis of the system under test

B: Business operation (user, customer, and stakeholders)

C: Business context (business processes, customer and user behavior, and stakeholder expectations)

D: Operational objective

C: Business context (business processes, customer and user behavior, and stakeholder expectations)

The metrics collected in a specific performance test will vary based on the:

1: Business context (business processes, customer and user behavior, and stakeholder expectations)

2: Operational context (technology and how it is used)

3: Test objectives

Q10: The metrics collected in a specific performance test will vary based on which of the following?

Select One Option

A: Test analysis of the system under test

B: Operational context (technology and how it is used)

C: Business operation (user, customer, and stakeholders)

D: Operational objective

B: Operational context (technology and how it is used)

(See the explanation under Q9.)

Q11: The metrics collected in a specific performance test will vary based on which of the following?

Select One Option

A: Test objectives

B: Test analysis of the system under test

C: Business operation (user, customer, and stakeholders)

D: Operational objective

A: Test objectives

(See the explanation under Q9.)

Q12: A common way to categorize performance measurements and metrics is to consider which of the following environments?

Select One Option

i: Operational Environment

ii: Development Environment

iii: Technical Environment

iv: System Environment

v: Business Environment

A: i, ii, iii

B: ii, iii, v

C: i, ii, iv

D: i, iii, v

D: i, iii, v

A common way to categorize performance measurements and metrics is to consider the Technical Environment, Business Environment, or Operational Environment in which the assessment of performance is needed.

The categories of measurements and metrics included below are the ones commonly obtained from performance testing.

Q13: Select the types of technical environment by which performance metrics vary

Select One Option

i: 3D Technology

ii: Internet-of-Things (IoT)

iii: Mainframe

iv: Printing services

v: Embedded


A: i, ii, iii

B: ii, iii, v

C: i, ii, iv

D: i, iii, v

B: ii, iii, v

Performance metrics will vary by the type of the technical environment, as shown in the following list:

1: Web-based

2: Mobile

3: Internet-of-Things (IoT)

4: Desktop client devices

5: Server-side processing

6: Mainframe

7: Databases

8: Networks

9: The nature of software running in the environment (e.g., embedded)

Q14: Which of the following are metrics for the Technical Environment?

Select One Option

i: Resource utilization

ii: Alerts and warnings (e.g., the time needed for the system to issue an alert or warning)

iii: Numbers of errors impacting performance

iv: Scope of usage (e.g., percentage of global or national users conducting tasks at a given time)

v: Completion time (e.g., for creating, reading, updating, and deleting data)

A: i, ii, iii

B: ii, iii, v

C: i, iii, v

D: iii, iv, v

C: i, iii, v

The Technical Environment metrics include the following:

1: Response time (e.g., per transaction, per concurrent user, page load times)

2: Resource utilization (e.g., CPU, memory, network bandwidth, network latency, available disk space, I/O rate, idle and busy threads)

3: Throughput rate of key transactions (i.e., the number of transactions that can be processed in a given period of time)

4: Batch processing time (e.g., wait times, throughput times, database response times, completion times)

5: Numbers of errors impacting performance

6: Completion time (e.g., for creating, reading, updating, and deleting data)

7: Background load on shared resources (especially in virtualized environments)

8: Software metrics (e.g., code complexity)
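Two of the metrics above, response time and throughput, are typically derived from raw per-transaction timings. The following sketch shows one way to do this; the sample data and the nearest-rank percentile method are assumptions made for the example.

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Invented samples: duration in ms of each completed transaction.
durations = [120, 95, 310, 150, 88, 205, 99, 130]

p90 = percentile(durations, 90)              # 90th-percentile response time
window_s = 5.0                               # length of the measurement window
throughput_tps = len(durations) / window_s   # transactions per second
```

In a real test, the durations would come from the load-generation tool's result log rather than a hard-coded list.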

Q15: Which of the following are metrics for the Business Environment?

Select One Option

i: Business process efficiency (e.g., the speed of performing an overall business process including normal, alternate and exceptional use case flows)

ii: Scope of usage (e.g., percentage of global or national users conducting tasks at a given time)

iii: Concurrency of usage (e.g., the number of users concurrently performing a task)

iv: Batch processing time (e.g., wait times, throughput times, database response times, completion times)

v: Throughput of data, transactions, and other units of work performed (e.g., orders processed per hour, data rows added per minute)

A: i, ii, iii, v

B: ii, iii, iv, v

C: i, iii, iv, v

D: i, ii, iii, iv

A: i, ii, iii, v

Business Environment

From the business or functional perspective, performance metrics may include the following:

1: Business process efficiency (e.g., the speed of performing an overall business process including normal, alternate and exceptional use case flows)

2: Throughput of data, transactions, and other units of work performed (e.g., orders processed per hour, data rows added per minute)

3: Service Level Agreement (SLA) compliance or violation rates (e.g., SLA violations per unit of time)

4: Scope of usage (e.g., percentage of global or national users conducting tasks at a given time)

5: Concurrency of usage (e.g., the number of users concurrently performing a task)

6: Timing of usage (e.g., the number of orders processed during peak load times)
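Two of the business-perspective metrics above, SLA violation rate and concurrency of usage, can be derived from an order log as in this sketch. The log records and the 2-second SLA are assumptions for illustration.

```python
from collections import defaultdict

SLA_LIMIT_MS = 2000  # assumed SLA: each order must complete within 2 seconds

# Invented log: (hour, user_id, completion_ms) per processed order.
orders = [(9, "u1", 800), (9, "u2", 2500), (9, "u1", 1200),
          (10, "u3", 3100), (10, "u4", 900), (10, "u5", 700)]

# SLA compliance: fraction of orders exceeding the limit.
violations = [o for o in orders if o[2] > SLA_LIMIT_MS]
violation_rate = len(violations) / len(orders)

# Concurrency of usage: distinct users active in each hour.
users_per_hour = defaultdict(set)
for hour, user, _ in orders:
    users_per_hour[hour].add(user)
peak_concurrency = max(len(users) for users in users_per_hour.values())
```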

Q16: Which of the following are metrics for the Operational Environment?

Select One Option

i: Business process efficiency (e.g., the speed of performing an overall business process including normal, alternate and exceptional use case flows)

ii: Operational processes (e.g., the time required for environment start-up, backups, and shutdown)

iii: Alerts and warnings (e.g., the time needed for the system to issue an alert or warning)

iv: Batch processing time (e.g., wait times, throughput times, database response times, completion times)

v: System restoration (e.g., the time required to restore data from a backup)

A: i, ii, iii

B: ii, iii, iv

C: ii, iii, v

D: i, iii, iv

C: ii, iii, v

Operational Environment

The operational aspect of performance testing focuses on tasks that are generally not considered to be user-facing in nature. These include the following:

1: Operational processes (e.g., the time required for environment start-up, backups, shutdown and resumption times)

2: System restoration (e.g., the time required to restore data from a backup)

3: Alerts and warnings (e.g., the time needed for the system to issue an alert or warning)
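Operational metrics like these reduce to timing a non-user-facing task. A minimal sketch, using a simulated restore task as a stand-in for a real backup/restore operation:

```python
import time

def timed(task):
    """Run task() and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = task()
    return result, time.perf_counter() - start

def simulated_restore():
    """Placeholder assumption for restoring data from a backup."""
    time.sleep(0.05)
    return "restored"

status, elapsed = timed(simulated_restore)
```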


MEEGSKILLS Copyright © 2023