ISTQB Syllabus Chapter 1 - Fundamentals of Testing

Syllabus and related materials for studying for and taking the ISTQB certification exam

Post by tvn »

ISTQB Syllabus Chapter 1 - Fundamentals of Testing
  • 1.1 Why is testing necessary?
    Terms: bug, defect, error, failure, mistake, quality, risk, software, testing and exhaustive testing.
  • 1.2 What is testing?
    Terms: code, debugging, requirement, test basis, test case, test objective.
  • 1.3 Testing principles
  • 1.4 Fundamental test process
    Terms: confirmation testing, exit criteria, incident, regression testing, test condition, test coverage, test data, test execution, test log, test plan, test strategy, test summary report and testware.
  • 1.5 The psychology of testing
    Terms: independence.
I) General testing principles

Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common to all testing.
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
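
To see why exhaustive testing blows up even for a small interface, here is a minimal sketch that counts the input combinations of a form; the field names and value counts are invented for the example.

```python
from math import prod

# Hypothetical input form: each field and the number of distinct values
# it can take. These numbers are illustrative only.
fields = {
    "country": 200,        # dropdown with ~200 entries
    "age": 120,            # 0..119
    "membership_level": 4,
    "newsletter_opt_in": 2,
    "payment_method": 5,
}

combinations = prod(fields.values())
print(f"Input combinations to test exhaustively: {combinations:,}")
# -> 960,000 combinations for just five fields; add preconditions, state
#    and timing, and exhaustive testing quickly becomes infeasible.
```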

Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.

Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for the most operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.

II) Fundamental test process

1) Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing, and specifying the test activities needed to meet those objectives and that mission.
Test control involves taking the actions necessary to meet the mission and objectives of the project. In order to control testing, it should be monitored throughout the project, and test planning takes into account the feedback from monitoring and control activities.
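
As an illustration of test control (not prescribed by the syllabus), the sketch below compares executed and passed tests against the plan so that planning can react to the feedback; the figures and the threshold are invented for the example.

```python
# Illustrative test-control snapshot: compare progress against the plan.
planned_tests = 250
executed_tests = 180
passed_tests = 171

execution_progress = executed_tests / planned_tests
pass_rate = passed_tests / executed_tests

print(f"Execution progress: {execution_progress:.0%}")  # 72%
print(f"Pass rate:          {pass_rate:.0%}")           # 95%

# A control action might be triggered when progress lags the plan,
# e.g. re-prioritizing the remaining tests or revising the schedule.
if execution_progress < 0.80:
    print("Behind plan: feed this back into test planning.")
```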

2) Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test cases.
Test analysis and design has the following major tasks:
  • Reviewing the test basis (such as requirements, architecture, design, interfaces).
  • Evaluating testability of the test basis and test objects.
  • Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure.
  • Designing and prioritizing test cases (see the sketch after this list).
  • Identifying necessary test data to support the test conditions and test cases.
  • Designing the test environment set-up and identifying any required infrastructure and tools.
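
As a sketch of how a test condition might be turned into concrete, prioritized test cases with test data, the example below applies simple boundary values to an invented requirement ("age must be 18–65"); the requirement, priorities and expected results are assumptions, not part of the syllabus.

```python
# Test condition (from a hypothetical requirement): age must be between 18 and 65.
# Boundary-value analysis yields the test cases below; priority reflects risk.
test_cases = [
    {"id": "TC-01", "age": 17, "expected": "rejected", "priority": "high"},
    {"id": "TC-02", "age": 18, "expected": "accepted", "priority": "high"},
    {"id": "TC-03", "age": 65, "expected": "accepted", "priority": "high"},
    {"id": "TC-04", "age": 66, "expected": "rejected", "priority": "high"},
    {"id": "TC-05", "age": 40, "expected": "accepted", "priority": "low"},
]

# Execute high-priority cases first.
for tc in sorted(test_cases, key=lambda t: t["priority"] != "high"):
    print(tc["id"], tc["age"], "->", tc["expected"])
```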

3) Test implementation and execution
Test implementation and execution has the following major tasks:
  • Developing, implementing and prioritizing test cases.
  • Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
  • Creating test suites from the test procedures for efficient test execution.
  • Verifying that the test environment has been set up correctly.
  • Executing test procedures either manually or by using test execution tools, according to the planned sequence.
  • Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
  • Comparing actual results with expected results (illustrated in the sketch after this list).
  • Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed).
  • Repeating test activities as a result of action taken for each discrepancy. For example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced into unchanged areas of the software or that defect fixing did not uncover other defects (regression testing).
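
The sketch below illustrates the execution steps above (running a test, comparing actual with expected results, and logging the outcome) as a small automated script; the function under test and the log format are invented for the example, and a real project would more likely use a framework such as pytest or JUnit.

```python
import datetime

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

def run_test(test_id: str, actual, expected, log: list) -> None:
    # Compare the actual result with the expected result and log the outcome.
    status = "PASS" if actual == expected else "FAIL"
    log.append(f"{datetime.datetime.now().isoformat()} {test_id} {status} "
               f"expected={expected} actual={actual}")

test_log: list[str] = []
run_test("TC-10", apply_discount(100.0, 10), 90.0, test_log)
run_test("TC-11", apply_discount(100.0, 0), 100.0, test_log)  # re-run after a fix: confirmation testing
print("\n".join(test_log))
# Any FAIL entry would be reported as an incident and analyzed for its cause.
```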

4) Evaluating exit criteria and reporting
Evaluating exit criteria has the following major tasks:
  • Checking test logs against the exit criteria specified in test planning (see the sketch after this list).
  • Assessing if more tests are needed or if the exit criteria specified should be changed.
  • Writing a test summary report for stakeholders.
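
As a sketch of checking test logs against exit criteria, the example below evaluates a pass-rate threshold and an open-defect limit; both criteria and the figures are invented, since real exit criteria are agreed during test planning.

```python
# Hypothetical exit criteria agreed during test planning.
MIN_PASS_RATE = 0.95           # at least 95% of executed tests must pass
MAX_OPEN_CRITICAL_DEFECTS = 0  # no critical defects may remain open

# Figures taken from the (invented) test log and defect tracker.
executed, passed = 420, 403
open_critical_defects = 1

pass_rate = passed / executed
met = (pass_rate >= MIN_PASS_RATE
       and open_critical_defects <= MAX_OPEN_CRITICAL_DEFECTS)

print(f"Pass rate: {pass_rate:.1%}, open critical defects: {open_critical_defects}")
print("Exit criteria met." if met else "More testing (or a revised exit criterion) is needed.")
```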

5) Test closure activities
Test closure activities include the following major tasks:
  • Checking which planned deliverables have been delivered, closing incident reports or raising change records for any that remain open, and documenting the acceptance of the system.
  • Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
  • Handing over the testware to the maintenance organization.
  • Analyzing lessons learned for future releases and projects, and improving test maturity.

III) The psychology of testing
Several levels of independence can be defined, from low to high:
  • Tests designed by the person(s) who wrote the software under test (low level of independence).
  • Tests designed by another person(s) (e.g. from the development team).
  • Tests designed by a person(s) from a different organizational group (e.g. an independent test team) or test specialists (e.g. usability or performance test specialists).
  • Tests designed by a person(s) from a different organization or company (i.e. outsourcing or certification by an external body).
ISTQB Sample Exam Chapter I: Fundamentals of Testing (K2)

Source: Knol.google.com


