
2.1 Software development models 1/14
Learning objectives:
1. Understand the relationship between development, test activities and work products in the development life cycle, and give examples based on project and product characteristics and context. (K2)
2. Recognize the fact that software development models must be adapted to the context of project and product characteristics. (K1)
3. Recall reasons for different levels of testing and characteristics of good testing in any life cycle model. (K1)

2.1 Software development models 2/14
Terms:
verification: confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]
validation: confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]
Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements set. Verification focuses on the question 'Is the deliverable built according to the specification?'. Validation is concerned with evaluating a work product, component or system to determine whether it meets the user needs and requirements. Validation focuses on the question 'Is the deliverable fit for purpose, e.g. does it provide a solution to the problem?'.

2.1.1 V-model 3/14
Terms:
V-model: a framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.
By contrast, the waterfall model was one of the earliest life cycle models to be designed, with testing concentrated in a single phase near the end.

2.1.1 V-model 4/14
Terms:
test level: a group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]
A common type of V-model uses four test levels, each with its own objectives:
- component testing: searches for defects in, and verifies the functioning of, software components (e.g. modules, programs, objects, classes) that are separately testable;
- integration testing: tests interfaces between components, interactions with different parts of a system (such as an operating system, file system and hardware), and interfaces between systems;
- system testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
- acceptance testing: validation testing with respect to user needs, requirements, and business processes, conducted to determine whether or not to accept the system.
The various test levels are explained and discussed in detail in Section 2.2.

2.1.1 V-model 5/14 Note that the types of work products mentioned in Figure 2.2 on the left side of the V-model are just an illustration. In practice they come under many different names.

2.1.2 Iterative life cycles 6/14
Terms:
incremental development model: a development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a mini V-model with its own design, coding and testing phases.
Not all life cycles are sequential. There are also iterative or incremental life cycles where, instead of one large development time line from beginning to end, we cycle through a number of smaller self-contained life cycle phases for the same project. The increment produced by an iteration may be tested at several levels as part of its development. Subsequent increments will need testing for the new functionality, regression testing of the existing functionality, and integration testing of both new and existing parts. Regression testing is increasingly important on all iterations after the first one.

2.1.2 Iterative life cycles 7/14
Examples of iterative or incremental development models are prototyping, Rapid Application Development (RAD), the Rational Unified Process (RUP) and agile development. Rapid Application Development is formally a parallel development of functions and subsequent integration.

2.1.2 Iterative life cycles 8/14
Agile development
Extreme Programming (XP) is currently one of the most well-known agile development life cycle models. The methodology claims to be more human-friendly than traditional development methods. Some characteristics of XP are:
- it promotes the generation of business stories to define the functionality;
- it demands an on-site customer for continual feedback, and to define and carry out functional acceptance testing;
- it promotes pair programming and shared code ownership amongst the developers;
- it states that component test scripts shall be written before the code is written, and that those tests should be automated;
- it states that integration and testing of the code shall happen several times a day;
- it states that we always implement the simplest solution to meet today's problems.
Testing within a life cycle model
In summary, whichever life cycle model is being used, there are several characteristics of good testing:
- for every development activity there is a corresponding testing activity;
- each test level has test objectives specific to that level;
- the analysis and design of tests for a given test level should begin during the corresponding development activity;
- testers should be involved in reviewing documents as soon as drafts are available in the development cycle.

2.2 TEST LEVELS 9/14
Terms:
component testing: the testing of individual software components. [After IEEE 610]
stub: a skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
driver: a software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
robustness: the degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610]
robustness testing: testing to determine the robustness of the software product.
Component testing
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that is separately testable. Depending on the context of the development life cycle and the system, component testing may be done in isolation from the rest of the system.
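The roles of stub and driver can be shown in a few lines. The following is a hypothetical sketch, not from the syllabus: a price-calculation component that normally depends on a live tax service is tested in isolation, with a stub replacing the called service and a driver replacing whatever would call the component in the real system.

```python
def get_tax_rate_stub(country):
    """Stub: skeletal replacement for the real (called) tax service."""
    return {"NL": 0.21, "DE": 0.19}.get(country, 0.0)


def gross_price(net, country, tax_service):
    """Component under test: separately testable via the injected service."""
    return round(net * (1 + tax_service(country)), 2)


def driver():
    """Driver: calls the component and checks its behavior in isolation."""
    assert gross_price(100.0, "NL", get_tax_rate_stub) == 121.0
    assert gross_price(100.0, "XX", get_tax_rate_stub) == 100.0  # unknown country: no tax
    return "component test passed"


print(driver())  # component test passed
```

Injecting the stub as a parameter is one simple way to make the component separately testable; test frameworks offer richer substitutes (mocks, fakes), but the stub/driver division of labor is the same.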

2.2.1 Component testing 10/14
Terms:
test-driven development: a way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
One approach in component testing, used in Extreme Programming (XP), is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.
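The test-first approach can be illustrated in miniature. In this hypothetical example (the `leap_year` function is an assumption for illustration), the automated test cases are written first and the implementation is then written to make them pass:

```python
def test_leap_year():
    """Component test cases, specified before leap_year was written."""
    cases = [(2024, True), (1900, False), (2000, True), (2023, False)]
    for year, expected in cases:
        assert leap_year(year) == expected, year
    return "all component tests pass"


# Implementation written second, just enough to satisfy the tests above.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


print(test_leap_year())  # all component tests pass
```

In practice the tests would be automated with a framework such as `unittest` or `pytest` and run on every change, which is what makes the "several integrations and test runs a day" rhythm of XP feasible.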

2.2.2 Integration testing 11/14
Terms:
integration testing: testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
big-bang testing: a type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610]
Integration testing tests interfaces between components, interactions with different parts of a system (such as an operating system, file system and hardware), and interfaces between systems. Integration testing is often carried out by the integrator, but preferably by a specific integration tester or test team. The greater the scope of integration, the more difficult it becomes to isolate failures to a specific interface, which may lead to an increased risk. One extreme is that all components or systems are integrated simultaneously, after which everything is tested as a whole: this is called 'big-bang' integration testing. The other extreme is that all programs are integrated one by one, and a test is carried out after each step (incremental testing). Between these two extremes there is a range of variants.
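Incremental integration can be sketched with three hypothetical components (the components and their record format are assumptions for illustration): integrate one component at a time and test the new interface after each step, instead of assembling everything and testing big-bang.

```python
def parse(raw):
    """Component A: turn a string like 'a=1,b=2' into a dict of strings."""
    return dict(pair.split("=") for pair in raw.split(","))


def to_ints(record):
    """Component B: convert the parsed string values to integers."""
    return {key: int(value) for key, value in record.items()}


def total(record):
    """Component C: sum the integer values."""
    return sum(record.values())


# Step 1: integrate A + B, and test the interface between them.
assert to_ints(parse("a=1,b=2")) == {"a": 1, "b": 2}

# Step 2: add C, and test the whole A + B + C chain.
assert total(to_ints(parse("a=1,b=2"))) == 3

print("incremental integration ok")
```

If step 2 fails after step 1 passed, the defect is almost certainly at the B-to-C interface; with big-bang integration the same failure could have originated at any of the interfaces, illustrating why a larger integration scope makes failures harder to isolate.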

2.2.3 System testing 12/14
Terms:
functional requirement: a requirement that specifies a function that a component or system must perform. [IEEE 610]
non-functional requirement: a requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
test environment: an environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]
System testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product. It may include tests based on risks and/or requirements specifications, business processes, use cases, or other high-level descriptions of system behavior, interactions with the operating system, and system resources. Most often it is carried out by specialist testers who form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager. In some organizations system testing is carried out by a third-party team or by business analysts.
System testing requires a controlled test environment with regard to, amongst other things, control of the software versions, testware and the test data (see Chapter 5 for more on configuration management). A system test is executed by the development organization in a (properly controlled) environment. The test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found by testing.
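One way to act on the "test environment should correspond to production" point is a simple automated comparison of environment configurations. This is a hypothetical sketch: the configuration keys and values below are assumptions, not part of any standard.

```python
PRODUCTION = {"db": "postgres-14", "os": "linux", "app_version": "2.3.1"}
TEST_ENV = {"db": "postgres-14", "os": "linux", "app_version": "2.3.1"}


def environment_gaps(test_env, production):
    """Report every setting where the test environment diverges from
    production; each gap is a potential source of environment-specific
    failures that system testing would not find."""
    return {
        key: (test_env.get(key), value)
        for key, value in production.items()
        if test_env.get(key) != value
    }


assert environment_gaps(TEST_ENV, PRODUCTION) == {}
print("test environment matches production")
```

Running such a check before each system test cycle turns "as much as possible" into something measurable: any reported gap is either fixed or recorded as an accepted, known difference.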

2.2.4 Acceptance testing 13/14
Terms:
acceptance testing: formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]
When the development organization has performed its system test and has corrected all or most defects, the system will be delivered to the user or customer for acceptance testing. The acceptance test should answer questions such as: 'Can the system be released?', 'What, if any, are the outstanding (business) risks?' and 'Has development met their obligations?'. The goal of acceptance testing is to establish confidence in the system, a part of the system, or specific non-functional characteristics (e.g. usability) of the system. It assesses the system's readiness for deployment and use; it is not necessarily the final level of testing. Acceptance testing may occur at more than just a single level, for example:
- a Commercial Off The Shelf (COTS) software product may be acceptance tested when it is installed or integrated;
- acceptance testing of the usability of a component may be done during component testing;
- acceptance testing of a new functional enhancement may come before system testing.

2.2.4 Acceptance testing 14/14
Terms:
operational acceptance testing: operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by an operator and/or administrator, focusing on operational aspects, e.g. recoverability, resource behavior, installability and technical compliance.
alpha testing: simulated or actual operational testing by potential users/customers or an independent test team at the developer's site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
beta testing: operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Within the acceptance test for a business-supporting system, two main test types can be distinguished; as a result of their special character, they are usually prepared and executed separately. Other types of acceptance testing that exist are contract acceptance testing and compliance acceptance testing. Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software.