Thursday, 10 December 2015

Unix: Tutorial One

1. Listing all the files:

When you first log in, your current working directory is your home directory.

To find out what is in your home directory, type:

ls

(lists the files and directories in the current working directory)

ls -a

(lists everything, including hidden files, whose names begin with a dot [.])
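
For example, in a home directory that contains one ordinary file, the two commands might produce output like this (the file names are illustrative):

ls
notes.txt

ls -a
.  ..  .profile  notes.txt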


2. Making Directories:
To make a directory under your home directory, use the command below:

mkdir directoryname

To see the directory you have created, type:

ls

The directory you created will appear in the listing.
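
For example, using a hypothetical directory named unixstuff:

mkdir unixstuff
ls
unixstuff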

3. Change Directory:
To change the current working directory and move into a directory you have created, type:

cd directoryname

This makes the named directory your current working directory.

4. The . and .. directories:

cd .

You will stay where you are; . (dot) means the current directory.

cd ..

It will take you up one level, to the parent of the current directory (in this case, back to your home directory).

5. Pathnames:

To find out where you currently are in the file system, type:

pwd

pwd stands for 'print working directory'. It will show you the full pathname of the current directory.
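
For example, for a hypothetical user jsmith who has moved into the unixstuff directory, the output might be:

pwd
/home/jsmith/unixstuff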

6. ~ (tilde) character:

The tilde can be used to specify paths starting from your home directory, so typing:

ls ~/directoryname

will list the contents of that directory, no matter where you are in the file system.
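
For example, using the hypothetical unixstuff directory created earlier, the following commands list the contents of your home directory and of unixstuff, from anywhere in the file system:

ls ~
ls ~/unixstuff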


Command        Meaning
ls             list files and directories
ls -a          list all files and directories, including hidden ones
mkdir          make a directory
cd directory   change to the named directory
cd             change to your home directory
cd ~           change to your home directory
cd ..          change to the parent directory
pwd            display the path of the current directory

Wednesday, 10 July 2013

Software Testing Life Cycle

Software Testing Life Cycle Graphical Representation:



(A picture is worth a thousand words.)

Wednesday, 6 February 2013

Classic Testing Mistakes




The role of testing
  • Thinking the testing team is responsible for assuring quality.
  • Thinking that the purpose of testing is to find bugs.
  • Not finding the important bugs.
  • Not reporting usability problems.
  • No focus on an estimate of quality (and on the quality of that estimate).
  • Reporting bug data without putting it into context.
  • Starting testing too late (bug detection, not bug reduction).
Planning the complete testing effort
  • A testing effort biased toward functional testing.
  • Under-emphasizing configuration testing.
  • Putting stress and load testing off to the last minute.
  • Not testing the documentation.
  • Not testing installation procedures.
  • An over-reliance on beta testing.
  • Finishing one testing task before moving on to the next.
  • Failing to correctly identify risky areas.
  • Sticking stubbornly to the test plan.
Personnel issues
  • Using testing as a transitional job for new programmers.
  • Recruiting testers from the ranks of failed programmers.
  • Testers are not domain experts.
  • Not seeking candidates from the customer service staff or technical writing staff.
  • Insisting that testers be able to program.
  • A testing team that lacks diversity.
  • A physical separation between developers and testers.
  • Believing that programmers can't test their own code.
  • Programmers are neither trained nor motivated to test.
The tester at work
  • Paying more attention to running tests than to designing them.
  • Unreviewed test designs.
  • Being too specific about test inputs and procedures.
  • Not noticing and exploring "irrelevant" oddities.
  • Checking that the product does what it's supposed to do, but not that it doesn't do what it isn't supposed to do.
  • Test suites that are understandable only by their owners.
  • Testing only through the user-visible interface.
  • Poor bug reporting.
  • Adding only regression tests when bugs are found.
  • Failing to take notes for the next testing effort.
Test automation
  • Attempting to automate all tests.
  • Expecting to rerun manual tests.
  • Using GUI capture/replay tools to reduce test creation cost.
  • Expecting regression tests to find a high proportion of new bugs.
Code coverage
  • Embracing code coverage with the devotion that only simple numbers can inspire.
  • Removing tests from a regression test suite just because they don't add coverage.
  • Using coverage as a performance goal for testers.
  • Abandoning coverage entirely.
Source: http://www.mobileqazone.com/

Tuesday, 8 January 2013

Installation of Android App in Emulator


Installation of any App in Emulator:

Installing an app in the emulator requires the .apk file, which can be found in the project folder.
Path of the .apk file: workspace\project\bin\
There are different ways of installing an Android app (.apk file) in the emulator, as described below:

Way 1: If you are using Eclipse (e.g., version 3.7.2), then running the Android project from Eclipse without any errors will automatically install the app (.apk file) in the emulator (make sure the emulator is open while running the project).

Way 2: We can also install the Android app (.apk file) through adb (Android Debug Bridge).
adb is a versatile command-line tool that lets you communicate with an emulator instance or a connected Android-powered device.

Path of adb: Drive\android-sdk-windows\platform-tools

adb can be used to install any app on Android. The command shown below (and in the screenshot) is used to install an app.

The installation command is:

adb install demo.apk

[adb (tool) install (command) demo.apk (path to the .apk file of the demo Android project)]
Have a look at the screenshot for reference:



Note: the emulator should already be running when you install an app into it.
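
A minimal sketch of the full sequence, assuming the current directory contains the demo.apk file from above:

adb devices
adb install demo.apk

[adb devices lists the running emulators and connected devices; the emulator must appear in this list before installing. adb install prints "Success" when the installation completes.]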

Thursday, 20 December 2012

Windows Phone Test Checklist

While exploring mobile applications, I got the idea of maintaining a checklist for Android and Windows based applications. I am maintaining the checklist here, and I will update it as I come across more scenarios.

A. Verify Application Tile Images:
1. View the Application list.
2. Verify that the small mobile app tile image is representative of the application.
3. From the Application list, tap and hold the small mobile app tile of your application and select 'pin to start'.
4. Verify that the large mobile tile image on the Start screen is representative of the application.

B. Application Closure:
1. Launch your application.
2. Navigate throughout the application, and then close it using the device's Back button.

C. Application Responsiveness:
1. Launch your application.
2. Thoroughly test the application features and functionality.
3. Verify that the application does not become unresponsive for more than three seconds.
4. Verify that a progress indicator is displayed if the application performs an operation that causes the device to appear unresponsive for more than three seconds.
5. If a progress indicator is displayed, verify that the application provides the user with an option to cancel the operation being performed.

D. Application Responsiveness After Being Closed:
1. Launch your application.
2. Close the application using the Back button, or by selecting the Exit function from the application menu.
3. Launch your application again.
4. Verify that the application launches normally within 5 seconds, and is responsive within 20 seconds of launching.

E. Application Responsiveness After Being Deactivated:
1. Launch your application.
2. Deactivate the app by pressing the Start button or by launching another app. (Deactivation does not close the app's process; it merely puts the app in the background.)
3. Verify that the application resumes normally within 5 seconds, and is responsive within 20 seconds of resuming.
4. If your application includes pause functionality, pause the application.
5. Launch your application again.
6. Verify that the application launches normally within 5 seconds, and is responsive within 20 seconds of launching.

F. Back Button: Previous Pages:
1. Launch your application.
2. Navigate through the application.
3. Press the Back button.
4. Verify that the application closes the screen that is in focus and returns you to a previous page within the back stack.

G. Back Button: First Screen:
1. Launch your application.
2. Press the Back button.
3. Verify that either the application closes without error, or it allows the user to confirm closing the application with a menu or dialog.

H. Back Button: Context Menus and Dialogs:
1. Launch your application.
2. Navigate through the application.
3. Display a context menu or dialog.
4. Tap the Back button.
5. Verify that the context menu or dialog closes and you are returned to the screen where it was opened.

I. Back Button: Games:
1. Launch your application.
2. Begin playing the game.
3. Tap the Back button.
4. Verify that the game pauses.

J. Trial Applications:
1. Launch the trial version of your application.
2. Launch the full version of your application.
3. Compare the performance of the trial and full versions of your application.
4. Verify that the performance of the trial version meets the performance requirements mentioned in the test cases above.

K. Verify that the Application Doesn't Affect Phone Calls:
1. Ensure that the phone has a valid cellular connection.
2. Launch your application, and then receive an incoming phone call.
3. Verify that the quality of the phone call is not negatively impacted by sounds or vibrations in your application.
4. End the phone call.
5. Verify that the application returns to the foreground and resumes.
6. Deactivate the application by tapping the Start button.
7. Verify that you can successfully place a phone call.

L. Verify that the Application Doesn't Affect SMS and MMS Messaging:
1. Ensure that the phone has a valid cellular connection.
2. Ensure that the phone is not in Airplane mode by viewing the phone Settings page.
3. Launch your application, and then deactivate it by tapping the Start button.
4. Verify that an SMS or MMS message can be sent to another phone.
5. Verify that notifications about SMS or MMS messages are displayed on the phone, either from within the application or within 5 seconds after the application is closed.

M. Verify Application Responsiveness with Incoming Phone Calls and Messages:
1. Ensure that the phone has a valid cellular connection.
2. Ensure that the phone is not in Airplane mode by viewing the phone Settings page.
3. Launch your application, and then receive an incoming phone call, SMS message, or MMS message.
4. Verify that the application does not stop responding or close unexpectedly when the notification is received.
5. After verifying the step above, tap the message notification or answer the incoming phone call.
6. If a message was received, verify that the user can return to the application by pressing the Back button.

N. Language Validation:
1. Review the product description of the application and verify that it is localized to the target language.
2. Launch your application.
3. Verify that the UI text of the application is localized to the target language.

Please leave your comments so that I can refine this checklist.

Wednesday, 12 December 2012

Bug Life Cycle


A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).

Bug Life Cycle:
In the software development process, a bug has a life cycle that it must go through before it can be closed. A defined life cycle ensures that the process is standardized. The bug attains different states during this cycle, which can be shown diagrammatically as follows:

The different states of a bug can be summarized as follows:



1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to “TEST”. This state specifies that the bug has been fixed and released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There can be many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then the status of one of them is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester retests it. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the developer has fixed it, the tester changes the status to “REOPENED”, and the bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Guidelines on deciding the severity of a bug:

Indicate the impact each defect has on testing efforts or on the users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects.

A sample guideline for assignment of Priority Levels during the product test phase includes:

1. Critical / Show Stopper: An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.

2. Major / High: A defect that does not function as expected or designed, or that causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.

3. Average / Medium: Defects that do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links that lead to different end points.

4. Minor / Low: Cosmetic defects that do not affect the functionality of the system can be classified as Minor bugs.

Thursday, 29 November 2012

API and API Testing


What is API?
An API (Application Programming Interface) is a collection of software functions and procedures, called API calls, that can be executed by other software applications.

What is API Testing?
API testing is mostly used for systems that have a collection of APIs to be tested. The system could be system software, application software, or libraries. API testing differs from other testing types because the GUI is rarely involved. Even though no GUI is involved, you still need to set up the initial environment, invoke the API with the required set of parameters, and then analyze the result.

Setting up the initial environment becomes complex precisely because no GUI is involved: you need some other way to make sure the system is ready for testing. This setup can be divided into test environment setup and application setup. Configuring the database and starting the server are part of the test environment setup, whereas creating an object before calling a non-static member of a class falls under application-specific setup. The initial conditions in API testing also involve creating the conditions under which the API will be called: an API can be called directly, or it can be called in response to some event or exception.

Test Cases for API Testing:
Test cases for API testing are based on the API's output:

Return value based on input condition
Relatively simple to test, as the input can be defined and the results validated. Example: it is very easy to write test cases for an API like int add(int a, int b). You can pass different combinations of a and b and validate the results against known values.
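
A minimal JUnit 4 sketch of such tests (the Calculator class and its add method are hypothetical stand-ins for the API under test):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    @Test
    public void addReturnsSumForTypicalInputs() {
        assertEquals(5, Calculator.add(2, 3));
    }

    @Test
    public void addHandlesBoundaryValues() {
        // Equivalence classes and boundary analysis: zero and negative operands.
        assertEquals(0, Calculator.add(0, 0));
        assertEquals(-1, Calculator.add(2, -3));
    }
}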

Does not return anything
When there is no return value, the behavior of the API on the system must be checked.
Example: a test case for a delete(ListElement) function will probably need to validate the size of the list, or the absence of the deleted element in the list.

Trigger some other API/event/interrupt
If the output of an API triggers some event or raises an interrupt, then those events and interrupt listeners should be tracked. The test suite should call the appropriate API, and checks should be made on the interrupts and listeners.

Update data structure
This category is similar to APIs that do not return anything: updating a data structure will have some effect on the system, and that effect should be validated.

Modify certain resources
If an API call modifies some resources, for example updating a database, changing the registry, or killing some processes, it should be validated by accessing the respective resources.


API Testing vs. Unit Testing: What’s the difference?
1. API testing is not unit testing. Unit testing is owned by the development team, while API testing is owned by the QE team. API testing is mostly black-box testing, whereas unit testing is essentially white-box testing.

2. Both API testing and unit testing target the code level, hence similar tools can be used for both activities. Several open source tools are available for API testing, among them WebInject, JUnit, XMLUnit, HttpUnit, and Ant.

3. The API testing process involves testing the methods of .NET, Java, or J2EE APIs with valid, invalid, and inappropriate inputs, and also testing the APIs on application servers.

4. Unit testing is owned by the development team: developers are expected to build unit tests for each of their code modules (typically classes, functions, stored procedures, or some other 'atomic' unit of code), and to ensure that each module passes its unit tests before the code is included in a build. API testing, on the other hand, is owned by the QE team, staff other than the authors of the code. API tests are often run after the build is ready, and it is common that the authors of the tests do not have access to the source code; they essentially create black-box tests against an API rather than the traditional GUI.

5. Another key difference between API and unit testing lies in test case design. Unit tests are typically designed to verify that each unit performs as it should in isolation; their scope often does not consider the system-level interactions of the various units. API tests, in contrast, are designed to exercise the 'full' functionality of the system as it will be used by end users. This means that API tests must be far more extensive than unit tests, and must take into consideration the sorts of 'scenarios' the API will be used for, which typically involve interactions between several different modules within the application.

API Testing Approach
An approach to test the Product that contains an API.

Step I:
Understand that API testing is a testing activity that requires some coding and is usually beyond the scope of what developers are expected to do. The testing team should own this activity.

Step II:
Traditional testing techniques, such as equivalence classes and boundary analysis, are also applicable to API testing, so even if you are not too comfortable with coding, you can still design good API tests.

Step III:
It is almost impossible to test all the scenarios in which your API could be used. Hence, focus on the most likely scenarios, and apply techniques like soap opera testing and forced-error testing with different data types and sizes to maximize test coverage. The main challenges of API testing fall into the following categories:
• Parameter selection
• Parameter combination
• Call sequencing

API Framework
The framework is more or less self-explanatory. The purpose of the config file is to hold all the configurable components and their values for a particular test run. Accordingly, the automated test cases should be represented in a parse-able format in the config file, and the script should be highly configurable. In API testing it is not necessary to test every API in every test run (the number of APIs tested will lessen as testing progresses), so the config file should have sections detailing which APIs are "activated" for the particular run. Based on this, the test cases to execute should be picked up.

Since inserting the automation test case parameters into the config file can be a tedious activity, the file should be designed so that test cases can be left in place statically, with a mechanism for 'activating' and 'deactivating' them.
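
A hypothetical sketch of such a config file (the section and case names are illustrative):

# Each API section can be activated or deactivated for a run
# without deleting its test cases.
[add_api]
active = true
cases = add_positive, add_negative, add_boundary

[delete_api]
active = false
cases = delete_existing, delete_missing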




Definitions:

Soap Opera Testing:
Soap opera tests exaggerate and complicate scenarios in the way that television soap operas exaggerate and complicate real life.

Forced Error Testing:
Forced-error testing is essentially mutation testing: the process of inducing errors or changes in the application to see how the application behaves. The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated; this list is used as a baseline for developing test cases.

Software Functions and Procedures:
Functions and procedures are the foundations of programming. They provide the structure to organize the program into logical units that can manage the various activities needed for a program.

Functions
There are two basic types of functions:

Built-in: these are built into the programming environment and do things such as opening and closing files, printing, writing, and converting variables (e.g., text to numbers, singles to integers, etc.).

Application/user-specific: depending on what the program needs, you can build your own functions and procedures out of built-in functions, procedures, and variables.

Procedures
Procedures are used to perform a series of tasks. They usually include other procedures and functions within the program. Procedures typically do not return a value; they are simply executed and then return control to the calling procedure or subroutine. Procedures in Visual Basic are called "Subroutines", often "Sub" for short. In JavaScript, functions are used as procedures (they simply return no value, or null, to whatever called them).
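
In Java terms, a minimal sketch of the distinction (the class and method names are illustrative):

public class Demo {
    // Function: computes and returns a value.
    static int square(int x) {
        return x * x;
    }

    // Procedure: performs a task and returns nothing (void).
    static void logMessage(String msg) {
        System.out.println(msg);
    }

    public static void main(String[] args) {
        logMessage("square(4) = " + square(4)); // prints: square(4) = 16
    }
}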

Source: www.scribd.com/doc/9808382/Introduction-to-API-Testing