I was looking for a testing framework for Erlang. I found several links, but they looked a little outdated. I also found a blog post that gathers information about several different testing frameworks: http://bc.tech.coop/blog/070613.html. However, there is nothing in it about the latest Test Server and its front end, Common Test.
Here I will present how to use it.
Testing environment
My testing environment:
- OS: Ubuntu Intrepid
- Erlang: V5.6.3
- Test server: 3.2.2
- Common Test: 1.3.2
Test preparation
The easiest way of preparing a test is to generate a skeleton in Emacs: Menu > Erlang > Skeletons > Erlang test suite CT front end. It should insert the following template into the current buffer:
%%%-------------------------------------------------------------------
%%% File    : test_SUITE.erl
%%% Author  : <godfryd@t40>
%%% Description :
%%%
%%% Created : 3 Oct 2008 by <godfryd@t40>
%%%-------------------------------------------------------------------
-module(test_SUITE).

%% Note: This directive should only be used in test suites.
-compile(export_all).
-include("ct.hrl").
%%--------------------------------------------------------------------
%% COMMON TEST CALLBACK FUNCTIONS
%%--------------------------------------------------------------------

%%--------------------------------------------------------------------
%% Function: suite() -> Info
%%
%% Info = [tuple()]
%%   List of key/value pairs.
%%
%% Description: Returns list of tuples to set default properties
%%              for the suite.
%%
%% Note: The suite/0 function is only meant to be used to return
%%       default data values, not perform any other operations.
%%--------------------------------------------------------------------
suite() ->
    [{timetrap,{minutes,10}}].

%%--------------------------------------------------------------------
%% Function: init_per_suite(Config0) ->
%%               Config1 | {skip,Reason} | {skip_and_save,Reason,Config1}
%%
%% Config0 = Config1 = [tuple()]
%%   A list of key/value pairs, holding the test case configuration.
%% Reason = term()
%%   The reason for skipping the suite.
%%
%% Description: Initialization before the suite.
%%
%% Note: This function is free to add any key/value pairs to the Config
%% variable, but should NOT alter/remove any existing entries.
%%--------------------------------------------------------------------
init_per_suite(Config) ->
    Config.

%%--------------------------------------------------------------------
%% Function: end_per_suite(Config0) -> void() | {save_config,Config1}
%%
%% Config0 = Config1 = [tuple()]
%%   A list of key/value pairs, holding the test case configuration.
%%
%% Description: Cleanup after the suite.
%%--------------------------------------------------------------------
end_per_suite(_Config) ->
    ok.

%%--------------------------------------------------------------------
%% Function: init_per_testcase(TestCase, Config0) ->
%%               Config1 | {skip,Reason} | {skip_and_save,Reason,Config1}
%%
%% TestCase = atom()
%%   Name of the test case that is about to run.
%% Config0 = Config1 = [tuple()]
%%   A list of key/value pairs, holding the test case configuration.
%% Reason = term()
%%   The reason for skipping the test case.
%%
%% Description: Initialization before each test case.
%%
%% Note: This function is free to add any key/value pairs to the Config
%% variable, but should NOT alter/remove any existing entries.
%%--------------------------------------------------------------------
init_per_testcase(_TestCase, Config) ->
    Config.

%%--------------------------------------------------------------------
%% Function: end_per_testcase(TestCase, Config0) ->
%%               void() | {save_config,Config1}
%%
%% TestCase = atom()
%%   Name of the test case that is finished.
%% Config0 = Config1 = [tuple()]
%%   A list of key/value pairs, holding the test case configuration.
%%
%% Description: Cleanup after each test case.
%%--------------------------------------------------------------------
end_per_testcase(_TestCase, _Config) ->
    ok.

%%--------------------------------------------------------------------
%% Function: sequences() -> Sequences
%%
%% Sequences = [{SeqName,TestCases}]
%% SeqName = atom()
%%   Name of a sequence.
%% TestCases = [atom()]
%%   List of test cases that are part of the sequence
%%
%% Description: Specifies test case sequences.
%%--------------------------------------------------------------------
sequences() ->
    [].

%%--------------------------------------------------------------------
%% Function: all() -> TestCases | {skip,Reason}
%%
%% TestCases = [TestCase | {sequence,SeqName}]
%% TestCase = atom()
%%   Name of a test case.
%% SeqName = atom()
%%   Name of a test case sequence.
%% Reason = term()
%%   The reason for skipping all test cases.
%%
%% Description: Returns the list of test cases that are to be executed.
%%--------------------------------------------------------------------
all() ->
    [a_test_case].

%%--------------------------------------------------------------------
%% TEST CASES
%%--------------------------------------------------------------------

%%--------------------------------------------------------------------
%% Function: TestCase() -> Info
%%
%% Info = [tuple()]
%%   List of key/value pairs.
%%
%% Description: Test case info function - returns list of tuples to set
%%              properties for the test case.
%%
%% Note: This function is only meant to be used to return a list of
%% values, not perform any other operations.
%%--------------------------------------------------------------------
a_test_case() ->
    [].

%%--------------------------------------------------------------------
%% Function: TestCase(Config0) ->
%%               ok | exit() | {skip,Reason} | {comment,Comment} |
%%               {save_config,Config1} | {skip_and_save,Reason,Config1}
%%
%% Config0 = Config1 = [tuple()]
%%   A list of key/value pairs, holding the test case configuration.
%% Reason = term()
%%   The reason for skipping the test case.
%% Comment = term()
%%   A comment about the test case that will be printed in the html log.
%%
%% Description: Test case function. (The name of it must be specified in
%%              the all/0 list for the test case to be executed).
%%--------------------------------------------------------------------
a_test_case(Config) ->
    ok.
As the template shows, each test case is defined by a pair of functions:
- one with no arguments (the test case info function; it returns a list of properties for the case, generally the empty list []),
- one taking a Config argument, which performs the actual test.
Once we have edited the skeleton, it can be saved. Test cases grouped in one file form a suite. A file defining a suite must be named *_SUITE.erl, so that it can be found automatically by the test runner. In our case the file will be named test_SUITE.erl.
All suites have to be placed in one directory, e.g. ./test. So now we have:

./test
./test/test_SUITE.erl
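Starting from an empty project root, this layout can be created like so (the touch below is only a stand-in for saving the skeleton from Emacs):

```shell
# create the suite directory; every *_SUITE.erl goes in here
mkdir -p test
# stand-in for saving the Emacs skeleton as test/test_SUITE.erl
touch test/test_SUITE.erl
```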
Running tests
We have a test suite with one empty test case, so we can try to run it. Common Test provides a test runner called run_test, located in the /usr/lib/erlang/lib/common_test-1.3.2/priv/bin/ directory.
So to execute all test cases in all test suites in the test directory, we run the command:
$ run_test -dir test
It should produce the following output:
Erlang (BEAM) emulator version 5.6.3 [source] [async-threads:0] [kernel-poll:false]

Common Test starting (cwd is /home/godfryd/work/erl_test)

Eshell V5.6.3 (abort with ^G)
(ct@t40)1> Common Test: Running make in test directories...
CWD set to: "/home/godfryd/work/erl_test/ct_run.ct@t40.2008-10-03_18.06.46"

TEST INFO: 1 test(s), 1 case(s) in 1 suite(s)

Testing work.erl_test: Starting test, 1 test cases
Testing work.erl_test: TEST COMPLETE, 1 ok, 0 failed of 1 test cases
Tested code
Now let's add some code that will be tested. Put it into a main.erl file:
-module(main).
-export([add_one/1]).

add_one(X) ->
    X + 1.
It contains one function add_one that adds 1 to its argument and returns the result.
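Before wiring it into the suite, a quick sanity check in the Erlang shell (run erl in the directory holding main.erl):

```erlang
1> c(main).
{ok,main}
2> main:add_one(3).
4
```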
Let's compile it and store the compiled beam file in the ebin directory (creating the directory first):

$ mkdir ebin
$ erlc -o ebin main.erl
So now we have:

./test
./test/test_SUITE.erl
./main.erl
./ebin
./ebin/main.beam
New test case
To test our function add_one, we add a new test case to the test_SUITE.erl file:
test_add_one() ->
    [].

test_add_one(Config) ->
    4 = main:add_one(3),
    ok.
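The bare pattern match doubles as the assertion here: Common Test treats any uncaught exit as a failure. As a sketch (this hypothetical case is not part of our suite), a wrong expectation would fail like this:

```erlang
%% Hypothetical failing case: the match below exits with {badmatch,4},
%% which Common Test reports as a failed test case.
test_add_one_failing(_Config) ->
    5 = main:add_one(3),
    ok.
```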
and list it in the suite's all function:
all() -> [a_test_case, test_add_one].
Final test execution
So now we have a function to be tested and one test case for it. Let's run the tests again.
$ run_test -dir test
Erlang (BEAM) emulator version 5.6.3 [source] [async-threads:0] [kernel-poll:false]

Common Test starting (cwd is /home/godfryd/work/erl_test)

Eshell V5.6.3 (abort with ^G)
(ct@t40)1> Common Test: Running make in test directories...
Recompile: test_SUITE
./test_SUITE.erl:160: Warning: variable 'Config' is unused
./test_SUITE.erl:166: Warning: variable 'Config' is unused
CWD set to: "/home/godfryd/work/erl_test/ct_run.ct@t40.2008-10-03_18.10.25"

TEST INFO: 1 test(s), 2 case(s) in 1 suite(s)

Testing work.erl_test: Starting test, 2 test cases

- - - - - - - - - - - - - - - - - - - - - - - - - -
test_SUITE:test_add_one failed on line 167
Reason: undef
- - - - - - - - - - - - - - - - - - - - - - - - - -

Testing work.erl_test: *** FAILED *** test case 2 of 2
Testing work.erl_test: TEST COMPLETE, 1 ok, 1 failed of 2 test cases
It failed because the module main could not be found. We have already created the ebin directory and stored the main.beam module there, so to solve this we just need to point the runtime at it using the ERL_LIBS environment variable:
$ export ERL_LIBS=`pwd`
The Erlang runtime will search for modules in the ebin subdirectory of every directory listed in ERL_LIBS.
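One way to check the effect (code:which/1 reports the beam file the code server would load the module from; the path shown assumes the project root used above):

```erlang
1> code:which(main).
"/home/godfryd/work/erl_test/ebin/main.beam"
```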
So try again:
$ run_test -dir test
Results:
Erlang (BEAM) emulator version 5.6.3 [source] [async-threads:0] [kernel-poll:false]

Common Test starting (cwd is /home/godfryd/work/erl_test)

Eshell V5.6.3 (abort with ^G)
(ct@t40)1> Common Test: Running make in test directories...
Recompile: test_SUITE
./test_SUITE.erl:160: Warning: variable 'Config' is unused
./test_SUITE.erl:166: Warning: variable 'Config' is unused
CWD set to: "/home/godfryd/work/erl_test/ct_run.ct@t40.2008-10-03_18.15.42"

TEST INFO: 1 test(s), 2 case(s) in 1 suite(s)

Testing work.erl_test: Starting test, 2 test cases
Testing work.erl_test: TEST COMPLETE, 2 ok, 0 failed of 2 test cases

Updating /home/godfryd/work/erl_test/index.html... done
Updating /home/godfryd/work/erl_test/all_runs.html... done
It passed! To see more details we can check the index.html file, which holds the latest results. From there you can drill down into the logs of each test case and so on.
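As an aside (based on the ct module API rather than anything shown above), the same run can also be started from an Erlang shell instead of the run_test script:

```erlang
%% roughly equivalent to: run_test -dir test
1> ct:run_test([{dir, "test"}]).
```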