Testing

I think we need a testing concept soon. We’re getting closer to version 1.0. Most features are implemented, and in my opinion we should now focus on bug fixing and improving the quality of the software. I want to give bug fixing higher priority and won’t continue developing the RMG.

We could maintain a test matrix with several test cases. A test case should describe what should be tested and how, so that testers without programming skills can perform them. We could slowly increase the number of test cases from release to release. There’s no hurry.

The test matrix could be maintained in a Google spreadsheet or in a specialized web application, if one exists. If a developer changes a part of the software that is already tested, they should re-trigger the related tests; there’s no need to re-execute all of them. In addition, a developer shouldn’t test things they developed themselves; those should be tested by someone else. Perhaps we could also create debug/test builds more frequently if necessary.

The amount of work can be decreased with savegames that provide the test scenario. This requires that we finally make our savegames backwards compatible. I think this is important for version 1.0 anyway, so it’s better to start thinking about a concept now. Unit tests could also decrease the amount of work.

PS: … and I forgot to say that I believe VCMI has so much potential. There are still many people who are interested in Heroes 3; it’s unbelievable. Just look at the HC forums, GoG.com, Google Trends.

Doesn’t belong here, but anyway. Here’s a comparison of HotA/VCMI: google.com/trends/explore?q= … cmi&cmpt=q
H3-H6: google.com/trends/explore?q= … 206&cmpt=q

I’ll add my five cents: beegee, your work on the RMG is unique, and I think the RMG must be part of the 1.0 release in some form.
If you decide to continue working on the RMG, that will be best for VCMI.
There are not many bugs left - that’s because VCMI can now be played fully. I’ve finished several maps already, and only one game couldn’t be completed, due to a crash on the AI’s turn.
So bugs are not the main problem; functionality is.

Just throwing some ideas:

  • Maybe try to formalize the process of moving issues from “resolved” to “closed”? Allow this action for all registered users, or for a specific group of testers, and ask them to recheck all resolved issues plus any related areas once a new build is out. Right now only Zamolxis does this occasionally.
    This won’t give us proper regression testing, but it’s better than nothing.

  • I remember an idea to use the AI to run a specific sequence of actions in order to test something. Is this a viable option? It won’t help with GUI bugs, but it should detect mechanics/AI issues.

  • Unit testing may help only in small areas that are not tightly connected to the rest of the engine. But right now I can’t think of any such areas that have a tendency to break.

  • What about some static code analysis tool? A recent cppcheck run detected some obvious bugs, and clang has quite a good built-in analyzer too. Any other options?

Wow. I just compared M&M to some other AAA-level titles on Google Trends. I expected the results to be much worse than they are. Hope is not lost :slight_smile:

It won’t be 100% reliable - a lot of systems are interconnected with each other in mysterious ways, and this is what causes the most hard-to-track bugs. My personal “favorite” here is a case where a minor change in pathfinding broke the construction of buildings in towns.

I wish that were true…
Right now I see 256 issues (unresolved, all categories except AI). Some of them are quite severe. Some are close to unfixable in the current state of the code. And these are only the issues that we didn’t catch ourselves during development.

I think we can benefit from several things: static code analysis, unit testing (if possible, but difficult), AI testing (IMO a kind of unit testing), and test cases.

We could have an additional state for our Mantis entries. When something is developed, it is first “in cvs”. Then a tester can check whether everything works correctly and set the state to “resolved” or “tested”.

Perhaps it is at least 90% reliable; that’s better than nothing. It’s really mysterious that pathfinding and buildings in towns are somehow connected :)

I will start setting up a testing concept and a test matrix in mid-December 2013. Let’s see how it works.

@Macron1:
Sorry, but I think improving the quality is more important than providing missing features like the RMG. In the original Heroes 3, the RMG was also added later :)

I think I’ll see what can be done with tools like code analyzers - I’ve been playing with them from time to time.

From my point of view, the current bugtracker setup is OK:
Resolved = bugfix is already in vcs but not tested by reporter/tester.
Closed = tested and confirmed to be fixed.

Let’s hope it’s 90% or somewhere around that. I’m not sure how easy it is to create a list of affected systems given how interconnected the H3 mechanics are.
FYI:
build structure -> triggers FoW update -> game updates the path of the current hero -> crash in the pathfinder, because the selected object is not a hero.
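Purely as an illustration, a defensive check along these lines in the path update would have turned that crash into a no-op. CGObjectInstance and CGHeroInstance are the real VCMI classes, but both functions here are made up for this sketch:

```cpp
// Hypothetical sketch, not the actual VCMI pathfinder code: the crash came
// from assuming the selected adventure-map object is always a hero.
void updatePathsOnFowChange(const CGObjectInstance * selected)
{
	// dynamic_cast yields nullptr when the selection is e.g. a town
	if(auto hero = dynamic_cast<const CGHeroInstance *>(selected))
		recalculatePaths(*hero); // recalculatePaths is a made-up helper
	// otherwise there is nothing to recalculate for non-hero selections
}
```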

I see, I didn’t know that. OK, that’s fine :slight_smile:

We can try it; if it isn’t worth the effort, we’ll drop it.

In case you’re interested in different analysis tools, I can add some more options. Right now we (you) have this list of things:

cppcheck
warnings from gcc (sadly, there are a lot of them)
clang analyzer

If you plan to have somewhat continuous analysis and integration, we can register VCMI on the Coverity online scan. It’s free for open-source projects; we’ve already registered SuperTuxKart there and fixed some problems. It looks like this:
Screenshot
In case that’s not enough and any of you uses Visual Studio/Windows for development, you can ask PVS-Studio for a free license for their product. Ivan and other people who can read Russian can learn more about it on habrahabr.ru (there are a lot of articles about PVS-Studio). As far as I know, they provide free licenses for open-source projects without any limitations. Actually, this is something I’d like to try myself someday (I have Windows and legal access to the most recent Visual Studio versions), but tbh VCMI’s code structure is somewhat fuzzy, and I haven’t seen any generated Doxygen documentation (point me to it if I’m wrong); that’s what stops me from digging into the code.

Of course, this is not strongly connected with testing, but I suggest discussing these things too. :wink:

cppcheck - I’ve gone through its report. Even after filtering out messages like i++ vs ++i, it still produces quite a lot of noise and even obviously wrong findings - it’s hard to find anything useful.

warnings from gcc - we currently have most of the “default set” (-Wall -Wextra) enabled already. I’ll check what else it has to offer.

clang analyzer - completely useless, unless I missed something. Running “clang --analyze” on our code produced something like 10 null-pointer errors, and none of them were real bugs.
On the other hand, normal compilation with clang can detect possible bugs much better than gcc.

PVS-Studio - I know about it. It sounds interesting, but it’s Windows-only. They also keep an English blog here: viva64.com/en/b/

One more interesting tool that I use from time to time is valgrind - a great profiler and memory-leak detector.

Yeah, VCMI has this problem :frowning:
You can find some bits of documentation on the wiki: wiki.vcmi.eu/index.php?title=Code_structure
Not much, but better than nothing.

cppcheck has support for suppression rules, so you won’t receive unneeded warnings, though I haven’t used that before.
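For what it’s worth, cppcheck also understands inline suppressions when run with --inline-suppr; a minimal example (the warning id is just an illustration):

```cpp
#include <cstdio>

int readFirst(const int * data)
{
	// If cppcheck reported a false positive on the line below, a comment
	// like this would silence exactly that finding and nothing else.
	// cppcheck-suppress nullPointer
	return *data;
}

int main()
{
	int x = 42;
	std::printf("%d\n", readFirst(&x));
	return 0;
}
```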

Well, valgrind is (from what I’ve heard) a great tool, but it requires a higher level of understanding of what’s actually happening.

If you’re up for this, I can register VCMI on the Coverity online scan, prepare and send the source code for analysis, and add everybody involved in VCMI’s development as admins, so it won’t be only me who can see the defects. It will also be good in case I disappear: any of you will be able to continue using Coverity.

The reason I want to do this is simple: I’m able to fix some bugs myself, but watching somebody else fix defects will help me understand what’s right and what’s wrong.

Re documentation: I saw some comments in the source code that look like Doxygen comments.

Yes, I know that. But even after disabling such unneeded warnings, I still got quite a lot of noise. And I don’t want to push it further by writing rules to disable them for a specific file or a specific line - too much work for too little profit.

I’d rather wait for a reply from Tow (our project leader) and beegee (since he started this).

Yeah, most documentation comments use ///, so Doxygen should be able to make use of them. But this documentation is far from complete: at most you’ll get documented methods, and in some cases documented members as well. You won’t get any high-level description of our systems.
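For illustration, this is roughly the /// style in question; Doxygen picks it up without extra configuration (the function itself is invented, though int3 is a real VCMI type):

```cpp
/// Estimates the movement cost between two adjacent tiles.
/// @param from starting tile position
/// @param to   destination tile position
/// @return     cost in movement points
int getMovementCost(const int3 & from, const int3 & to);
```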

Ok.

Coverity is free and VCMI meets the requirements to be registered, so I think we can try it and compare the results with cppcheck.

I think we should have one person responsible for checking from time to time whether cppcheck and/or other static code analysis tools detect any potential program errors, so we can fix bugs early. According to our coding guidelines, one should compile with the highest warning level and shouldn’t commit new warnings, so that part is covered.

The last few posts have been mostly about static code analysis; I opened this thread to discuss testing. Static analysis is important too, but if you want to talk about it in more detail, we can open another thread.

Tow, can you estimate how much effort it would take to develop a system where the AI tests the gameplay and game mechanics? Is it useful? I think we would need to feed the AI predictable data for that. It comes close to unit tests, doesn’t it? So do we really need the AI? Next question: do you have an idea for providing savegames that are backwards compatible? Does it really require cluttering our code with if/else version checks?

Sure, if we can get useful diagnostics with no effort, then we should go for it.

Setting this up shouldn’t be that hard. There’s one thing that should possibly be taken care of first: preparing a new AI interface, built on top of the current one. The current player interface works both for human and AI players. It has to be used in a very specific way, so naive AI code could very easily break things.
I would like to take all the parts of VCAI that do “bookkeeping” (counting queries, tracking whether we are engaged in battle, and so on), plus the helpers for the interface (like the VCAI::requestActionASAP abomination), and refactor them out.

Without that, any new adventure map AI would need to duplicate many non-trivial parts of the VCAI code.
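A rough sketch of what that layering could look like; all names are invented, and this only illustrates the proposed split, not actual VCMI interfaces:

```cpp
// The base class owns the error-prone "bookkeeping" and exposes only safe
// hooks, so naive AI code cannot misuse the player interface.
class AdventureAIBase
{
public:
	// Called by the engine; non-virtual so the bookkeeping always runs.
	void onBattleStart()
	{
		++pendingQueries; // track outstanding engine queries centrally
		inBattle = true;  // track battle engagement centrally
		battleStarted();  // delegate to the actual AI logic
	}

protected:
	virtual void battleStarted() = 0; // concrete AIs override only hooks

	int pendingQueries = 0;
	bool inBattle = false;
};

// A new adventure map AI (or a testing AI) then implements only decisions.
class TestingAI : public AdventureAIBase
{
protected:
	void battleStarted() override { /* assert on mechanics, pick actions */ }
};
```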

That’s one issue. The second one is the non-deterministic character of the game. For example, we could make an AI that tests battles. We make it play a predefined battle (such a mechanism already exists) and assert all the mechanics details we can check. E.g. the AI makes sure that the game properly estimates the damage range for all unit pairs. Then it casts some spell and checks whether all the values are right.
However, as soon as the AI takes some action (e.g. an attack), the RNG comes into play and the results become unpredictable and hard to assert about. We could:

  • make the AI perform only deterministic actions (no attacks) until the issue is solved
  • make a special flag that fixes the RNG to always return predefined values; then the whole battle can be played fully deterministically (see the sketch after this list)
  • make AI testing easier by providing the AI with more flexible cheat codes (e.g. allowing it to explicitly put/remove effect X on unit Y)
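A minimal sketch of the second option, assuming a central randomizer that all battle rolls go through; every name here is invented:

```cpp
#include <optional>
#include <random>

// Sketch only: with the test flag set, every roll returns the predefined
// value, so a scripted battle replays identically on every run.
class GameRandomizer
{
public:
	void fixForTesting(int value) { fixedValue = value; }

	int roll(int min, int max)
	{
		if(fixedValue)
			return *fixedValue; // deterministic test mode
		std::uniform_int_distribution<int> dist(min, max);
		return dist(engine);    // normal gameplay mode
	}

private:
	std::optional<int> fixedValue;
	std::mt19937 engine{std::random_device{}()};
};
```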

I believe such a system can be set up much more easily than a comparable set of unit tests. Most of VCMI’s mechanics (in-game, battles) require a really massive environment. It is possible to set this up and then run unit tests, but it would take some effort and would increase the maintenance cost.
If we use the AI, no special setup is needed: the normal game mechanisms are used (and tested along the way). Moreover, AI tests cover a much broader area. Unit tests are meant for small, isolated components; the AI makes sure that they all work and are properly integrated at the same time. [On the other hand, in case of failure we get much less specific info, but I don’t think that is a problem as long as the issue is reproducible and easily debuggable.]

They can be provided right now: the serialize method takes a savegame version as its second argument (and the CLoadFile constructor needs to be given an additional argument, the minimal supported version).
However, I would rather avoid this. The maintenance cost would rise quickly and the mechanism would be bug-prone. Savegame compatibility breaks because the data VCMI stores changes, and it is not always possible to fill in the blanks when reading an old savegame (or worse, filling them incorrectly could introduce subtle bugs).
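To make the trade-off concrete, the version checks would look roughly like this. The field names, the version number, and the h.saving check are my assumptions; only the serialize signature matches what is described above:

```cpp
// Sketch of version-gated serialization: every field added after a released
// save format needs its own branch, which is exactly the clutter (and the
// blank-filling risk) mentioned above.
struct SomeGameObject
{
	int oldField = 0;
	int newField = 0;

	template <typename Handler>
	void serialize(Handler & h, const int version)
	{
		h & oldField;
		if(version >= 742)         // field added in save format 742 (made up)
			h & newField;
		else if(!h.saving)
			newField = 0;          // guess a default when loading older saves
	}
};
```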

The mechanism is rather meant for developers, as a workaround to load old games when really needed. Still, I would rather branch the code and revert the compatibility-breaking commits than play with compatibility. Well, unless the compatibility break is needed exactly to fix the issue reported with the savegame.

When we reach a point where the game state structure is complete, solid, and doesn’t need rewrites, then it can make sense to provide backwards compatibility. At the moment we are not there yet.

Please remove my duplicate reports from Mantis…

I don’t know if I will do this in December, and I’m somewhat limited when it comes to refactoring VCMI :) Thanks for the information; perhaps AI testing can be implemented without too much effort.

Then let’s do it this way. You know the VCMI engine best; I’ll trust you :)

EDIT:

Perhaps we don’t need savegames for our tests. We can have prepared maps that provide the testing scenario, so we save time when re-testing a test case. Additionally, we have cheat codes.

Well, for VCAI improvements you’ll have to wait at least until I restore it to a working state. Yesterday I implemented a clone pattern on top of the already existing method-chain pattern, but I haven’t finished it yet.
Once done, however, it will allow us to use Goals in an object-oriented fashion, shorten the code, and let us process Goals in arbitrary order via smart pointers.
Overall, decoupling the AI into interface, logic, and what Tow called “bookkeeping” is necessary to tell what is actually going on and where our testing points are.

The bane of version 0.94 is the gate to the underground world (the game crashes there). It is impossible either to play or to test. Please compile a version with this error fixed (at least for testing).