Logging

If I could get initializer lists and variadic templates just by killing VC10, I would have done so a long time ago. :wink: Unfortunately, of the features you listed, VC11 supports only strongly typed enums and range-based for. The other ones were released in the so-called “November '12 CTP of the C++ compiler”, which is alpha-quality and practically useless.
I had hoped that these C++11 features would be released as part of the recent Update 2, but that did not happen. I guess we’ll have to wait at least till the Build 2013 conference (26–28 Jun) for further updates. That would be consistent with their promise that the “next batch of C++11 features would be coming in the first half of 2013” ( blogs.msdn.com/b/vcblog/archive/ … x#10369778 ).

XP support is not an issue. AFAIK there are no developers using Windows XP and Visual Studio. If there were, they could always switch to MinGW. AVS uses that configuration, if I remember correctly.

Capturing a stacktrace is not always reliable in optimized builds. Even when it works, you need debug symbols to get the names of symbols and source file references; those symbols are big and are not distributed with VCMI binaries.
The macro seems much more feasible to me. It can also create a custom exception class object and reliably pass its constructor the information about where the throw takes place. (A stacktrace is still impossible, though :frowning: )
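To make the idea concrete, here is a minimal sketch of such a macro. The names LocatedException and THROW_LOCATED are invented for this example and are not part of the VCMI code:

// Hypothetical sketch only; names are illustrative.
#include <stdexcept>
#include <sstream>
#include <string>

class LocatedException : public std::runtime_error
{
public:
	LocatedException(const std::string & message, const char * file, int line)
		: std::runtime_error(formatMessage(message, file, line))
	{
	}

private:
	static std::string formatMessage(const std::string & message, const char * file, int line)
	{
		std::ostringstream out;
		out << message << " [thrown at " << file << ":" << line << "]";
		return out.str();
	}
};

// The macro fills in the throw location, so call sites stay short:
//     THROW_LOCATED("Cannot open config file");
#define THROW_LOCATED(msg) throw LocatedException((msg), __FILE__, __LINE__)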

Why wouldn’t we want to print the line number, etc. of the throwing place when the logging system fails to initialize? Such a failure seems to me like an abnormal situation where we do care what caused it.

I’d say that every situation where we throw an exception should by default be treated as an error, and we should log as much information about it as possible. It is better to log the source code line for a non-critical error than not to log it for a truly critical one.

I am aware that there are places where exceptions are used to steer the code flow (not to signal an unexpected problem), and there logging the location may be pointless. If an exception throw is an expected execution path, such cases can be handled individually. I am not saying we should forcibly use the hypothetical throwing macro for every throw in the code.

Sorry, I don’t quite understand this point. Why should they decide? The logger just logs.
Whether to write the location should be decided either at the throwing place or by the catch block.

Well, a custom “throw” macro seems like a way to get more accurate logs without imposing a cost on writing/maintaining the code. I have wanted to do that for quite a while, there just was no opportunity. :wink:

I’m also going to rethink the automatic indent level. It would make logs more readable, especially with that function entry/leave style. It could work as global, thread-local state. Still, I need to experiment with the idea a little; I’ll write up how I see it later.

I would agree with GCC 4.6 as the minimum if it gives us broader C++11 support (like the features from the VC11 update). Right now I don’t see any need for a version bump - of the 4.6 features, VC10 supports only range-based for and VC11 gives forward enums.

Along with that, we could implement our own stacktrace, given the unreliability of built-in stacktraces in optimized builds. We could have a stacktrace class:

struct CStackTrace
{
	CStackTrace * prev;     // the previously created CStackTrace on this thread
	std::string fileName;   // source file where the traced scope was entered
	std::string funcName;   // name of the traced function
};

An object of this type is created on the stack by a macro LOG_TRACE or something similar. We need global, thread-local storage, which can also be used for the automatic indent level. In this storage the macro saves a pointer to the currently created CStackTrace object. In the destructor of the object, the pointer in the thread-local storage is assigned the value of the prev CStackTrace. With this method you can build something like a linked list of CStackTrace objects. We can add a count function to determine the indent level (or add a counter variable). A custom exception object can collect this information on construction to build a stacktrace message. The throw and the exception can be wrapped in a macro as well to support showing the line number where the exception occurred. Keep in mind that you must not use the LOG_TRACE macro in performance-intensive operations or wrapper functions.
I would apply the automatic indent level either to all logs or to none. The indent level should be part of the CLogFormatter, which is customizable as well, so we can have a different log layout for file and console output. The indent could be a tab, but in my opinion a single * or a single . (dot) would be OK.
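A rough sketch of how that machinery could look, just to make it concrete. CTraceGuard, currentTrace and buildStackTraceMessage are invented names, the CStackTrace struct from above is repeated so the snippet is self-contained, and C++11 thread_local is assumed (older VC versions would need __declspec(thread) instead):

#include <sstream>
#include <string>

struct CStackTrace
{
	CStackTrace * prev;
	std::string fileName;
	std::string funcName;
};

// One pointer per thread to the innermost CStackTrace object.
thread_local CStackTrace * currentTrace = nullptr;

// RAII helper: links its frame into the per-thread list on scope entry
// and unlinks it again in the destructor when the scope is left.
struct CTraceGuard
{
	CStackTrace frame;

	CTraceGuard(const char * file, const char * func)
	{
		frame.prev = currentTrace;
		frame.fileName = file;
		frame.funcName = func;
		currentTrace = &frame;
	}

	~CTraceGuard()
	{
		currentTrace = frame.prev;
	}
};

// Walking the list yields both the indent level and a stacktrace message,
// e.g. for a custom exception to collect in its constructor.
inline std::string buildStackTraceMessage()
{
	std::ostringstream out;
	for(const CStackTrace * t = currentTrace; t != nullptr; t = t->prev)
		out << t->funcName << " (" << t->fileName << ")\n";
	return out.str();
}

#define LOG_TRACE CTraceGuard traceGuard_(__FILE__, __FUNCTION__)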
Tow, I hope I could give you some ideas. Feel free to implement it!

That’s right. Hopefully we won’t have to wait till the next VS release (2015) :slight_smile:

In regard to revision 3335 ( sourceforge.net/apps/trac/vcmi/changeset/3335 ) I have to say two things. First of all, thanks to stopiccot for fixing the bug :) It was definitely difficult to find. Strangely enough, it didn’t cause a bug when compiling with GCC or VC. Luckily, within the same compilation unit the static initialization order is well defined, so it should be fine now.
The second point is that, normally, objects with relationships to other objects should be initialized after execution has entered the main function, or in a lazy init function (which is called only after main has been entered). The lazy init function approach requires a separate static mutex for each such function to be thread-safe. If you don’t use this approach, you have to be sure that you haven’t forgotten to initialize the logger objects. I think it’s best to keep things as they are now.
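For reference, a minimal sketch of that lazy-init variant; globalLogger and createLogger are placeholder names, and the per-function static mutex is exactly the extra cost mentioned above:

#include <boost/thread/mutex.hpp>
#include <boost/thread/lock_guard.hpp>

class CLogger;
CLogger * createLogger();   // placeholder factory, not real VCMI code

CLogger & globalLogger()
{
	// Every lazy-init accessor needs its own static mutex to be thread-safe.
	static boost::mutex initMutex;
	static CLogger * instance = nullptr;

	boost::lock_guard<boost::mutex> lock(initMutex);
	if(!instance)
		instance = createLogger();
	return *instance;
}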

I made two more changes:

  1. Set the default logging level to trace. VCMI builds are mainly meant for testers, and they should gather as much info as possible.
  2. Fixed an issue with the tracing macros. They were like:
if(tracing_enabled)
{ RAII_tracer object(params); }

That caused the guard to be destructed immediately after the macro’s “if”, not at the end of the enclosing scope. I used unique_ptr to represent a guard object that may or may not be constructed. I’m not sure if it is the best approach. I considered boost::optional but encountered some issues with object copying causing additional logs. It could be made to work, though I found it not worth the effort.
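Roughly, the fix looks like this; TRACE_SCOPE, RAII_tracer and tracing_enabled are illustrative stand-ins here, not the actual VCMI names:

#include <iostream>
#include <memory>

bool tracing_enabled = true;

struct RAII_tracer
{
	explicit RAII_tracer(const char * scope) : scope(scope)
	{
		std::cout << "ENTER " << scope << '\n';
	}
	~RAII_tracer()
	{
		std::cout << "LEAVE " << scope << '\n';
	}
	const char * scope;
};

// The unique_ptr is declared in the enclosing scope, so the guard - if it is
// constructed at all - lives until the end of that scope instead of dying at
// the closing brace of the macro's own if-block.
#define TRACE_SCOPE(name) \
	std::unique_ptr<RAII_tracer> traceGuard_; \
	if(tracing_enabled) \
		traceGuard_.reset(new RAII_tracer(name));

void doSomething()
{
	TRACE_SCOPE("doSomething");
	// ... real work; "LEAVE doSomething" is printed when the function returns
}

int main()
{
	doSomething();
}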

I like the idea. If we stack Trace objects, they naturally create a stacktrace, so we should take advantage of it.

I fully agree.

Thanks! However, I likely won’t touch that soon, too much other work. :frowning:

That’s ok and thanks for the bug-fix!

I still use it and am not going to abandon 2010 until the next version is fully functional :sunglasses:

I would really like to restore the old AI log style with the teal colour. However, no matter what I try, it remains white. Any insight on this?

	"logging" : {
		"console" : {
			"colorMapping" : 
				{
					"color" : "magenta",
					"domain" : "network",
					"level" : "trace"
				},
				{
					"color" : "teal",
					"domain" : "ai",
					"level" : "trace"
				}
			],
			"threshold" : "debug"
		}
	}

Also, the old way with indents was much more readable. But that’s another issue.

During AI profiling with the GUI on and FoW open, I was surprised to discover that the destructor

CLoggerStream::~CLoggerStream

takes almost 25% of execution time.

:open_mouth:

Most of it is the call to CLogger::format, including std::locale.

For AI running without GUI, the logs take over 50% of all execution time.

I believe some optimization here could be nice.

Maybe it would be good to be able to turn off logs completely, and have two exe files - one for those who want to help with debugging, and a second one for those who just want to play.

It works more slowly if the “Adventure AI trace” module is enabled.
And yes, it would be interesting to see how vcmi.exe performs with no logs.

For Ivan - users.livejournal.com/_winnie/380215.html
I’m a bit too lazy to google for an English article about this, but in general I have read several times that bare printf can be faster than iostream. With some tweaks, though, iostream can be practically the same speed as printf.

In SuperTuxKart, a wrapper around printf is used. For more info you can have a look at log.cpp/hpp here: github.com/supertuxkart/stk-cod … /src/utils

Uhm, no. The logs are not slow when they are printed out, but when they are generated internally. Neither the console nor the hard disk has any impact on the actual speed.

Not to mention there are tons of logs from the entire engine, and the AI produces only a small fraction of them. Logs do not slow down only the AI, they slow the whole engine down!

Were you checking the Debug or Release build?
Logging should not take that much time… unless we’re logging a REALLY large amount.

What part of the time is spent on creating and setting the locale? Perhaps we could just format the time manually.

Another way would be to make the logger asynchronous, but that brings its own problems (e.g. the engine can crash and exit without outputting all the logs).
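For illustration, a very rough sketch of what such an asynchronous writer could look like; AsyncLogWriter is an invented name, and the destructor comment marks exactly the weakness mentioned above:

#include <boost/thread.hpp>
#include <deque>
#include <fstream>
#include <string>

class AsyncLogWriter
{
	std::deque<std::string> queue;
	boost::mutex mutex;
	boost::condition_variable cond;
	bool stopping;
	std::ofstream file;
	boost::thread worker;

	void run()
	{
		boost::unique_lock<boost::mutex> lock(mutex);
		while(!stopping || !queue.empty())
		{
			if(queue.empty())
			{
				cond.wait(lock);
				continue;
			}
			std::string line = queue.front();
			queue.pop_front();
			lock.unlock();              // write without holding the lock
			file << line << '\n';
			lock.lock();
		}
	}

public:
	explicit AsyncLogWriter(const std::string & path)
		: stopping(false), file(path.c_str()), worker(&AsyncLogWriter::run, this)
	{
	}

	void push(const std::string & line)
	{
		boost::lock_guard<boost::mutex> lock(mutex);
		queue.push_back(line);
		cond.notify_one();
	}

	// If the engine crashes before this destructor runs, everything still in
	// the queue is lost - the problem pointed out above.
	~AsyncLogWriter()
	{
		{
			boost::lock_guard<boost::mutex> lock(mutex);
			stopping = true;
		}
		cond.notify_one();
		worker.join();
	}
};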

Release (RD), of course.

80-90% of the above. The rest is caused by boost::replace_all; we could probably use replace_first instead, since log messages are trivial.

Still, I’m waiting for beegee to comment on this situation.

With the latest trunk (just updated), things look like this:

GUI on, watching 2 AI players on an XL map. Logging in total actually has higher CPU usage than the AI in total (>16%).


I need details about CLogger::callTargets, as it consumes 22.7%.

Here you go, here’s part of the call hierarchy:

It seems that the most expensive functions belong to the MSVCP110 library :-/


What if we, for example, cached the dateStream inside the CLogFormatter instance? Clear it on each call, but don’t reconstruct it. This should avoid constructing the locale every time.
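A minimal sketch of that caching idea, assuming the formatter builds its timestamp with a boost time_facet; the member and method names here are only illustrative:

#include <sstream>
#include <string>
#include <boost/date_time/posix_time/posix_time.hpp>

class CLogFormatter
{
	std::stringstream dateStream;   // constructed once, together with its locale/facet

public:
	CLogFormatter()
	{
		// Imbue the time facet a single time instead of on every log call.
		dateStream.imbue(std::locale(dateStream.getloc(),
			new boost::posix_time::time_facet("%H:%M:%S")));
	}

	std::string formatTime(const boost::posix_time::ptime & time)
	{
		dateStream.str(std::string());   // clear the contents, keep the locale/facet
		dateStream.clear();              // reset any error flags
		dateStream << time;
		return dateStream.str();
	}
};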

Thanks for the detailed profiling analysis. I improved logging performance a bit. The stringstream with its locale/facet will be reused for logging operations. Additionally, replace_first will be used instead of replace_all.

Warmonger, can you check whether performance improved a little? According to your info, most of the time is spent on logging to file (formatting the log message). The log file grows to around 11 000 lines just for starting one map and playing one turn ingame with 2 AI players. IMO that isn’t much, but if you play for a few ingame weeks it can become too much.