by Michael S. Kaplan, published on 2006/04/24 04:01 -04:00, original URI: http://blogs.msdn.com/b/michkap/archive/2006/04/24/581700.aspx
It is easy to take cheap shots at documentation writers for their mistakes.
But that usually places the blame in the wrong place, since in many cases it is the actual functionality that was confusing in the first place. So if the writer's attempts to clarify that which is hard to explain fall short, then it falls upon the PM/developer/tester of the functionality to step up and help with that....
(In fact, this week we are doing a doc review for Vista to try to help with this!)
Anyway, way back near the beginning of this whole blogging adventure for me, I posted about API Consistency and Developer Comfort. In it I talked about some of those consistencies between small groups of functions within the Win32 API.
It is a comfort thing, as I said. But sometimes it can be a burden.
Let's take return values, for example. In most of the NLS APIs that do any kind of conversion, transformation, or retrieval, the pattern is simple: on success, the function returns the number of characters (or bytes) written to the destination buffer; if the destination size passed in is zero, it instead returns the required size without writing anything; and on failure it returns zero, with GetLastError supplying the specific error.
It is simple, sure. And pretty consistently applied across most of the functions.
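To make the pattern concrete, here is a minimal sketch of mine (not from the original post) using WideCharToMultiByte, one of the many functions that follow it; the ToUtf8 helper name is made up for illustration:

```c
#include <windows.h>
#include <stdlib.h>

// Made-up helper: convert a null-terminated UTF-16 string to UTF-8 using the
// classic two-call NLS pattern. Caller frees the result.
char *ToUtf8(const wchar_t *src)
{
    // First call: a zero-sized destination asks for the required size
    // (in bytes, including the terminating null since cchWideChar is -1).
    int cb = WideCharToMultiByte(CP_UTF8, 0, src, -1, NULL, 0, NULL, NULL);
    if (cb == 0)
        return NULL;  // call GetLastError() for the specific error

    char *dst = (char *)malloc(cb);
    if (dst == NULL)
        return NULL;

    // Second call: the actual conversion, into the buffer we just sized.
    if (WideCharToMultiByte(CP_UTF8, 0, src, -1, dst, cb, NULL, NULL) == 0)
    {
        free(dst);
        return NULL;
    }
    return dst;
}
```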
Of course, one problem with this pattern is that you have to call the function at least twice to get the size to use, or else risk allocating too much. Another is that there is a performance hit to getting the exact size. And yet another is that, in order to report the exact size without touching the destination buffer, the function may have to allocate internally.
So, when Shawn did the dev work on the NormalizeString function, a different pattern was used:
On success, the function returns the length of the normalized string in the destination buffer.
If the destination buffer is NULL or if cwDstLength is zero, the return value is the estimated buffer length required to do the actual conversion.
If the string in the input buffer is null-terminated or if cwSrcLength is -1, then the string written to the destination buffer will be null-terminated and the returned count of characters will include the terminating null character.
If a problem occurs, the function return will be less than or equal to zero. The application should call GetLastError, which will return one of the following values:
Value | Meaning |
---|---|
ERROR_SUCCESS | No error; this occurs when the actual size of the output string is zero. |
ERROR_INSUFFICIENT_BUFFER | Need a bigger destination buffer. The return value is the negative of a better estimated guess of the required length; try the conversion again with a buffer of -(Return Value) size. |
ERROR_INVALID_PARAMETER | Input pointers were incorrect or the normalization form was incorrect. |
ERROR_NO_UNICODE_TRANSLATION | Invalid Unicode was found in the string. The return value is the negative of the index of the location of the error in the input string. |
ERROR_BADDB | The configuration registry database is corrupt. |
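To see what those rules mean in practice, here is a minimal sketch (mine, not Shawn's) of the retry loop the pattern implies; the NormalizeToNfc helper name is made up, and you would link with Normaliz.lib:

```c
#include <windows.h>
#include <stdlib.h>

// Made-up helper: normalize a null-terminated string to form C, following
// the documented pattern above. Caller frees the result.
WCHAR *NormalizeToNfc(const WCHAR *src)
{
    // A NULL destination returns an *estimate* of the required length.
    int cwDst = NormalizeString(NormalizationC, src, -1, NULL, 0);
    if (cwDst <= 0)
        return NULL;

    for (;;)
    {
        WCHAR *dst = (WCHAR *)malloc(cwDst * sizeof(WCHAR));
        if (dst == NULL)
            return NULL;

        int cwActual = NormalizeString(NormalizationC, src, -1, dst, cwDst);
        if (cwActual > 0)
            return dst;  // null-terminated, since cwSrcLength was -1

        free(dst);
        if (GetLastError() != ERROR_INSUFFICIENT_BUFFER)
            return NULL;  // bad parameter, invalid Unicode, etc.

        cwDst = -cwActual;  // the negated return is a better estimate; retry
    }
}
```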
There is indeed a lot that is different in there, and you will probably notice that it pretty aggressively tries to deal with the three problems I mentioned. In fact, there are really only two real problems with it that I can see:
Now if anyone decides that the documentation for this function is confusing, I think it is pretty obvious that it has more to do with the underlying functionality than the actual SDK topic. :-)
Everything up until now has been an introduction to a completely different example of overloading meanings that is much, much worse!
Let's talk about the LOGFONT structure.
Its most important characteristic (for our purposes) is that it is not an actual font. It is a description of the characteristics of either an actual font (e.g. if it is returned by the GetObject function when it is passed an HFONT) or of what a developer might want from a font (e.g. if it is being passed to the CreateFontIndirect function, or to the EnumFontFamiliesEx function -- though for the latter only a few members are looked at).
And its one member that is most thoroughly overloaded, to the point of confusion, is the lfHeight member:
Specifies the height, in logical units, of the font's character cell or character. The character height value (also known as the em height) is the character cell height value minus the internal-leading value. The font mapper interprets the value specified in lfHeight in the following manner.
Value | Meaning |
---|---|
> 0 | The font mapper transforms this value into device units and matches it against the cell height of the available fonts. |
0 | The font mapper uses a default height value when it searches for a match. |
< 0 | The font mapper transforms this value into device units and matches its absolute value against the character height of the available fonts. |
For all height comparisons, the font mapper looks for the largest font that does not exceed the requested size.
This mapping occurs when the font is used for the first time.
For the MM_TEXT mapping mode, you can use the following formula to specify a height for a font with a specified point size:
lfHeight = -MulDiv(PointSize, GetDeviceCaps(hDC, LOGPIXELSY), 72);
So basically, there are three different ways that this member can specify the height (depending on whether the 32-bit value in lfHeight is greater than, less than, or equal to zero). None of these maps to what most humans (developer or otherwise) would use to specify a font size.
And, to add insult to injury, in attempting to translate between what those humans would want and what the member specifies, the documentation gives a formula that most people do not understand, one that depends on functionality it does not explain (the MM_TEXT mapping mode). You can find out what the MM_TEXT mapping mode is by looking at the GetMapMode and SetMapMode functions. Though those take an HDC, and it may be confusing to many people what happens to fonts in different mapping modes, since the font and the mapping mode are set in such disconnected contexts....
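For what it is worth, here is a minimal sketch of how that formula tends to get used in practice; the CreateFontFromPointSize helper name is made up, and it assumes the default MM_TEXT mapping mode:

```c
#include <windows.h>

// Made-up helper: create an HFONT of a given point size using the formula
// from the lfHeight documentation (assumes the default MM_TEXT mapping mode).
HFONT CreateFontFromPointSize(HDC hdc, int pointSize, const WCHAR *faceName)
{
    LOGFONTW lf = {0};

    // Negative lfHeight: the absolute value is matched against the
    // character height rather than the cell height.
    lf.lfHeight = -MulDiv(pointSize, GetDeviceCaps(hdc, LOGPIXELSY), 72);
    lf.lfCharSet = DEFAULT_CHARSET;
    lstrcpynW(lf.lfFaceName, faceName, LF_FACESIZE);

    return CreateFontIndirectW(&lf);
}
```

So a 12-point font on a typical 96-DPI display ends up with an lfHeight of -16, a number no human would ever have picked on their own.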
But let's get back to those three different usages of the lfHeight member. They also have descriptions that start with "The font mapper..." which would indicate they forgot that the LOGFONT is also used to describe a font after the mapping has occurred. Oops?
Another oops along the same line -- there is no indication in the docs about which type of return is expected if you use GetObject with an HFONT. Or if the expectation is that the result would be inconsistent or not. Or how to definitively get the size in such a case.
And then of course, two of them talk about device units (which are also never defined within the topic). You can look at the topic Device vs. Design Units to get some clarity around those, though it is of course completely unclear what insight talking about them here provides. How about, since the LOGFONT is in no way tied to a device, not mentioning them here at all?
Ok, never mind all that communal critiquing. Let's take those three cases apart, one at a time:
When it is zero, "The font mapper uses a default height value when it searches for a match." What default value? How is it determined? Hmmm... a quick look at the source is very suggestive of a size of 12pt being considered a default, at least since NT 4.0. Hopefully it is never returned when one is querying font information like in a GetObject call. :-)
When it is less than zero, "The font mapper transforms this value into device units and matches its absolute value against the character height of the available fonts." Once again, an undefined phrase, "character height" -- what is meant there? Maybe they mean the TEXTMETRIC/NEWTEXTMETRIC's tmHeight member, which "Specifies the height (ascent + descent) of characters." That kind of makes sense, right? Unfortunately, this would be incorrect -- the calculation in this case is based on the UnitsPerEm mentioned in that Device vs. Design Units topic, "the em square size for the font". And that topic even says how to get it -- by using the ntmSizeEM member of the NEWTEXTMETRIC structure (there is a sketch of retrieving it right after these three cases).
When it is greater than zero, "The font mapper transforms this value into device units and matches it against the cell height of the available fonts." And of course the cell height is also not defined. Though this one actually is based on the TEXTMETRIC/NEWTEXTMETRIC's tmHeight member, which "Specifies the height (ascent + descent) of characters" in the font.
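As promised above, here is a sketch (mine, with a made-up GetEmSquare helper) of retrieving that ntmSizeEM value -- it has to come through an EnumFontFamiliesEx callback, since for TrueType fonts the TEXTMETRIC pointer handed to the callback really points at a NEWTEXTMETRICEX:

```c
#include <windows.h>

// Callback for the made-up GetEmSquare helper below: for TrueType fonts the
// TEXTMETRIC pointer really points at a NEWTEXTMETRICEX, whose first member
// is the NEWTEXTMETRIC that carries ntmSizeEM.
static int CALLBACK EmSquareProc(const LOGFONTW *lf, const TEXTMETRICW *tm,
                                 DWORD fontType, LPARAM lParam)
{
    if (fontType & TRUETYPE_FONTTYPE)
    {
        *(UINT *)lParam = ((const NEWTEXTMETRICW *)tm)->ntmSizeEM;
        return 0;  // found it; stop enumerating
    }
    return 1;  // keep looking
}

// Made-up helper: get the em square size (UnitsPerEm) for a font face.
UINT GetEmSquare(HDC hdc, const WCHAR *faceName)
{
    LOGFONTW lf = {0};
    UINT sizeEM = 0;

    lf.lfCharSet = DEFAULT_CHARSET;
    lstrcpynW(lf.lfFaceName, faceName, LF_FACESIZE);
    EnumFontFamiliesExW(hdc, &lf, (FONTENUMPROCW)EmSquareProc,
                        (LPARAM)&sizeEM, 0);
    return sizeEM;
}
```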
So given all of the above, at least this lfHeight member can theoretically be figured out, too.
Though it is not particularly intuitive, is it?
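And to close the loop on the GetObject question above, here is a hedged sketch of recovering a point size from an HFONT -- hedged because it assumes a negative lfHeight in device units and an MM_TEXT device context, neither of which the documentation actually promises (the GetPointSize helper name is made up):

```c
#include <windows.h>

// Made-up helper: recover an approximate point size from an HFONT. Assumes a
// negative lfHeight in device units and an MM_TEXT device context -- neither
// of which the documentation actually promises.
int GetPointSize(HDC hdc, HFONT hFont)
{
    LOGFONTW lf;

    if (GetObjectW(hFont, sizeof(lf), &lf) == 0 || lf.lfHeight >= 0)
        return 0;  // no LOGFONT, or a height we cannot safely interpret

    // Invert the MulDiv formula from the lfHeight documentation.
    return MulDiv(-lf.lfHeight, 72, GetDeviceCaps(hdc, LOGPIXELSY));
}
```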
On Friday, typography PM Judy came by my office to explain to me that this was the topic that her fellow typography PM Carolyn had mentioned was really confusing. It should be no surprise to anyone that even people who work in typography find this member really confusing -- there is no one outside of a few people over in GDI who would not be confused by this very low-level member that has been partially exposed in this not-quite-so-low-level structure.
I mean, how often can we really expect that people will specify it correctly, except when they use the documented formula that maps from the point size of the font? And in that case, why did they even bother to expose it any other way?
It does help things like old-time KB article 74299, which actually explains the things I did above in more practical terms, and which is still just as valid beyond NT 4.0, despite what the "Applies To" section claims.
And it helps third parties like Dr. Dobb's come up with useful topics like Font Creation and Rounding Differences when trying to understand why the MulDiv function is needed here (and the problems in the area help people understand why this one usage of MulDiv is easier to find than documentation on the function itself!).
This would be a case where the managed world made it all a little easier, not just in the GDI+ classes around fonts but in helpful topics like this one on interrogating fonts....
So, as bad as the documentation here may be, I think it is fair to blame the actual implementation for any actual confusion here....
This post brought to you by "ཛྷ" (U+0f5c, a.k.a. TIBETAN LETTER DZHA -- a character that is not afraid to use its descender powers!)