The logic of how things work in Cocoa is quite different from that of most other OO toolkits, especially when it comes to graphics. Here I will compare how one does some basic graphics operations in Cocoa as opposed to Qt.
I might have chosen to compare Cocoa to the Java or .NET frameworks, but I am most familiar with Qt. Qt's graphics model is quite similar to those used in Java and .NET, so the differences explained here should be of interest to Java and C# programmers as well.
Introduction
Cocoa and Qt have similar image models for graphics. Both can describe images independently of the output device (the device on which the image is displayed), and information about points in drawings (vertices) is retained in the model so that different transformations can be applied to it (e.g. affine transformations).
Programming models
However, their programming models are different. A programming model can be stateful or stateless, and procedure oriented or object oriented. E.g. OpenGL has a stateful and procedure-oriented programming model, while DirectX has a stateless and object-oriented programming model.
In a stateful programming model all drawing operations are affected by the state of the graphics system. E.g. a command to draw a box might be drawBox(x, y, width, height). This would draw a box with the current color. The color is then a state in the graphics system. In a stateless system the command would be drawBox(x, y, width, height, color).
The reason why many people find Cocoa confusing at first is that it uses a stateful and object-oriented programming model. This is uncommon, as Qt, Java and .NET all use a stateless object-oriented programming model. In this respect Cocoa has a lot of similarities with OpenGL.
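To make the difference concrete, here is a tiny sketch contrasting the two styles; setColor and drawBox are made-up functions, not real Cocoa or Qt calls:
// Stateful style: the color lives in the graphics system, and
// drawBox implicitly uses whatever color was last set.
setColor(blue);
drawBox(x, y, width, height);

// Stateless style: everything the call depends on is an argument.
drawBox(x, y, width, height, blue);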
Setting drawing target
In Qt, drawing commands change the image or graphics of a QPaintDevice. Typically one of its subclasses, like QWidget, is used. Below is an example of how one can draw a line of thickness 2 in blue through three points p1, p2 and p3 on some arbitrary widget w returned by getCanvas().
QWidget* w = getCanvas(); // Arbitrary widget to draw on
QPainter paint(w);        // Make the widget the painter's target
QPen pen(Qt::blue);       // Blue pen...
pen.setWidthF(2.0);       // ...of width 2
paint.setPen(pen);
QPoint p1(x1, y1);
QPoint p2(x2, y2);
QPoint p3(x3, y3);
paint.drawLine(p1, p2);
paint.drawLine(p2, p3);
As one can see, the surface to draw on is given explicitly to the QPainter object, as are the pen and color to use. The graphics system itself holds no state. The result of the drawing operations is determined exclusively by the objects involved and their attributes.
Below is an example of how to do the same in Cocoa:
NSBezierPath* path = [[NSBezierPath alloc] init];
[path setLineWidth:2.0];
NSPoint p1 = NSMakePoint(x1, y1);
NSPoint p2 = NSMakePoint(x2, y2);
NSPoint p3 = NSMakePoint(x3, y3);
[path moveToPoint:p1];
[path lineToPoint:p2];
[path lineToPoint:p3];
[view lockFocus];          // Make the view the current drawing target
[[NSColor blueColor] set]; // Set the graphics system's current color to blue
[path stroke];
[view unlockFocus];
As one can see there are some noticeable differences. There is no single object that represents the state of the graphics system like QPainter. NSBezierPath might look like one, but it merely keeps track of points on a path and how the lines connecting those points should be drawn. No color information is passed to NSBezierPath in the form of an NSColor object, nor is the surface on which to draw specified as in the Qt example.
Instead the state of the graphics system itself is changed. [color set] is used to change the color state of the graphics system. Likewise [view lockFocus] is used to change the current drawing area in the graphics system. Usually, when a view's method to redraw its area (drawRect:) is called, the drawing area has already been set to that view, so most of the time the user does not have to call lockFocus himself.
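For instance, a custom view can do all of its drawing inside drawRect: without ever touching lockFocus. A minimal sketch (MyView and the point coordinates are made up for illustration):
#import <Cocoa/Cocoa.h>

@interface MyView : NSView
@end

@implementation MyView
// Called by the framework with focus already locked on this view.
- (void)drawRect:(NSRect)dirtyRect
{
    [[NSColor blueColor] set];   // Color is state in the graphics system
    NSBezierPath* path = [NSBezierPath bezierPath];
    [path setLineWidth:2.0];
    [path moveToPoint:NSMakePoint(10.0, 10.0)];
    [path lineToPoint:NSMakePoint(50.0, 80.0)];
    [path lineToPoint:NSMakePoint(90.0, 10.0)];
    [path stroke];               // Drawn into this view, no lockFocus needed
}
@end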
Drawing bitmaps
One area that is difficult to understand when one gets started with Cocoa graphics is how to deal with bitmaps. It is confusing precisely because, when one is not used to stateful programming models for graphics, it is not obvious how one draws into a bitmap.
Both Qt and Cocoa have two different image formats. However, comparing them is difficult because there is no one-to-one correspondence in functionality. Qt has QPixmap and QImage. QPixmap is used for drawing images that will go on screen. Because on-screen graphics can be created in many different ways, individual pixel access is not possible. QImage, on the other hand, exists off screen in memory and allows individual pixel manipulation. However, to put a QImage on screen it has to be converted to a QPixmap.
In Cocoa the situation is similar. NSImage corresponds vaguely to QPixmap. You cannot access its individual pixels, but you can draw into it and display it on screen. NSBitmapImageRep corresponds roughly to QImage. You can access its pixels individually and set up exactly how the pixels are stored, how many color components are used, etc. However, until recently you could not draw directly into it; instead you would draw into an NSImage. The reason is that an NSImage can represent an offscreen window, while an NSBitmapImageRep is just a bunch of bytes in memory. Window areas can be drawn on by graphics hardware, so they can be represented in any number of ways: they could exist in special graphics memory or on a remote machine. Thus accessing individual pixels of an NSImage makes no sense, but issuing drawing commands does, because the graphics hardware implements those commands and executes them fast. Conversely, drawing into an NSBitmapImageRep does not make sense, because it is not accessible to the graphics hardware that is meant to do the drawing.
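To make this concrete, here is a minimal sketch of drawing into an NSImage offscreen; the size and the rectangle are arbitrary values chosen for illustration:
NSImage* image = [[NSImage alloc] initWithSize:NSMakeSize(100.0, 100.0)];
[image lockFocus];                              // The image is now the drawing target
[[NSColor redColor] set];
NSRectFill(NSMakeRect(20.0, 20.0, 60.0, 60.0)); // Draw a filled square into the image
[image unlockFocus];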
Below is an example of how to manipulate the pixels in a QPixmap:
QImage img = pixmap.toImage(); // Copy the pixmap into a pixel-addressable image
for (int y = 0; y < img.height(); ++y) {
    for (int x = 0; x < img.width(); ++x) {
        QRgb pix = img.pixel(x, y);
        doSomethingWithPixel(&pix);
        img.setPixel(x, y, pix);
    }
}
pixmap = QPixmap::fromImage(img); // Put the modified pixels back into the pixmap
The code below shows how to do the same thing in Cocoa, that is, to manipulate the pixels in an NSImage. Notice how you must lockFocus on the NSImage for the pixel grabbing to occur on that particular image.
NSRect rect = NSMakeRect(0, 0, [pixmap size].width, [pixmap size].height);
[pixmap lockFocus]; // Make pixmap target for drawing commands
NSBitmapImageRep* img = [[NSBitmapImageRep alloc] initWithFocusedViewRect:rect];
[pixmap unlockFocus];
for (int y = 0; y < [img pixelsHigh]; ++y) {
    for (int x = 0; x < [img pixelsWide]; ++x) {
        NSColor* pix = [img colorAtX:x y:y];
        doSomethingWithPixel(pix);
        [img setColor:pix atX:x y:y];
    }
}
[pixmap addRepresentation:img]; // Add the modified bitmap as a representation of the image
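As a side note, going through an NSColor object for every pixel is slow. NSBitmapImageRep also exposes its raw storage through bitmapData. The sketch below assumes a simple packed 8-bit-per-sample format, which is not guaranteed; check bitsPerSample, samplesPerPixel and bytesPerRow before relying on a layout like this:
unsigned char* data = [img bitmapData];    // Raw pixel bytes
NSInteger bytesPerRow = [img bytesPerRow];
NSInteger samples = [img samplesPerPixel]; // e.g. 4 for RGBA
for (NSInteger y = 0; y < [img pixelsHigh]; ++y) {
    unsigned char* row = data + y * bytesPerRow;
    for (NSInteger x = 0; x < [img pixelsWide]; ++x) {
        unsigned char* pixel = row + x * samples;
        pixel[0] = 255 - pixel[0];         // Invert the first component, as an example
    }
}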
Drawing images on screen
To round off, I need to show how to draw images on screen, or more specifically in a window: a QWidget in Qt's case and an NSView in Cocoa's case. Below we see how to draw the rectangular area of an image given by srcRect into the rectangular area of a window (QWidget) given by dstRect.
QPixmap pixmap = getImage(); // Return some image we want to draw
QPainter paint(w);
QRectF dstRect(x1, y1, width1, height1); // Area of window to draw into
QRectF srcRect(x2, y2, width2, height2); // Area of image to draw
paint.drawPixmap(dstRect, pixmap, srcRect);
Below is the corresponding code for Cocoa. Notice that the drawing method is called on the NSImage itself. This does not, however, mean that the drawing is performed inside the image, as Qt, Java and C# programmers might easily assume. The target for drawing commands is always the surface/canvas that has focus.
NSImage* pixmap = getImage(); // Return some image we want to draw
NSRect dstRect = NSMakeRect(x1, y1, width1, height1); // Area of window to draw into
NSRect srcRect = NSMakeRect(x2, y2, width2, height2); // Area of image to draw
[w lockFocus];
[pixmap drawInRect:dstRect fromRect:srcRect operation:NSCompositeCopy fraction:1.0];
[w unlockFocus];
Final thoughts
It should be clear that drawing in Cocoa takes some getting used to for Qt, Java or C# programmers. I have only scratched the surface in this post. From my own experience using both Java and Qt, it is a lot easier to get up to speed on graphics in Qt and Java at first. However, as is typical with everything else in Cocoa, it might not be fast to get started, but when you delve into more complicated things, that is when Cocoa starts to shine. Likewise with graphics: my own impression from using it (although I am no wizard in Cocoa graphics) is that when it comes to more complicated graphics, Cocoa is much more capable than Qt.
It is also my impression that for short examples like those given here, Qt and Java usually require less code, but when the graphics gets more complicated, less code is required in Cocoa.
However, I must say that dealing with bitmap graphics seems overly complicated at times in Cocoa, probably owing a lot to the fact that NeXTSTEP had entirely vector-based graphics. The graphics system would send PostScript drawing commands to the window server rather than individual pixels. Basically, vector graphics was made easy at the expense of pixel manipulation and raster graphics.