Lately we have been provided with several different solutions for utilizing multicore CPUs. Previously we had few options besides creating multithreaded applications explicitly. By explicitly I mean the developer had to create and manage threads manually.
Both Grand Central Dispatch and Qt Concurrent are based on the idea that the developer submits chunks of work to a library, which distributes those chunks onto several different threads that may be running on several different cores.
Below is a code example showing Qt's approach to distributing chunks of work to different threads. To use the right terminology, each chunk of work is called a task. The code below shows how a string is split by running the splitting method in a separate thread.
run() is an asynchronous call. It hands its task over to a separate thread, returns, and lets the main thread continue execution. Qt uses the concept of a future object to store results from asynchronous method calls. The QFuture object won't actually contain the result of the split until it is queried for it. If the separate thread has finished processing when the future object is queried, it simply returns the result. If it hasn't, the main thread is blocked until the separate thread finishes.
The next piece of code shows Grand Central Dispatch's approach to the same problem. Instead of using future objects to synchronize between different threads, Grand Central Dispatch uses so-called dispatch queues. The developer submits tasks to different queues, and Grand Central Dispatch will dequeue the tasks and place them onto different threads. Tasks in different queues always run concurrently with respect to each other. However, tasks within the same queue may or may not run concurrently with each other, depending on the type of queue. There are two types of queues: serial queues and concurrent queues. In a serial queue a task is not performed until the previously enqueued task has completed. In the code below we create a serial queue.
In Grand Central Dispatch a task is represented by a code block. Code blocks are an extension by Apple to the C language and C++. In languages like Ruby and Python similar constructs are referred to as closures. Essentially they let you define a function within another function. This function has access to all variables defined outside its scope, and any variable referred to in the code block will continue to live even after the outer scope no longer exists, but only as a read-only variable. If you want to write to a variable outside the scope of the block, you have to prefix the variable with the __block qualifier.
dispatch_async() is similar to the run method in Qt. The provided code block is passed on to a separate thread and the function call returns immediately to the main thread. The code splitting the string will run in parallel with all the code that follows the asynchronous dispatch. But since we don't have a future object, how can we safely read the result variable in the main thread? Normally one would have solved these kinds of threading issues with semaphores.
There are two ways to read the result variable safely. One can dispatch a task, either asynchronously or synchronously, to the same queue, which reads the result variable. Since the queue we have created is a serial queue, the reading task will not be executed until all previously enqueued tasks have finished. In this case we have chosen to call
dispatch_sync(). It will block the main thread until the submitted task has been executed. Thus, in this case we know that after the dispatch_sync() call the only running thread is the main thread.
The future metaphor used by Qt might seem simpler to understand and deal with, but it trades flexibility for that simplicity. The biggest problem is that there is no obvious way to control access to shared resources without using semaphores. With Grand Central Dispatch's approach one can take code that used to be in a critical section, place it in a code block and dispatch it to a queue. Instead of protecting access to a shared resource with the same semaphore, one can protect the access by submitting tasks to the same serial queue. This makes sure that all code that accesses a shared resource does so in strict sequence. This is a clear benefit over using semaphores, because with semaphores one does not know in which sequence the threads access a shared resource.