iOS Concurrency with GCD and Operations

Learn how to add concurrency to your apps! Keep your app’s UI responsive to give your users a great experience, and learn how to avoid common concurrency problems, such as race conditions, priority inversion, and deadlock.
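The race condition mentioned above can be sketched in a few lines of GCD. This is a minimal, hypothetical example (the `Counter` type and queue label are invented here, not from the course): without the serial queue, concurrent increments of shared state can interleave and lose updates; funnelling every mutation through one serial queue is a common fix.

```swift
import Dispatch

// A counter that serializes all access to its shared state through
// a serial DispatchQueue, avoiding a data race on `value`.
final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "counter.serial") // serial by default

    func increment() {
        queue.sync { value += 1 }   // only one block touches `value` at a time
    }

    var current: Int {
        queue.sync { value }
    }
}

let counter = Counter()
// Hammer the counter from many threads at once.
DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
    counter.increment()
}
print(counter.current) // 1000 — an unsynchronized `value += 1` could print less
```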

This is a companion discussion topic for the original entry at

Hi, my understanding is that the Metal framework can perform computational operations using the GPU in parallel with operations executing using the CPU. This course seems to be focusing on task parallelism; whereas Metal focuses on data parallelism. Do you have any material that covers data parallelism using both task and data parallelism in the same application?

I don’t know of any. Nor does Caroline Begbie, who co-authored our Metal book.

Maybe @hollance can help? His website:

I’m not really sure I understand what is being asked. If you use Metal in your application, you are by definition doing a data-parallel thing (on the GPU). Typically the rest of your app wouldn’t sit around waiting for the GPU to finish, so you’d also be doing a task-parallel thing (the CPU keeps handling UI, etc. while the GPU is running).
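That division of labor can be sketched with plain GCD. A simplified, hypothetical example (not from the course): the background block and the `sleep` stand in for encoding and committing a Metal command buffer, and the semaphore stands in for its completed handler; meanwhile the CPU keeps doing other work instead of blocking.

```swift
import Dispatch
import Foundation

// Stand-in for a GPU job: in a real app you would encode a compute pass,
// call commandBuffer.commit(), and signal the semaphore from
// commandBuffer.addCompletedHandler { _ in ... }.
let gpuDone = DispatchSemaphore(value: 0)

DispatchQueue.global(qos: .userInitiated).async {
    Thread.sleep(forTimeInterval: 0.1) // pretend data-parallel GPU work
    gpuDone.signal()
}

// Meanwhile the CPU stays busy with task-parallel work
// (handling UI, preparing the next frame, and so on).
var unitsOfCPUWork = 0
while gpuDone.wait(timeout: .now()) == .timedOut {
    unitsOfCPUWork += 1
}
print("CPU did \(unitsOfCPUWork) units of work while the 'GPU' ran")
```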

Hi, thanks for the responses. I noticed that I made a typing mistake in my question. What I am asking is about combining data parallelism and task parallelism in the same application. From your answers it seems like they are handled quite separately. I was thinking that there may be best practices and common pitfalls for doing both in the same application.

@bluepill Can you give a concrete example of what you’re trying to achieve?

I am thinking of a single application where task concurrency is beneficial for creating the data, data concurrency is then beneficial for deriving a result from that data, and a final processing step may again benefit from task concurrency. Are there good and bad ways to build these kinds of applications to maximise system resource usage so outcomes are fast? A basic question might be: “Is it a bad idea to run a compute kernel from each of my GCD threads, assuming each thread has a reasonable amount of data?”

The only way to find out is to try it. There is some overhead in launching a GPU job, so the amount of data parallelism has to be worth it. The GPU will also decide for itself how and when to run all these different jobs. Depending on what you’re trying to do, it might be better to combine all the data from the different threads and run one big GPU job rather than a bunch of small ones.
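The batching idea can be sketched on the CPU side with GCD alone. A hypothetical example (the chunk sizes and the `* 2` “kernel” are made up; `concurrentPerform` here stands in for a single large GPU dispatch): the producers each fill their own disjoint slice of one shared buffer, and then one big data-parallel pass runs over the combined data instead of one small launch per producer.

```swift
import Dispatch

let chunkSize = 1_000
let producerCount = 4

// Stage 1 (task-parallel): each producer fills its own disjoint slice
// of a single preallocated buffer — no two producers touch the same index.
var input = [Float](repeating: 0, count: producerCount * chunkSize)
input.withUnsafeMutableBufferPointer { buf in
    DispatchQueue.concurrentPerform(iterations: producerCount) { p in
        for i in 0..<chunkSize {
            buf[p * chunkSize + i] = Float(p + i)
        }
    }
}

// Stage 2 (data-parallel): one big pass over the combined data,
// rather than `producerCount` small launches.
var output = [Float](repeating: 0, count: input.count)
output.withUnsafeMutableBufferPointer { out in
    DispatchQueue.concurrentPerform(iterations: input.count) { i in
        out[i] = input[i] * 2   // the "kernel"
    }
}

print(output.count) // 4000
```

Writing into disjoint regions of one buffer is what makes the concurrent stage safe here; on the GPU, the analogous win is paying the launch overhead once for the combined data instead of once per small job.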

I just finished this tutorial and I can only say it was GREAT, thanks for explaining everything so well and using great examples :grin:

Good work!


Thanks Roberto! I’m glad you found it useful :smiley:
