I think the underlying problem you want to solve is that a GPU feeder process should allocate memory local to the CPU to which the GPU is attached. That way, DMA to/from the GPU stays local to this CPU. Otherwise, such DMA would involve both CPUs and the QPI link between them. (Speaking theoretically, not from my own experience.)

(By the way, some server mainboard makers offer special "single PCIe root" boards for multi-GPU computing applications in which the GPUs need to communicate with each other. I suppose these are simply boards with PCIe switches on them. However, GPU-to-GPU communication is not required in any Distributed Computing project, as far as I know.)

So you'd like to configure the processor affinity of GPU feeder processes. I am not aware of direct support for processor affinity in boinc-client (or boincmgr, boinccmd, boinctasks etc.), which means you need an external tool. For controlling processor affinity on Windows, I have several times read people recommending Process Lasso. I haven't tried it myself yet and haven't researched its precise capabilities. Notably, I wonder whether it is possible at all to have Process Lasso detect, without manual intervention, which GPU feeder processes should run on CPU0 and which should run on CPU1.

Maybe this is possible if you have separate boinc-client instances for this. Process Lasso then needs to be instructed that all new processes launched from one client shall be bound to CPU0, and all new processes launched from the other client go to CPU1.

If multiple client instances are indeed an element of solving processor affinity of GPU feeder processes, then I would go a step further and run those two client instances for GPU projects plus a third client instance for CPU projects. This would have at least one benefit: you would no longer have to write app_config.xml files for each and every GPU application just to set a CPU usage of 0.001. Instead, just restrict boinc-client instance #3 to as few CPUs as desired. Instances #1 and #2 don't need a restriction on how many CPUs they can use, as long as you let these instances run only GPU projects.

However, you still need to tell instance #1 to use only GPU0..GPU2, and instance #2 to use only GPU3 (or however these are numbered). And here comes a catch: there is support in boinc-client to configure this, and it works most of the time, but not all of the time. In the working directory of client instance #1, you add the corresponding GPU exclusion to cc_config.xml. But as I said, it works mostly, not always. One application of a project which I don't remember right now is hardwired to use GPU0 only. And Moo!Wrapper is hardwired to use all GPUs at once, fed by a single process. I think the only possibility to run such applications as intended is on correspondingly simple host hardware, or maybe within virtual machines with dedicated GPU pass-through (if this is possible at all for GPU computing; again, I speak entirely theoretically, not from experience).

Some more thoughts about CPU affinity: besides the problem of binding a process to the desired CPU socket, you can also use CPU affinity to avoid the negative effects of Hyperthreading when that becomes a problem, without having to switch HT off in the BIOS.
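The cc_config.xml snippet referred to above did not survive in this copy of the post. As a sketch of the kind of exclusion meant here: boinc-client's cc_config.xml supports `<ignore_nvidia_dev>` (and the `<ignore_ati_dev>`/`<ignore_intel_dev>` counterparts) to hide a GPU from one client instance. Assuming instance #1 should skip an NVIDIA card numbered 3 (the device number is my example, not from the post), it might look like this:

```xml
<cc_config>
  <options>
    <!-- Instance #1: ignore NVIDIA device 3, so it only uses GPU0..GPU2.
         Instance #2 would conversely ignore devices 0, 1 and 2. -->
    <ignore_nvidia_dev>3</ignore_nvidia_dev>
  </options>
</cc_config>
```

The file goes into the client instance's working directory and is read on client start or on "re-read config files".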
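For illustration of what an affinity tool does under the hood, here is a minimal sketch (mine, not from the post above): on Linux, processor affinity can be set directly from Python's standard library via `os.sched_setaffinity`, no external tool needed. On Windows this call does not exist, which is why a tool like Process Lasso (or the Win32 `SetProcessAffinityMask` API) comes into play.

```python
import os

def pin_to_cpus(pid, cpus):
    """Bind process `pid` (0 = the calling process) to the given set of
    logical CPUs and return the resulting affinity set. Linux-only."""
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)

# Example: restrict the current process to the first logical CPU it is
# currently allowed to run on (e.g. one core of socket 0).
first_cpu = min(os.sched_getaffinity(0))
print(pin_to_cpus(0, {first_cpu}))
```

An external tool like Process Lasso does essentially this for every new process matching a rule; the open question above is whether its rules can distinguish feeder processes per GPU automatically.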