<?xml version="1.0" encoding="UTF-8" ?>
<class name="WorkerThreadPool" inherits="Object" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../class.xsd">
	<brief_description>
		A singleton that allocates some [Thread]s on startup, used to offload tasks to these threads.
	</brief_description>
	<description>
		The [WorkerThreadPool] singleton allocates a set of [Thread]s (called worker threads) on project startup and provides methods for offloading tasks to them. This can be used for simple multithreading without having to create [Thread]s.
		Tasks hold the [Callable] to be run by the threads. [WorkerThreadPool] can be used to create regular tasks, which will be taken by one worker thread, or group tasks, which can be distributed between multiple worker threads. Group tasks execute the [Callable] multiple times, which makes them useful for iterating over a lot of elements, such as the enemies in an arena.
		Here's a sample of how to offload an expensive function to worker threads:
		[codeblocks]
		[gdscript]
		var enemies = [] # An array to be filled with enemies.

		func process_enemy_ai(enemy_index):
			var processed_enemy = enemies[enemy_index]
			# Expensive logic...

		func _process(delta):
			var task_id = WorkerThreadPool.add_group_task(process_enemy_ai, enemies.size())
			# Other code...
			WorkerThreadPool.wait_for_group_task_completion(task_id)
			# Other code that depends on the enemy AI already being processed.
		[/gdscript]
		[csharp]
		private List&lt;Node&gt; _enemies = new List&lt;Node&gt;(); // A list to be filled with enemies.

		private void ProcessEnemyAI(int enemyIndex)
		{
			Node processedEnemy = _enemies[enemyIndex];
			// Expensive logic here.
		}

		public override void _Process(double delta)
		{
			long taskId = WorkerThreadPool.AddGroupTask(Callable.From&lt;int&gt;(ProcessEnemyAI), _enemies.Count);
			// Other code...
			WorkerThreadPool.WaitForGroupTaskCompletion(taskId);
			// Other code that depends on the enemy AI already being processed.
		}
		[/csharp]
		[/codeblocks]
		The above code relies on the number of elements in the [code]enemies[/code] array remaining constant during the multithreaded part.
		[b]Note:[/b] Using this singleton could affect performance negatively if the task being distributed between threads is not computationally expensive.
	</description>
	<tutorials>
		<link title="Using multiple threads">$DOCS_URL/tutorials/performance/using_multiple_threads.html</link>
		<link title="Thread-safe APIs">$DOCS_URL/tutorials/performance/thread_safe_apis.html</link>
	</tutorials>
	<methods>
		<method name="add_group_task">
			<return type="int" />
			<param index="0" name="action" type="Callable" />
			<param index="1" name="elements" type="int" />
			<param index="2" name="tasks_needed" type="int" default="-1" />
			<param index="3" name="high_priority" type="bool" default="false" />
			<param index="4" name="description" type="String" default="&quot;&quot;" />
			<description>
				Adds [param action] as a group task to be executed by the worker threads. The [Callable] will be called a number of times based on [param elements], with the first thread calling it with the value [code]0[/code] as a parameter, and each consecutive execution incrementing this value by 1 until it reaches [code]elements - 1[/code].
				The number of threads the task is distributed to is defined by [param tasks_needed], where the default value [code]-1[/code] means it is distributed to all worker threads. [param high_priority] determines if the task has a high priority or a low priority (default). You can optionally provide a [param description] to help with debugging.
				Returns a group task ID that can be used by other methods.
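				For example, a minimal sketch that limits a group task to two worker threads and attaches a description (the [code]generate_chunk[/code] function and the values used are illustrative):
				[codeblock]
				func generate_chunk(chunk_index):
					# Expensive per-element logic...
					pass

				func _ready():
					# Run 64 iterations on at most 2 worker threads, at low priority.
					var group_id = WorkerThreadPool.add_group_task(generate_chunk, 64, 2, false, "Generate chunks")
					WorkerThreadPool.wait_for_group_task_completion(group_id)
				[/codeblock]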
			</description>
		</method>
		<method name="add_task">
			<return type="int" />
			<param index="0" name="action" type="Callable" />
			<param index="1" name="high_priority" type="bool" default="false" />
			<param index="2" name="description" type="String" default="&quot;&quot;" />
			<description>
				Adds [param action] as a task to be executed by a worker thread. [param high_priority] determines if the task has a high priority or a low priority (default). You can optionally provide a [param description] to help with debugging.
				Returns a task ID that can be used by other methods.
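				For example, a minimal sketch that offloads a single function and blocks until it has finished (the [code]_results[/code] variable and [code]compute_results[/code] function are illustrative):
				[codeblock]
				var _results = []

				func compute_results():
					# Expensive logic that fills _results.
					pass

				func _ready():
					var task_id = WorkerThreadPool.add_task(compute_results)
					# Other code...
					WorkerThreadPool.wait_for_task_completion(task_id)
					print(_results) # Safe to read, now that the task has finished.
				[/codeblock]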
			</description>
		</method>
		<method name="get_group_processed_element_count" qualifiers="const">
			<return type="int" />
			<param index="0" name="group_id" type="int" />
			<description>
				Returns how many times the [Callable] of the group task with the given ID has already been executed by the worker threads.
				[b]Note:[/b] If a thread has started executing the [Callable] but is yet to finish, it won't be counted.
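				For example, a minimal sketch that reports progress from [method Node._process] without blocking (the [code]_group_id[/code] and [code]_element_count[/code] variables are illustrative):
				[codeblock]
				func _process(delta):
					if _group_id != -1 and not WorkerThreadPool.is_group_task_completed(_group_id):
						var done = WorkerThreadPool.get_group_processed_element_count(_group_id)
						print("Processed %d of %d elements." % [done, _element_count])
				[/codeblock]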
			</description>
		</method>
		<method name="is_group_task_completed" qualifiers="const">
			<return type="bool" />
			<param index="0" name="group_id" type="int" />
			<description>
				Returns [code]true[/code] if the group task with the given ID is completed.
			</description>
		</method>
		<method name="is_task_completed" qualifiers="const">
			<return type="bool" />
			<param index="0" name="task_id" type="int" />
			<description>
				Returns [code]true[/code] if the task with the given ID is completed.
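				For example, a minimal sketch that checks a task from [method Node._process] without blocking (the [code]_task_id[/code] variable is illustrative):
				[codeblock]
				func _process(delta):
					if _task_id != -1 and WorkerThreadPool.is_task_completed(_task_id):
						# The task is already done, so this should return immediately and dispose of it.
						WorkerThreadPool.wait_for_task_completion(_task_id)
						_task_id = -1
				[/codeblock]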
			</description>
		</method>
		<method name="wait_for_group_task_completion">
			<return type="void" />
			<param index="0" name="group_id" type="int" />
			<description>
				Pauses the thread that calls this method until the group task with the given ID is completed.
			</description>
		</method>
		<method name="wait_for_task_completion">
			<return type="int" enum="Error" />
			<param index="0" name="task_id" type="int" />
			<description>
				Pauses the thread that calls this method until the task with the given ID is completed.
				Returns [constant @GlobalScope.OK] if the task could be successfully awaited.
				Returns [constant @GlobalScope.ERR_INVALID_PARAMETER] if a task with the passed ID does not exist (maybe because it was already awaited and disposed of).
				Returns [constant @GlobalScope.ERR_BUSY] if the call is made from another running task and, due to task scheduling, there's potential for deadlocking (e.g., the task to await may be at a lower level in the call stack and therefore can't progress). This is an advanced situation that should only matter when some tasks depend on others (in the current implementation, the tricky case is a task trying to wait on an older one).
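				For example, a minimal sketch that checks the returned error (the [code]task_id[/code] value is assumed to come from a previous [method add_task] call):
				[codeblock]
				var error = WorkerThreadPool.wait_for_task_completion(task_id)
				if error == ERR_INVALID_PARAMETER:
					push_error("Task %d does not exist or was already awaited." % task_id)
				elif error == ERR_BUSY:
					push_error("Cannot safely wait for task %d from this task." % task_id)
				[/codeblock]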
			</description>
		</method>
	</methods>
</class>