diff --git a/Exceptions_8cs.html b/Exceptions_8cs.html index 78e45966..618d7d2e 100644 --- a/Exceptions_8cs.html +++ b/Exceptions_8cs.html @@ -82,7 +82,7 @@  Exception thrown if a Parallel.ParallelRegion is created inside of another Parallel.ParallelRegion. More...
  class  DotMP.CannotPerformNestedWorksharingException - Exception thrown if a Parallel.Single is created inside of a Parallel.For or Parallel.ForReduction<T>. More...
+ Exception thrown if a Parallel.Single is created inside of a Parallel.For or Parallel.ForReduction<T>. More...
  class  DotMP.InvalidArgumentsException  Exception thrown if invalid arguments are specified to DotMP functions. More...
diff --git a/Iter_8cs.html b/Iter_8cs.html index eb3d7377..3682cfe8 100644 --- a/Iter_8cs.html +++ b/Iter_8cs.html @@ -67,8 +67,7 @@
Classes | -Namespaces | -Enumerations
+Namespaces
Iter.cs File Reference
@@ -76,6 +75,21 @@ + + + + + + + + + + + + + + + @@ -84,16 +98,6 @@ Namespaces -

Classes

class  DotMP.Schedule
 Represents the various scheduling strategies for parallel for loops. Detailed explanations of each scheduling strategy are provided alongside each getter. If no schedule is specified, the default is Schedule.Static. More...
 
class  DotMP.StaticScheduler
 Implementation of static scheduling. More...
 
class  DotMP.DynamicScheduler
 Implementation of dynamic scheduling. More...
 
class  DotMP.GuidedScheduler
 Implementation of guided scheduling. More...
 
class  DotMP.RuntimeScheduler
 Placeholder for the runtime scheduler; it is not meant to be called directly. The Parallel.FixArgs method should detect its presence and swap it out for a scheduler with a concrete implementation. More...
 
class  DotMP.Iter
 Contains all of the scheduling code for parallel for loops. More...
 
namespace  DotMP
 
- - - -

-Enumerations

enum class  DotMP.Schedule { DotMP.Static -, DotMP.Dynamic -, DotMP.Guided -, DotMP.Runtime - }
 Represents the various scheduling strategies for parallel for loops. Detailed explanations of each scheduling strategy are provided alongside each enumeration value. If no schedule is specified, the default is Schedule.Static. More...
 
diff --git a/ParallelTests_8cs.html b/ParallelTests_8cs.html index a67c5451..0c289f39 100644 --- a/ParallelTests_8cs.html +++ b/ParallelTests_8cs.html @@ -78,6 +78,9 @@ class  DotMPTests.ParallelTests  Tests for the DotMP library. More...
  +class  Serial + Custom scheduler which runs a for loop in serial. More...
+  diff --git a/Scheduler_8cs.html b/Scheduler_8cs.html new file mode 100644 index 00000000..04b55467 --- /dev/null +++ b/Scheduler_8cs.html @@ -0,0 +1,93 @@ + + + + + + + +DotMP: DotMP/Scheduler.cs File Reference + + + + + + + + + +
+
+

Namespaces

+ + + + + +
+
DotMP +
+
+ + + + + + + + + +
+
+ + +
+ +
+ + + +
+
+Classes | +Namespaces
+
+
Scheduler.cs File Reference
+
+
+ + + + + +

+Classes

interface  DotMP.IScheduler
 Interface for user-defined schedulers. More...
 
+ + + +

+Namespaces

namespace  DotMP
 
+
+ + + + diff --git a/Wrappers_8cs.html b/Wrappers_8cs.html index 94a68403..fbb4a391 100644 --- a/Wrappers_8cs.html +++ b/Wrappers_8cs.html @@ -78,7 +78,7 @@

Classes

class  DotMP.ForAction< T > - Class encapsulating all of the possible callbacks in a Parallel.For-style loop. This includes Parallel.For, Parallel.ForReduction<T>, Parallel.ForCollapse, and Parallel.ForReductionCollapse<T>. More...
+ Class encapsulating all of the possible callbacks in a Parallel.For-style loop. This includes Parallel.For, Parallel.ForReduction<T>, Parallel.ForCollapse, and Parallel.ForReductionCollapse<T>. More...
  - + - - - - - - - - - + + + + + + + + + + + + + + + +

diff --git a/annotated.html b/annotated.html index 34815ff3..1d6f9f25 100644 --- a/annotated.html +++ b/annotated.html @@ -73,23 +73,30 @@

 CDAGDAG for maintaining task dependencies
 CNotInParallelRegionExceptionException thrown if a parallel-only construct is used outside of a parallel region
 CCannotPerformNestedParallelismExceptionException thrown if a Parallel.ParallelRegion is created inside of another Parallel.ParallelRegion
 CCannotPerformNestedWorksharingExceptionException thrown if a Parallel.Single is created inside of a Parallel.For or Parallel.ForReduction<T>
 CCannotPerformNestedWorksharingExceptionException thrown if a Parallel.Single is created inside of a Parallel.For or Parallel.ForReduction<T>
 CInvalidArgumentsExceptionException thrown if invalid arguments are specified to DotMP functions
 CRegionContains relevant internal information about parallel regions, including the threads and the function to be executed. Provides a region-wide lock and SpinWait objects for each thread
 CForkedRegionContains the Region object and controls for creating and starting a parallel region
 CThrEncapsulates a Thread object with information about its progress through a parallel for loop. To track that progress, we record the next iteration of the loop to be worked on and the iteration the thread is currently working on
 CWorkShareContains all relevant information about a parallel for loop. Contains a collection of Thr objects, the loop's start and end iterations, the chunk size, the number of threads, and the number of threads that have completed their work
 CIterContains all of the scheduling code for parallel for loops
 CLockA lock that can be used in a parallel region. Also contains instance methods for locking. Available methods are Set, Unset, and Test
 CParallelThe main class of DotMP. Contains all the main methods for parallelism. For users, this is the main class you want to worry about, along with Lock, Shared, and Atomic
 CSectionsContainerStatic class that contains necessary information for sections. Sections allow for the user to submit multiple actions to be executed in parallel. A sections region contains a collection of actions to be executed, specified as Parallel.Section directives. More information can be found in the Parallel.Sections documentation
 CSharedA shared variable that can be used in a parallel region. This allows for a variable to be declared inside of a parallel region that is shared among all threads, which has some nice use cases
 CSharedEnumerableA specialization of Shared for items that can be indexed with square brackets. The DotMP-parallelized Conjugate Gradient example shows this off fairly well inside of the SpMV function
 CTaskingContainerA simple container for a Queue<Action> for managing tasks. Will grow in complexity as dependencies are added and a dependency graph must be generated
 CTaskUUIDTask UUID as returned from Parallel.Task
 CForActionClass encapsulating all of the possible callbacks in a Parallel.For-style loop. This includes Parallel.For, Parallel.ForReduction<T>, Parallel.ForCollapse, and Parallel.ForReductionCollapse<T>
 CScheduleRepresents the various scheduling strategies for parallel for loops. Detailed explanations of each scheduling strategy are provided alongside each getter. If no schedule is specified, the default is Schedule.Static
 CStaticSchedulerImplementation of static scheduling
 CDynamicSchedulerImplementation of dynamic scheduling
 CGuidedSchedulerImplementation of guided scheduling
 CRuntimeSchedulerPlaceholder for the runtime scheduler; it is not meant to be called directly. The Parallel.FixArgs method should detect its presence and swap it out for a scheduler with a concrete implementation
 CIterContains all of the scheduling code for parallel for loops
 CLockA lock that can be used in a parallel region. Also contains instance methods for locking. Available methods are Set, Unset, and Test
 CParallelThe main class of DotMP. Contains all the main methods for parallelism. For users, this is the main class you want to worry about, along with Lock, Shared, and Atomic
 CISchedulerInterface for user-defined schedulers
 CSectionsContainerStatic class that contains necessary information for sections. Sections allow for the user to submit multiple actions to be executed in parallel. A sections region contains a collection of actions to be executed, specified as Parallel.Section directives. More information can be found in the Parallel.Sections documentation
 CSharedA shared variable that can be used in a parallel region. This allows for a variable to be declared inside of a parallel region that is shared among all threads, which has some nice use cases
 CSharedEnumerableA specialization of Shared for items that can be indexed with square brackets. The DotMP-parallelized Conjugate Gradient example shows this off fairly well inside of the SpMV function
 CTaskingContainerA simple container for a Queue<Action> for managing tasks. Will grow in complexity as dependencies are added and a dependency graph must be generated
 CTaskUUIDTask UUID as returned from Parallel.Task
 CForActionClass encapsulating all of the possible callbacks in a Parallel.For-style loop. This includes Parallel.For, Parallel.ForReduction<T>, Parallel.ForCollapse, and Parallel.ForReductionCollapse<T>
 NDotMPTests
 CParallelTestsTests for the DotMP library
 CSerialCustom scheduler which runs a for loop in serial
diff --git a/classDotMPTests_1_1ParallelTests-members.html b/classDotMPTests_1_1ParallelTests-members.html index 2699797c..3eff761e 100644 --- a/classDotMPTests_1_1ParallelTests-members.html +++ b/classDotMPTests_1_1ParallelTests-members.html @@ -77,50 +77,51 @@ Collapse_works()DotMPTests.ParallelTestsinline CreateRegion()DotMPTests.ParallelTestsinlineprivatestatic Critical_works()DotMPTests.ParallelTestsinline - Dynamic_should_produce_correct_results()DotMPTests.ParallelTestsinline - Get_and_Set_NumThreads_work()DotMPTests.ParallelTestsinline - GetNested_works()DotMPTests.ParallelTestsinline - GetWTime_works()DotMPTests.ParallelTestsinline - Guided_should_produce_correct_results()DotMPTests.ParallelTestsinline - InnerWorkload(int j, float[] a, float[] b, float[] c)DotMPTests.ParallelTestsinlineprivatestatic - InParallel_works()DotMPTests.ParallelTestsinline - Invalid_params_should_except()DotMPTests.ParallelTestsinline - Locks_work()DotMPTests.ParallelTestsinline - Master_works()DotMPTests.ParallelTestsinline - Nested_parallelism_should_except()DotMPTests.ParallelTestsinline - Nested_task_dependencies_work()DotMPTests.ParallelTestsinline - Nested_tasks_work()DotMPTests.ParallelTestsinline - Nested_worksharing_should_except()DotMPTests.ParallelTestsinline - Non_for_ordered_should_except()DotMPTests.ParallelTestsinline - Non_parallel_barrier_should_except()DotMPTests.ParallelTestsinline - Non_parallel_critical_should_except()DotMPTests.ParallelTestsinline - Non_parallel_for_should_except()DotMPTests.ParallelTestsinline - Non_parallel_GetThreadNum_should_except()DotMPTests.ParallelTestsinline - Non_parallel_master_should_except()DotMPTests.ParallelTestsinline - Non_parallel_sections_should_except()DotMPTests.ParallelTestsinline - Non_parallel_single_should_except()DotMPTests.ParallelTestsinline - Ordered_works()DotMPTests.ParallelTestsinline - Parallel_performance_should_be_higher()DotMPTests.ParallelTestsinline - Parallel_should_work()DotMPTests.ParallelTestsinline 
- Parallelfor_should_work()DotMPTests.ParallelTestsinline - Reduction_collapse_works()DotMPTests.ParallelTestsinline - Reduction_works()DotMPTests.ParallelTestsinline - saxpy_parallelfor(float a, float[] x, float[] y)DotMPTests.ParallelTestsinlineprivate - saxpy_parallelregion_for(float a, float[] x, float[] y, Schedule schedule, uint? chunk_size)DotMPTests.ParallelTestsinlineprivate - saxpy_parallelregion_for_taskloop(float a, float[] x, float[] y, uint? grainsize)DotMPTests.ParallelTestsinlineprivate - Schedule_runtime_works()DotMPTests.ParallelTestsinline - Sections_works()DotMPTests.ParallelTestsinline - SetDynamic_works()DotMPTests.ParallelTestsinline - Shared_works()DotMPTests.ParallelTestsinline - SharedEnumerable_works()DotMPTests.ParallelTestsinline - Single_works()DotMPTests.ParallelTestsinline - Static_should_produce_correct_results()DotMPTests.ParallelTestsinline - Task_dependencies_work()DotMPTests.ParallelTestsinline - Tasking_works()DotMPTests.ParallelTestsinline - Taskloop_dependencies_work()DotMPTests.ParallelTestsinline - Taskloop_only_if_works()DotMPTests.ParallelTestsinline - Taskloop_should_produce_correct_results()DotMPTests.ParallelTestsinline - Workload(bool inParallel)DotMPTests.ParallelTestsinlineprivatestatic + Custom_scheduler_works()DotMPTests.ParallelTestsinline + Dynamic_should_produce_correct_results()DotMPTests.ParallelTestsinline + Get_and_Set_NumThreads_work()DotMPTests.ParallelTestsinline + GetNested_works()DotMPTests.ParallelTestsinline + GetWTime_works()DotMPTests.ParallelTestsinline + Guided_should_produce_correct_results()DotMPTests.ParallelTestsinline + InnerWorkload(int j, float[] a, float[] b, float[] c)DotMPTests.ParallelTestsinlineprivatestatic + InParallel_works()DotMPTests.ParallelTestsinline + Invalid_params_should_except()DotMPTests.ParallelTestsinline + Locks_work()DotMPTests.ParallelTestsinline + Master_works()DotMPTests.ParallelTestsinline + Nested_parallelism_should_except()DotMPTests.ParallelTestsinline + 
Nested_task_dependencies_work()DotMPTests.ParallelTestsinline + Nested_tasks_work()DotMPTests.ParallelTestsinline + Nested_worksharing_should_except()DotMPTests.ParallelTestsinline + Non_for_ordered_should_except()DotMPTests.ParallelTestsinline + Non_parallel_barrier_should_except()DotMPTests.ParallelTestsinline + Non_parallel_critical_should_except()DotMPTests.ParallelTestsinline + Non_parallel_for_should_except()DotMPTests.ParallelTestsinline + Non_parallel_GetThreadNum_should_except()DotMPTests.ParallelTestsinline + Non_parallel_master_should_except()DotMPTests.ParallelTestsinline + Non_parallel_sections_should_except()DotMPTests.ParallelTestsinline + Non_parallel_single_should_except()DotMPTests.ParallelTestsinline + Ordered_works()DotMPTests.ParallelTestsinline + Parallel_performance_should_be_higher()DotMPTests.ParallelTestsinline + Parallel_should_work()DotMPTests.ParallelTestsinline + Parallelfor_should_work()DotMPTests.ParallelTestsinline + Reduction_collapse_works()DotMPTests.ParallelTestsinline + Reduction_works()DotMPTests.ParallelTestsinline + saxpy_parallelfor(float a, float[] x, float[] y)DotMPTests.ParallelTestsinlineprivate + saxpy_parallelregion_for(float a, float[] x, float[] y, Schedule schedule, uint? chunk_size)DotMPTests.ParallelTestsinlineprivate + saxpy_parallelregion_for_taskloop(float a, float[] x, float[] y, uint? 
grainsize)DotMPTests.ParallelTestsinlineprivate + Schedule_runtime_works()DotMPTests.ParallelTestsinline + Sections_works()DotMPTests.ParallelTestsinline + SetDynamic_works()DotMPTests.ParallelTestsinline + Shared_works()DotMPTests.ParallelTestsinline + SharedEnumerable_works()DotMPTests.ParallelTestsinline + Single_works()DotMPTests.ParallelTestsinline + Static_should_produce_correct_results()DotMPTests.ParallelTestsinline + Task_dependencies_work()DotMPTests.ParallelTestsinline + Tasking_works()DotMPTests.ParallelTestsinline + Taskloop_dependencies_work()DotMPTests.ParallelTestsinline + Taskloop_only_if_works()DotMPTests.ParallelTestsinline + Taskloop_should_produce_correct_results()DotMPTests.ParallelTestsinline + Workload(bool inParallel)DotMPTests.ParallelTestsinlineprivatestatic
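The diff above adds a DotMP.IScheduler interface for user-defined schedulers and a Serial test scheduler that runs a for loop in serial. The interface's member signatures are not shown in the diff, so the sketch below is a hypothetical mirror of that contract (LoopInit/LoopNext and their parameters are assumptions, not taken from the diff): a serial scheduler hands the entire iteration range to thread 0 as a single chunk, so the loop effectively runs on one thread.

```csharp
using System;

// Hypothetical mirror of DotMP's IScheduler contract; the real interface's
// member names and signatures are not shown in the diff above.
public interface IScheduler
{
    // Called once before the loop starts.
    void LoopInit(int start, int end, uint numThreads, uint chunkSize);
    // Called by each thread to claim its next chunk of iterations;
    // an empty chunk (chunkStart == chunkEnd) means no work remains.
    void LoopNext(int threadId, out int chunkStart, out int chunkEnd);
}

// Sketch of a "Serial"-style scheduler like the one added to the test suite:
// thread 0 receives the whole iteration space exactly once.
public class SerialScheduler : IScheduler
{
    private int start, end;
    private bool done;

    public void LoopInit(int start, int end, uint numThreads, uint chunkSize)
    {
        this.start = start;
        this.end = end;
        this.done = false;
    }

    public void LoopNext(int threadId, out int chunkStart, out int chunkEnd)
    {
        if (threadId == 0 && !done)
        {
            chunkStart = start;
            chunkEnd = end;
            done = true;                  // hand out the whole range once
        }
        else
        {
            chunkStart = chunkEnd = end;  // empty chunk: no work left
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var sched = new SerialScheduler();
        sched.LoopInit(0, 10, numThreads: 4, chunkSize: 1);

        int sum = 0;
        sched.LoopNext(0, out int lo, out int hi);
        for (int i = lo; i < hi; i++) sum += i;  // 0 + 1 + ... + 9
        Console.WriteLine(sum);                  // prints 45
    }
}
```

Handing out one full-range chunk is the simplest way to satisfy a chunk-claiming scheduler contract while forcing serial execution, which is why it makes a convenient correctness baseline in tests such as Custom_scheduler_works().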