Implementing Confidence-based Work Stealing Search in Gecode

University essay from KTH/Skolan för informations- och kommunikationsteknik (ICT)

Author: Patrik Eklöf; [2014]


Abstract: Constraint programming is a field whose goal is to solve very large problems defined by a set of restrictions (constraints). One example is generating CPU instructions from source code: a compiler must choose the instructions that best match the source code, schedule them to minimize execution time, and possibly also minimize power consumption. The difficulty lies in the fact that there is no single good way to approach the problem, since all parameters depend on each other. For example, if the compiler minimizes the number of instructions, it usually ends up with large, complex instructions that minimize power consumption but are harder to schedule efficiently to reduce runtime. Choosing many smaller instructions gives more scheduling flexibility, but draws more power. The compiler must also take caches into account to minimize misses, which cost power and slow down execution, making the whole problem even more complex. To find the best solution to such problems, one must typically explore every single possibility and measure which one is fastest. This creates a huge space of possible solutions, which takes a tremendous amount of time to explore before a solution that meets the requirements (often the “optimal” solution) is found. Typically, these problems are abstracted into search trees, which are explored using different techniques. There are two common ways to parallelize the exploration of search trees: coarse-grained parallel search, which splits exploration among threads as far up in the tree as possible, near the root, and fine-grained parallel search, which splits up the work as far down the search tree as possible so that each thread gets only a small subtree to explore.
Coarse-grained search has the advantage that it can achieve super-linear speedup if the solution is not in the leftmost subtree; otherwise, all the extra work is wasted (compared to DFS). Fine-grained search reliably achieves linear speedup, but can never achieve super-linear speedup. An interesting search method known as confidence-based search combines these two approaches. It works by having the user provide a probability for each branch (called a confidence model); the search method uses these probabilities as a guide for how many resources to spend exploring the different subtrees. For example, with 10 threads and a probability of 0.8 that a subtree contains a solution, the search method sends 8 threads into that subtree; an alternative way of looking at it is that the search method spends 80% of its resources exploring that subtree and the remaining 20% exploring the rest of the subtrees. As the search finds failed nodes, it updates the probabilities, reflecting that a solution is less probable in a subtree that contains more failed nodes. Periodically, the algorithm also restarts, and when it does, it uses the updated probabilities as a guide for where to look for solutions. The goal of this thesis was to create such a search engine from scratch for the constraint programming framework Gecode. The resulting engine showed a lot of potential, and while not perfect, it showed clear signs of super-linear speedup for some of the problems tested, even with naïve confidence models.
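The proportional allocation and failure-driven update described above can be sketched in a few lines of C++. This is a minimal illustration only: the function names and the simple decay-style update rule are assumptions for exposition, not Gecode's actual API or the confidence model used in the thesis.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Split `threads` workers across subtrees in proportion to their
// confidence values (probabilities that a solution lies in each subtree).
std::vector<int> allocateThreads(const std::vector<double>& confidence,
                                 int threads) {
    double total = std::accumulate(confidence.begin(), confidence.end(), 0.0);
    std::vector<int> alloc(confidence.size(), 0);
    std::vector<std::pair<double, std::size_t>> fractions;
    int assigned = 0;
    for (std::size_t i = 0; i < confidence.size(); ++i) {
        double share = confidence[i] / total * threads;
        alloc[i] = static_cast<int>(share);          // whole-thread share
        assigned += alloc[i];
        fractions.push_back({share - alloc[i], i});  // leftover fraction
    }
    // Hand out any remaining threads to the subtrees with the
    // largest leftover fractions.
    std::sort(fractions.begin(), fractions.end(),
              [](const auto& a, const auto& b) { return a.first > b.first; });
    for (int r = 0; r < threads - assigned; ++r)
        alloc[fractions[r].second] += 1;
    return alloc;
}

// When a subtree accumulates failed nodes, lower its confidence; the
// next restart then re-allocates threads using the updated values.
// The multiplicative penalty is an illustrative choice, not the
// thesis's exact update rule.
void recordFailure(std::vector<double>& confidence, std::size_t subtree,
                   double penalty = 0.5) {
    confidence[subtree] *= penalty;
}
```

With two subtrees of confidence 0.8 and 0.2 and 10 threads, `allocateThreads` reproduces the 8/2 split from the example above; after a failure is recorded in the first subtree, the next restart shifts threads toward the second one.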
