Complexity and Analysis of Distributed Algorithms (16w5152)

Arriving in Oaxaca, Mexico on Sunday, November 27, and departing Friday, December 2, 2016

Organizers

(University of Calgary)

(Technion - Israel Institute of Technology)

(Universidad Nacional Autonoma de Mexico)

Objectives

One of the main challenges in computing today is to enable algorithms to fully exploit the parallelism provided by hardware. Shared memory algorithms need to cope with the asynchrony and faulty process behavior inherent in parallel systems. In light of recent technological developments, most conspicuously multi-core processors, distributed shared memory algorithms have become highly important for everyday and high-performance computing. The design of correct shared memory algorithms, as well as their efficiency analysis and correctness proofs, remains a challenging task.

This workshop will bring together international experts, promising junior researchers, and some exceptional graduate students and postdocs, to exchange ideas on how research on the design and analysis of fundamental algorithmic problems in shared memory systems can be advanced. Invitees will cover a broad range of expertise. While the emphasis will be on the theory of shared memory algorithms, experts in other areas of concurrent and distributed computing (e.g., networks and message passing systems) and mathematics (combinatorics and topology) will be invited. We will also invite some researchers with a systems background and from industry research labs, whose expertise will help identify problems and approaches that are relevant for practical applications. This will, on the one hand, allow shared memory researchers to benefit from techniques and experience present in other fields, and on the other hand enable dissemination of new results and ideas to related research areas.

One goal is to identify novel techniques to design and analyze efficient randomized algorithms, and to understand the limits of their efficiency. Questions and topics of interest include, but are not limited to:
  • Present and discuss novel randomized algorithms or impossibility results for fundamental problems of distributed computing.
  • Can randomized algorithms and analysis techniques used in other models (e.g., message passing) be applied to shared memory models and vice versa?
  • How do variations in the model affect the efficiency of possible algorithms?
  • How much does the possible gain in speedup depend on the choice of adversary? Which adversaries permit efficient algorithms, but at the same time do not underestimate the effects of randomized choices on the schedule?
  • Explore techniques to prove lower bounds for randomized shared memory algorithms.
  • In sequential computing, Yao's Min-Max principle [Yao1977a] is frequently used for algorithm analysis. Even though there are some ad-hoc lower bound results relying on Yao's principle for shared memory problems (e.g., [GW2012a]), the general applicability of this technique is not well understood (the principle is recalled below).
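
For background, Yao's principle can be stated as follows (a minimal finite-game form, with "cost" standing for whichever complexity measure is of interest):
$$\min_{R}\max_{x}\ \mathbb{E}[\mathrm{cost}(R,x)] \;=\; \max_{\mu}\min_{A}\ \mathbb{E}_{x\sim\mu}[\mathrm{cost}(A,x)],$$
where $R$ ranges over randomized algorithms, $A$ over deterministic algorithms, $x$ over inputs, and $\mu$ over input distributions. In particular, a lower bound on the expected cost of every deterministic algorithm against some fixed input distribution implies the same lower bound on the worst-case expected cost of every randomized algorithm. In the shared memory setting the "input" typically includes the schedule chosen by the adversary, which is one reason the general applicability of the technique is not yet well understood.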


A related objective is to advance research on heuristics and alternative complexity measures for shared memory algorithms. Since this research is still in its infancy, many basic questions remain to be answered. The workshop will provide a unique opportunity to shape the direction this research will take, and to solicit and disseminate ideas of leading distributed computing researchers. Possible questions and topics that will be discussed are:
  • What are suitable models in which shared memory heuristics can be analyzed? Is there a good notion of average-case complexity for distributed problems? Is something similar to smoothed analysis possible for concurrent algorithms?
  • Develop design principles and analysis techniques for shared memory heuristics using randomized adversaries (a toy illustration of a randomized adversary appears after this list).
  • What is the complexity of fundamental shared memory problems in such models? Can we find efficient solutions to practical data-structure problems, e.g., concurrent binary search trees, using randomized adversary models?
  • (How) can we test the practical performance of heuristics, and whether theoretical models yield accurate predictions in real systems?
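
To make the notion of a randomized adversary concrete, the following toy simulation is a minimal sketch; the two-step read/CAS increment protocol, the uniformly random scheduler, and all parameter choices are illustrative assumptions, not a model fixed by the workshop. It schedules n asynchronous processes that each increment a shared counter, choosing the next process to take a step uniformly at random, and counts how many compare-and-swap attempts fail:

    import random

    def simulate(n_processes: int, increments_per_process: int, seed: int = 0) -> int:
        """Simulate n processes incrementing a shared counter under a random scheduler.

        Each increment takes two atomic steps: (1) read the counter, then
        (2) CAS the counter from the value read to that value plus one; a failed
        CAS restarts the increment.  The scheduler (the 'randomized adversary')
        picks the next process to take a step uniformly at random.
        Returns the total number of failed CAS attempts.
        """
        rng = random.Random(seed)
        counter = 0                      # the shared register
        local = [None] * n_processes     # value last read by each process (None = must read)
        remaining = [increments_per_process] * n_processes
        failed_cas = 0

        active = [p for p in range(n_processes) if remaining[p] > 0]
        while active:
            p = rng.choice(active)       # the adversary's (random) scheduling decision
            if local[p] is None:         # step 1: read the shared counter
                local[p] = counter
            else:                        # step 2: CAS(counter, local[p], local[p] + 1)
                if counter == local[p]:
                    counter += 1
                    remaining[p] -= 1
                else:
                    failed_cas += 1      # CAS failed; the process must re-read
                local[p] = None
            active = [q for q in range(n_processes) if remaining[q] > 0]
        return failed_cas

    if __name__ == "__main__":
        for n in (2, 8, 32):
            print(n, "processes:", simulate(n, increments_per_process=100), "failed CAS steps")

Under such a random scheduler every process completes with a finite expected number of failed steps, whereas a worst-case adaptive scheduler can force a single process of this lock-free counter to fail its CAS on every attempt for as long as other processes keep succeeding. Quantifying gaps of this kind analytically, rather than only empirically, is precisely what the questions above ask for.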


The workshop also aims to promote and advance new methods for analyzing and understanding shared memory algorithms. Combinatorial topology provides an exciting tool for obtaining insights which seem otherwise difficult or impossible to achieve. The initial hurdle of learning the mathematical background is high, but recent advances make the framework more accessible to non-topologists [HKR2013a]. One of the workshop's objectives is to engage researchers in this research direction, and to raise awareness of recent developments. Another objective is to come to a better understanding of the tool, and to advance progress on problems that have so far eluded researchers. Examples are:
  • Many impossibility results for distributed computing were established using conventional methods (i.e., not using combinatorial topology) [AE2014a:impossibility]. Can topology be used to explain, unify, and perhaps improve those results?
  • Explore the application of combinatorial topology to more models and problems. Computability of one-shot tasks has been studied in many distributed computing models, such as message-passing, synchronous, asynchronous, and partially synchronous models, with both crash failures and Byzantine failures. However, few topology-based results are known for other models and problems, such as long-lived tasks (as opposed to one-shot ones) and systems with multi-reader multi-writer registers.
  • Combinatorial topology tools have been applied mostly to computability questions, and in systems where processes can communicate directly with each other. It would be interesting to study the applicability of these techniques in situations where processes communicate through point-to-point channels defined by a graph, a very active research area pursued by other communities. It would also be interesting to study, through topology, aspects other than computability, such as complexity and the alternatives to worst-case analysis described above, where algorithms adapt to the actual conditions of the system.
The connection between distributed computing and topology is fundamental, and has attracted attention not only from computer scientists but also from mathematicians. New topology questions have arisen, and there is already work by researchers in topology motivated by distributed computing questions (e.g., [FFH2012dagstuhl, MR2955101, kozlov2012]). The workshop will provide a forum for further cross-fertilization between these two fields.
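
As background for this discussion, the central result linking the two fields is the Asynchronous Computability Theorem of Herlihy and Shavit, presented in [HKR2013a]; stated informally here (omitting the precise chromatic and carrier conditions): a task $(\mathcal{I},\mathcal{O},\Delta)$ is wait-free solvable in asynchronous read/write shared memory if and only if there is a chromatic subdivision of the input complex $\mathcal{I}$ and a color-preserving simplicial map from that subdivision to the output complex $\mathcal{O}$ that is carried by the task specification $\Delta$. Many of the questions above ask how far this style of characterization can be pushed beyond one-shot tasks, wait-freedom, and directly communicating processes.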


Bibliography



  1. [Abr1988] K. Abrahamson.
    On achieving consensus using a shared memory.
    In Proc. of 7th PODC, pp. 291--302, 1988.
  2. [AA2011a] D. Alistarh and J. Aspnes.
    Sub-logarithmic test-and-set against a weak adversary.
    In Proc. of 25th DISC, pp. 97--109, 2011.
  3. [AABGG2014a] D. Alistarh, J. Aspnes, M. A. Bender, R. Gelashvili, and S. Gilbert.
    Dynamic task allocation in asynchronous shared memory.
    In Proc. of 25th SODA, pp. 416--435, 2014.
  4. [AACGG2014a] D. Alistarh, J. Aspnes, K. Censor-Hillel, S. Gilbert, and R. Guerraoui.
    Tight bounds for asynchronous renaming.
    J. of the ACM, 61:18, 2014.
  5. [AACGZ2011a] D. Alistarh, J. Aspnes, K. Censor-Hillel, S. Gilbert, and M. Zadimoghaddam.
    Optimal-time adaptive strong renaming, with applications to counting.
    In Proc. of 30th PODC, pp. 239--248, 2011.
  6. [AAGW2013a] D. Alistarh, J. Aspnes, G. Giakkoupis, and P. Woelfel.
    Randomized loose renaming in $O(\log\log n)$ time.
    In Proc. of 32nd PODC, pp. 200--209, 2013.
  7. [AAGGG2010a] D. Alistarh, H. Attiya, S. Gilbert, A. Giurgiu, and R. Guerraoui.
    Fast randomized test-and-set and renaming.
    In Proc. of 24th DISC, pp. 94--108, 2010.
  8. [ABGG2012a] D. Alistarh, M. A. Bender, S. Gilbert, and R. Guerraoui.
    How to allocate tasks asynchronously.
    In Proc. of 53rd FOCS, pp. 331--340, 2012.
  9. [ACS2014a] D. Alistarh, K. Censor-Hillel, and N. Shavit.
    Are lock-free concurrent algorithms practically wait-free?
    In Proc. of 46th ACM STOC, pp. 714--723, 2014.
  10. [AAC2012a] J. Aspnes, H. Attiya, and K. Censor-Hillel.
    Polylogarithmic concurrent data structures from monotone circuits.
    J. of the ACM, 59:2, 2012.
  11. [AC2010a] J. Aspnes and K. Censor.
    Approximate shared-memory counting despite a strong adversary.
    ACM Transactions on Algorithms, 6:1--23, 2010.
  12. [AC2013a] J. Aspnes and K. Censor-Hillel.
    Atomic snapshots in $O(\log^3 n)$ steps using randomized helping.
    In Proc. of 27th DISC, pp. 254--268, 2013.
  13. [AH1990a] J. Aspnes and M. Herlihy.
    Fast randomized consensus using shared memory.
    J. of Alg., 11:441--461, 1990.
  14. [AE2014a:impossibility] H. Attiya and F. Ellen.
    Impossibility Results for Distributed Computing.
    Morgan and Claypool, 2014.
  15. [BG2011a] M. Bender and S. Gilbert.
    Mutual exclusion with $O(\log^2\log n)$ amortized work.
    In Proc. of 52nd FOCS, pp. 728--737, 2011.
  16. [CR2012a] A. Castañeda and S. Rajsbaum.
    New combinatorial topology bounds for renaming: The upper bound.
    J. of the ACM, 59:3, 2012.
  17. [GHHW2013a] G. Giakkoupis, M. Helmi, L. Higham, and P. Woelfel.
    An $O(\sqrt{n})$ space bound for obstruction-free leader election.
    In Proc. of 27th DISC, pp. 46--60, 2013.
  18. [GW2012b] G. Giakkoupis and P. Woelfel.
    On the time and space complexity of randomized test-and-set.
    In Proc. of 31st PODC, pp. 19--28, 2012.
  19. [GW2012a] G. Giakkoupis and P. Woelfel.
    A tight RMR lower bound for randomized mutual exclusion.
    In Proc. of 44th ACM STOC, pp. 983--1002, 2012.
  20. [GW2014a] G. Giakkoupis and P. Woelfel.
    Randomized mutual exclusion with constant amortized RMR complexity on the DSM.
    In Proc. of 55th FOCS, 2014. To appear.
  21. [GHW2010a] W. Golab, D. Hendler, and P. Woelfel.
    An $O(1)$ RMRs leader election algorithm.
    SIAM J. on Comp., 39:2726--2760, 2010.
  22. [HW2010a] D. Hendler and P. Woelfel.
    Adaptive randomized mutual exclusion in sub-logarithmic expected time.
    In Proc. of 29th PODC, pp. 141--150, 2010.
  23. [HW2011a] D. Hendler and P. Woelfel.
    Randomized mutual exclusion with sub-logarithmic RMR-complexity.
    Distr. Comp., 24:3--19, 2011.
  24. [Her1991a] M. Herlihy.
    Wait-free synchronization.
    ACM Trans. Program. Lang. Syst., 13:124--149, 1991.
  25. [HKR2013a] M. Herlihy, D. Kozlov, and S. Rajsbaum.
    Distributed Computing Through Combinatorial Topology.
    Morgan Kaufmann, 2013.
  26. [kozlov2012] D. N. Kozlov.
    Chromatic subdivision of a simplicial complex.
    Homology, Homotopy and Applications, 14:197--209, 2012.
  27. [MR2955101] D. N. Kozlov.
    Some conjectures concerning complexity of PL subdivisions.
    In Proceedings of the Workshop on Geometric and Topological Methods in Computer Science (GETCO), volume 283 of Electron. Notes Theor. Comput. Sci., pp. 153--157. Elsevier Sci. B. V., Amsterdam, 2012.
  28. [Lev1986a] L. A. Levin.
    Average case complete problems.
    SIAM J. on Comp., 15:285--286, 1986.
  29. [FFH2012dagstuhl] L. Fajstrup, D. Feichtner-Kozlov, and M. Herlihy, editors.
    Applications of Combinatorial Topology to Computer Science (Dagstuhl Seminar 12121), volume 2 of Dagstuhl Reports, 2012.
  30. [MF2004a:heuristics] Z. Michalewicz and D. Fogel.
    How to Solve It: Modern Heuristics.
    Springer, second edition, 2004.
  31. [PW2012a] A. Pareek and P. Woelfel.
    RMR-efficient randomized abortable mutual exclusion.
    In Proc. of 26th DISC, pp. 267--281, 2012.
  32. [ST2004a] D. A. Spielman and S. Teng.
    Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time.
    J. of the ACM, 51:385--463, 2004.
  33. [ST2009a] D. A. Spielman and S. Teng.
    Smoothed analysis: an attempt to explain the behavior of algorithms in practice.
    Commun. ACM, 52:76--84, 2009.
  34. [Yao1977a] A. C.-C. Yao.
    Probabilistic computations: Towards a unified measure of complexity.
    In Proc. of 17th FOCS, pp. 222--227, 1977.