RICE
Instructor: Vivek Sarkar
Course Website
Parallel, concurrent, and distributed programming underlies software in multiple domains, ranging from biomedical research to financial services. This specialization is intended for anyone with a basic knowledge of sequential programming in Java who is motivated to learn how to write parallel, concurrent, and distributed programs. Through a collection of three courses (which may be taken in any order or separately), you will learn foundational topics in Parallelism, Concurrency, and Distribution. These courses will prepare you for multithreaded and distributed programming for a wide range of computer platforms, from mobile devices to cloud computing servers.
This course teaches learners (industry professionals and students) the fundamental concepts of parallel programming in the context of Java 8. Parallel programming enables developers to use multicore computers to make their applications run faster by using multiple processors at the same time. By the end of this course, you will learn how to use popular parallel Java frameworks (such as ForkJoin, Stream, and Phaser) to write parallel programs for a wide range of multicore platforms including servers, desktops, and mobile devices, while also learning about their theoretical foundations, including computation graphs, ideal parallelism, parallel speedup, Amdahl's Law, data races, and determinism.
The desired learning outcomes of this course are as follows:
• Theory of parallelism: computation graphs, work, span, ideal parallelism, parallel speedup, Amdahl's Law, data races, and determinism
• Task parallelism using Java’s ForkJoin framework
• Functional parallelism using Java’s Future and Stream frameworks
• Loop-level parallelism with extensions for barriers and iteration grouping (chunking)
• Dataflow parallelism using the Phaser framework and data-driven tasks
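As a flavor of the task-parallelism outcome above, here is a minimal, hypothetical sketch (not the miniproject code) of summing an array with Java's Fork/Join framework; the class name and the `THRESHOLD` cutoff are illustrative choices:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative sketch: recursively split an array sum into subtasks
// until the range is small enough to sum sequentially.
public class ArraySum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // assumed sequential cutoff
    private final int[] data;
    private final int lo, hi;

    ArraySum(int[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // base case: sum sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ArraySum left = new ArraySum(data, lo, mid);
        ArraySum right = new ArraySum(data, mid, hi);
        left.fork();                          // run left half asynchronously
        long rightSum = right.compute();      // right half in the current thread
        return left.join() + rightSum;        // wait for left, then combine
    }

    public static void main(String[] args) {
        int[] data = new int[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = ForkJoinPool.commonPool().invoke(new ArraySum(data, 0, data.length));
        System.out.println(sum); // 49995000, i.e. 0 + 1 + ... + 9999
    }
}
```

The `fork()`/`compute()`/`join()` pattern shown here is the standard way to overlap the two halves: forking both subtasks would leave the current worker idle while it waits.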
Project Index | Detailed Requirements | Quick Link to My Solution |
---|---|---|
Project 1 | Reciprocal-Array-Sum using the Java Fork/Join Framework | miniproject_1 |
Project 2 | Analyzing Student Statistics Using Java Parallel Streams | miniproject_2 |
Project 3 | Parallelizing Matrix-Matrix Multiply Using Loop Parallelism | miniproject_3 |
Project 4 | Using Phasers to Optimize Data-Parallel Applications | miniproject_4 |
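In the same spirit as the Stream-based projects, a tiny hedged sketch of functional parallelism with Java parallel streams (the input range stands in for real data and is purely illustrative):

```java
import java.util.stream.IntStream;

// Illustrative sketch: compute a mean over a range of simulated scores.
// Adding .parallel() lets the stream runtime split the work across cores
// without changing the result.
public class ParallelStats {
    public static void main(String[] args) {
        double avg = IntStream.rangeClosed(1, 100) // scores 1..100 (assumed data)
                              .parallel()          // process chunks on multiple threads
                              .average()
                              .orElse(0.0);
        System.out.println(avg); // 50.5
    }
}
```

Because `average()` is an associative reduction, the parallel and sequential versions are guaranteed to agree, which is what makes this style of parallelism deterministic.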
This course teaches learners (industry professionals and students) the fundamental concepts of concurrent programming in the context of Java 8. Concurrent programming enables developers to efficiently and correctly mediate the use of shared resources in parallel programs. By the end of this course, you will learn how to use basic concurrency constructs in Java such as threads, locks, critical sections, atomic variables, isolation, actors, optimistic concurrency and concurrent collections, as well as their theoretical foundations (e.g., progress guarantees, deadlock, livelock, starvation, linearizability).
The desired learning outcomes of this course are as follows:
• Concurrency theory: progress guarantees, deadlock, livelock, starvation, linearizability
• Use of threads and structured/unstructured locks in Java
• Atomic variables and isolation
• Optimistic concurrency and concurrent collections in Java (e.g., concurrent queues, concurrent hashmaps)
• Actor model in Java
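To make the atomic-variables outcome concrete, here is a minimal sketch (assumptions: 4 worker threads, 10,000 increments each) showing how `AtomicInteger` avoids lost updates without explicit locks:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: several threads increment one shared counter.
// incrementAndGet() performs each read-modify-write atomically, so no
// update is lost even under contention.
public class AtomicCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) counter.incrementAndGet();
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join(); // wait for all increments to finish
        System.out.println(counter.get()); // always 40000
    }
}
```

With a plain `int` and `counter++`, interleaved read-modify-write steps could silently drop increments; the atomic variable is the lock-free fix.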
Project Index | Detailed Requirements | Quick Link to My Solution |
---|---|---|
Project 1 | Locking and Synchronization | miniproject_1 |
Project 2 | Global and Object-Based Isolation | miniproject_2 |
Project 3 | Sieve of Eratosthenes Using Actor Parallelism | miniproject_3 |
Project 4 | Parallelization of Boruvka's Minimum Spanning Tree Algorithm | miniproject_4 |
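The concurrent-collections outcome above can be sketched with a `ConcurrentHashMap` shared by two threads (the word list and thread split are made up for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: two threads merge word counts into one shared map.
// ConcurrentHashMap.merge() applies each update atomically per key, so
// concurrent writers cannot corrupt the map or lose counts.
public class ConcurrentWordCount {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        String[] words = {"lock", "free", "lock", "actor", "free", "lock"};
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 3; i++) counts.merge(words[i], 1, Integer::sum);
        });
        Thread t2 = new Thread(() -> {
            for (int i = 3; i < 6; i++) counts.merge(words[i], 1, Integer::sum);
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("lock")); // 3
    }
}
```

Unlike wrapping a `HashMap` in one global lock, `ConcurrentHashMap` lets updates to different keys proceed in parallel, which is the optimistic-concurrency idea the course covers.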
This course teaches learners (industry professionals and students) the fundamental concepts of distributed programming in the context of Java 8. Distributed programming enables developers to use multiple nodes in a data center to increase throughput and/or reduce latency of selected applications. By the end of this course, you will learn how to use popular distributed programming frameworks and APIs for Java programs, including Hadoop, Spark, sockets, Remote Method Invocation (RMI), multicast sockets, Kafka, and the Message Passing Interface (MPI), as well as different approaches to combining distribution with multithreading.
The desired learning outcomes of this course are as follows:
• Distributed map-reduce programming in Java using the Hadoop and Spark frameworks
• Client-server programming using Java's Socket and Remote Method Invocation (RMI) interfaces
• Message-passing programming in Java using the Message Passing Interface (MPI)
• Approaches to combine distribution with multithreading, including processes and threads, distributed actors, and reactive programming
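The client-server outcome above can be sketched in a few lines with plain Java sockets; this is a minimal, self-contained echo demo on the loopback interface (class name and message are illustrative, not miniproject code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch: a one-shot echo server in a background thread,
// and a client that sends one line and reads the echoed reply.
public class EchoDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // port 0: OS picks a free port
        Thread serverThread = new Thread(() -> {
            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                out.println(in.readLine()); // echo the line back to the client
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        serverThread.start();

        try (Socket client = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine()); // prints "hello"
        }
        serverThread.join();
        server.close();
    }
}
```

Running the server's accept loop on its own thread is also the first step toward the multi-threaded file server pattern: a production server would spawn (or pool) one handler thread per accepted connection instead of serving a single request.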
Project Index | Detailed Requirements | Quick Link to My Solution |
---|---|---|
Project 1 | Page Rank with Spark | miniproject_1 |
Project 2 | File Server | miniproject_2 |
Project 3 | Matrix Multiply in MPI | miniproject_3 |
Project 4 | Multi-Threaded File Server | miniproject_4 |