Coarray Fortran
Coarray Fortran (CAF), formerly known as F--, started as an extension of Fortran 95/2003 for parallel processing created by Robert Numrich and John Reid in the 1990s. The Fortran 2008 standard now includes coarrays, as decided at the May 2005 meeting of the ISO Fortran Committee; the syntax in the Fortran 2008 standard is slightly different from the original CAF proposal.
A CAF program is interpreted as if it were replicated a number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is termed an image. The array syntax of Fortran is extended with additional trailing subscripts in square brackets to provide a concise representation of references to data that is spread across images.
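For instance, the following minimal sketch (program and variable names are illustrative, not taken from any particular source) declares a scalar and an array coarray with a codimension in square brackets and uses a trailing cosubscript to reference the copy held by another image:

program coarray_syntax_sketch
  implicit none
  integer :: x[*]      ! scalar coarray: every image holds its own copy of x
  integer :: y(10)[*]  ! array coarray: an array of 10 elements on every image
  x = this_image()     ! the plain name always refers to the local copy
  y = 10 * this_image()
  sync all             ! ensure every image has defined its copies
  if (this_image() == 1 .and. num_images() > 1) then
    print *, 'x on image 2 is ', x[2]                 ! trailing [2] selects image 2
    print *, 'y(3) on the last image is ', y(3)[num_images()]
  end if
end program coarray_syntax_sketch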
The CAF extension was implemented in some Fortran compilers such as those from Cray. Since the inclusion of coarrays in the Fortran 2008 standard, the number of implementations has grown. The first open-source compiler that implemented coarrays as specified in the Fortran 2008 standard for Linux architectures was G95. Currently, GNU Fortran provides wide coverage of Fortran's coarray features in single- and multi-image configurations. Another implementation of coarrays and related parallel extensions from Fortran 2008 is available in the OpenUH compiler developed at the University of Houston.
Implementation in compilers
CAF is often implemented on top of a Message Passing Interface library for portability. Some implementations, such as the ones available in the GNU Fortran and OpenUH compilers, may run on top of other low-level layers designed for supporting partitioned global address space languages.
Examples
A simple example is given below. CAF is used in CGPACK, an open source package for simulating polycrystalline materials developed at the University of Bristol.

program Hello_World
  implicit none
  integer :: i ! Local variable
  character(len=20) :: name[*] ! scalar coarray, one "name" for each image.
  ! Note: "name" is the local variable while "name[<index>]" accesses the
  ! variable in a specific image; "name[this_image()]" is the same as "name".
  ! Interact with the user on Image 1; execution on all other images passes by.
  if (this_image() == 1) then
    write(*,'(a)',advance='no') 'Enter your name: '
    read(*,'(a)') name
    ! Distribute information to other images
    do i = 2, num_images()
      name[i] = name
    end do
  end if
  sync all ! Barrier to make sure the data have arrived.
  ! I/O from all images, executing in any order, but each record written is intact.
  write(*,'(3a,i0)') 'Hello ', trim(name), ' from image ', this_image()
end program Hello_World
The program above scales poorly because the loop that distributes information executes sequentially. Writing scalable programs often requires a sophisticated understanding of parallel algorithms, detailed knowledge of the underlying network characteristics, and special tuning for application characteristics such as the size of data transfers. For most application developers, letting the compiler or runtime library decide the best algorithm proves more robust and higher performing. Fortran 2018 offers collective communication subroutines that allow compiler and runtime library teams to encapsulate efficient parallel algorithms for collective communication and distributed computation in a set of collective subroutines. These subroutines and other new parallel programming features are summarized in a technical specification, TS 18508, that the Fortran standards committee voted to incorporate into Fortran 2018. They enable the user to write a more efficient version of the above algorithm:
program Hello_World
  implicit none
  character(len=20) :: name[*] ! scalar coarray, one "name" for each image.
  ! Note: "name" is the local variable while "name[<index>]" accesses the
  ! variable in a specific image; "name[this_image()]" is the same as "name".
  ! Interact with the user on Image 1; execution on all other images passes by.
  if (this_image() == 1) then
    write(*,'(a)',advance='no') 'Enter your name: '
    read(*,'(a)') name
  end if
  ! Distribute information to all images
  call co_broadcast(name, source_image=1)
  ! I/O from all images, executing in any order, but each record written is intact.
  write(*,'(3a,i0)') 'Hello ', trim(name), ' from image ', this_image()
end program Hello_World
where the lack of explicit synchronization offers the potential for higher performance due to less coordination between the images. Furthermore, TS 18508 guarantees that "A transfer from an image cannot occur before the collective subroutine has been invoked on that image." This implies some partial synchronization inside co_broadcast, but it can still be higher performing than the sync all barrier in the prior example. TS 18508 also incorporates several other new features that address issues targeted by the CAF 2.0 effort described below; examples include teams of images and events.
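As a rough sketch of these Fortran 2018 features (program and variable names here are illustrative), images can be partitioned into teams with form team and change team, and events provide point-to-point signalling with event post and event wait:

program teams_events_sketch
  use, intrinsic :: iso_fortran_env, only: team_type, event_type
  implicit none
  type(team_type) :: half
  type(event_type) :: ready[*]   ! one event variable per image
  ! Split the images into two teams according to image-index parity.
  form team (1 + mod(this_image(), 2), half)
  change team (half)
    ! Inside this block, this_image() and num_images() refer to the new team.
    if (this_image() == 1) print *, 'a team of ', num_images(), ' images'
  end team
  ! Events: image 1 signals image 2, which waits before proceeding.
  if (this_image() == 1 .and. num_images() > 1) then
    event post (ready[2])
  else if (this_image() == 2) then
    event wait (ready)
  end if
end program teams_events_sketch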
An alternate perspective
In 2011, Rice University pursued an alternate vision of coarray extensions for the Fortran language. Their perspective is that the Fortran 2008 standard committee's design choices were shaped more by the desire to introduce as few modifications to the language as possible than to assemble the best set of extensions to support parallel programming. In their view, both Numrich and Reid's original design and the coarray extensions proposed for Fortran 2008 suffer from the following shortcomings:
- There is no support for processor subsets; for instance, coarrays must be allocated over all images.
- The coarray extensions lack any notion of global pointers, which are essential for creating and manipulating any kind of linked data structure.
- Reliance on named critical sections for mutual exclusion hinders scalable parallelism by associating mutual exclusion with code regions rather than data objects (see the sketch after this list).
- Fortran 2008's sync images statement does not provide a safe synchronization space. As a result, synchronization operations in a user's code that are pending when a library call is made can interfere with synchronization in the library call.
- There are no mechanisms to avoid or tolerate latency when manipulating data on remote images.
- There is no support for collective communication.
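The following minimal sketch (names are illustrative) contrasts the two mechanisms behind the criticism of critical sections above: a named critical construct attaches mutual exclusion to a region of code, whereas a lock_type variable attaches it to a data object:

program mutual_exclusion_sketch
  use, intrinsic :: iso_fortran_env, only: lock_type
  implicit none
  integer :: counter[*]               ! shared counter, updated on image 1
  type(lock_type) :: counter_lock[*]  ! lock object associated with the counter
  counter = 0
  sync all
  ! Named critical construct: exclusion is attached to this code region.
  bump: critical
    counter[1] = counter[1] + 1
  end critical bump
  sync all   ! keep the two demonstration sections in separate segments
  ! Lock variable: exclusion is attached to the data object instead.
  lock (counter_lock[1])
  counter[1] = counter[1] + 1
  unlock (counter_lock[1])
  sync all
  if (this_image() == 1) print *, 'counter = ', counter   ! 2*num_images()
end program mutual_exclusion_sketch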
To address these shortcomings, the Rice University group developed Coarray Fortran 2.0, a set of coarray-based extensions to Fortran whose features include:
- process subsets known as teams, which support coarrays, collective communication, and relative indexing of process images for pair-wise operations,
- topologies, which augment teams with a logical communication structure,
- dynamic allocation/deallocation of coarrays and other shared data,
- team-based coarray allocation and deallocation,
- global pointers in support of dynamic data structures,
- support for latency hiding and avoidance:
  - asynchronous copies,
  - asynchronous collective operations, and
  - function shipping,
- enhanced support for synchronization for fine-grain control over program execution:
  - safe and scalable support for mutual exclusion, including locks and lock sets,
  - events, which provide a safe space for point-to-point synchronization,
  - cofence, which forces local completion of asynchronous operations, and
  - finish, a barrier-like SPMD construct that forces completion of asynchronous operations across a team.