SHMEM is a family of parallel programming libraries providing one-sided, RDMA-based, parallel-processing interfaces for low-latency distributed-memory supercomputers. The SHMEM acronym was subsequently reverse-engineered to mean "Symmetric Hierarchical MEMory". Later the technology was extended to distributed-memory parallel computer clusters, and it is used as a parallel programming interface or as a low-level interface on which to build partitioned global address space systems and languages. "Libsma", the first SHMEM library, was created by Richard Smith at Cray Research in 1993 as a set of thin interfaces to access the CRAY T3D's inter-processor-communication hardware. SHMEM has since been implemented by Cray Research, SGI, Cray Inc., Quadrics, HP, GSHMEM, IBM, QLogic, Mellanox, and the Universities of Houston and Florida; there is also the open-source OpenSHMEM.

SHMEM laid the foundations for low-latency one-sided communication. After its use on the CRAY T3E, its popularity waned because few machines could deliver the near-microsecond latencies needed to keep its hallmark single-word communication efficient. With the advent of widely available sub-microsecond interconnects, SHMEM has been used to address the need for highly efficient, portable, parallel-communication methods for exascale computing.

Programs written using SHMEM are started on several computers connected by a high-performance network supported by the SHMEM library in use. Every computer runs one copy of the program; each copy is called a processing element (PE). PEs can ask the SHMEM library to perform remote memory-access operations, such as reading or writing data. These peer-to-peer operations are one-sided: no active cooperation from the remote process is needed to complete the transfer. Operations can act on short types such as bytes or words, or on longer datatypes such as arrays, which may be evenly strided or indexed. For short datatypes, SHMEM can perform atomic operations even on remote memory. Two different synchronization mechanisms are provided: task-control synchronization and functions that enforce memory fencing and ordering. SHMEM also offers several collective operations, which must be started by all participating PEs, such as reductions, broadcasts, and collects. Every PE has part of its memory declared as a "symmetric" segment; the rest is private. Only symmetric memory can be accessed in one-sided operations from remote PEs. Programmers can use static-memory constructs or the shmalloc/shfree routines to create objects with symmetric addresses that span the PEs.
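The following sketch illustrates the symmetric segment, a one-sided get, and a remote atomic, assuming an OpenSHMEM-1.0-style C API (start_pes, shmalloc, shmem_int_inc, shmem_long_get); the variables "counter" and "ring" are illustrative names, not part of any SHMEM interface.

#include <stdio.h>
#include <shmem.h>

static int counter = 0;                 /* static storage: symmetric, same address on every PE */

int main(void)
{
    start_pes(0);
    int me   = _my_pe();
    int npes = _num_pes();

    long *ring = (long *) shmalloc(sizeof(long));            /* symmetric heap object */
    *ring = me;

    shmem_barrier_all();                /* all PEs have initialised their data */

    shmem_int_inc(&counter, 0);         /* remote atomic: every PE increments the counter on PE 0 */

    long neighbour;
    shmem_long_get(&neighbour, ring, 1, (me + 1) % npes);    /* one-sided read of the right neighbour */

    shmem_barrier_all();                /* remote updates are complete */

    if (me == 0)
        printf("counter on PE 0 = %d, PE 0's neighbour holds %ld\n", counter, neighbour);

    shfree(ring);
    return 0;
}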
Typical SHMEM functions
start_pes - start N processing elements
_my_pe - ask SHMEM to return the PE identifier of the calling process
shmem_barrier_all - block until all PEs reach the barrier, then let them all proceed
shmem_put - write data of length "length" to the remote address "target" on PE with id "pe" from local address "source"
shmem_get - read data of length "length" from the remote address "source" on PE with id "pe" and store the read values at local address "target"
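A minimal complete program using the calls listed above, again assuming the OpenSHMEM-1.0-style C API; the typed shmem_long_put and shmem_long_get stand in for the generic shmem_put and shmem_get, and the ring-exchange pattern is only an example.

#include <stdio.h>
#include <shmem.h>

#define N 4

static long src[N];                    /* symmetric arrays: exist at the same address on every PE */
static long dst[N];

int main(void)
{
    start_pes(0);                      /* start the processing elements */
    int me    = _my_pe();              /* identifier of this PE */
    int npes  = _num_pes();
    int right = (me + 1) % npes;

    for (int i = 0; i < N; i++)
        src[i] = me * 100 + i;

    shmem_barrier_all();               /* everyone has filled src */

    shmem_long_put(dst, src, N, right);        /* one-sided write into the right neighbour's dst */

    long pulled[N];
    shmem_long_get(pulled, src, N, right);     /* one-sided read of the right neighbour's src */

    shmem_barrier_all();               /* all puts are complete and visible */

    printf("PE %d: dst[0]=%ld (put by PE %d), pulled[0]=%ld (get from PE %d)\n",
           me, dst[0], (me - 1 + npes) % npes, pulled[0], right);
    return 0;
}

Such a program is typically built with an implementation's compiler wrapper (for example oshcc) and started with its launcher (for example oshrun -np 4 ./a.out); the exact commands vary between SHMEM implementations.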
In its first years SHMEM was available only on certain Cray Research machines equipped with special networks, which limited the library's spread and created vendor lock-in. SHMEM was not defined as a standard, so other vendors created several incompatible variants of SHMEM libraries. These libraries had different include-file names, different management-function names for starting PEs or getting the current PE id, and some functions were changed or not supported. Some SHMEM routines were designed around CRAY T3D architectural limitations; for example, reductions and broadcasts could be started only on subsets of PEs whose size is a power of two. Variants of SHMEM libraries can run on top of any MPI library, even when a cluster has only non-RDMA Ethernet, but performance is then typically worse than on RDMA-capable interconnects. Memory in the symmetric region must be allocated with special functions, not with the system malloc. SHMEM is available only for C and Fortran. Many of these disadvantages have been overcome with the use of OpenSHMEM on popular sub-microsecond interconnects, driven by exascale development.