Computational RAM


Computational RAM, or C-RAM, is random-access memory with processing elements integrated on the same chip. This enables C-RAM to be used as a SIMD computer. It can also make more efficient use of the wide memory bandwidth available inside a memory chip.
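As a rough illustration of the SIMD model, the following toy Python sketch gives each memory column its own one-bit processing element that executes a single broadcast instruction in lockstep. The column count and the bitwise operation are assumptions chosen for illustration, not taken from any particular chip:

```python
# Toy model of C-RAM's SIMD operation (illustrative assumptions only).
# One 1-bit processing element (PE) sits at each memory column; a
# single broadcast instruction is executed by every PE at once.

NUM_COLUMNS = 8  # hypothetical number of columns / PEs

def cram_bitwise_and(row_a, row_b):
    """Every PE reads one bit from each of two rows and ANDs them,
    all columns in the same 'cycle'."""
    return [a & b for a, b in zip(row_a, row_b)]

row_a = [1, 0, 1, 1, 0, 1, 0, 1]
row_b = [1, 1, 0, 1, 0, 0, 1, 1]
print(cram_bitwise_and(row_a, row_b))  # one broadcast step covers all 8 columns
```

The point of the sketch is that the work per instruction scales with the number of columns, while the data never leaves the chip.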

Overview

The most influential implementations of computational RAM came from The Berkeley IRAM Project. Vector IRAM combines DRAM with a vector processor integrated on the same chip.
Reconfigurable Architecture DRAM (RADram) is DRAM with reconfigurable computing (FPGA) logic elements integrated on the same chip.
SimpleScalar simulations show that RADram can give orders of magnitude better performance on some problems than traditional DRAM.
Some embarrassingly parallel computational problems are already limited by the von Neumann bottleneck between the CPU and the DRAM.
Some researchers expect that, for the same total cost, a machine built from computational RAM will run orders of magnitude faster than a traditional general-purpose computer on these kinds of problems.
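A back-of-envelope model makes the bottleneck concrete: for a memory-bound workload, runtime is dominated by moving data over the CPU-DRAM bus rather than by arithmetic. All figures below (data size, bus bandwidth, CPU throughput) are illustrative assumptions, not measurements:

```python
# Sketch of a von Neumann-bottlenecked workload. Every number here is
# an illustrative assumption, chosen only to show the shape of the math.

bytes_to_process = 1e9   # 1 GB of data, each byte touched once (assumed)
bus_bandwidth = 25e9     # off-chip bus, bytes per second (assumed)
ops_needed = 1e9         # one simple operation per byte (assumed)
cpu_ops_per_s = 100e9    # CPU peak throughput, ops per second (assumed)

t_mem = bytes_to_process / bus_bandwidth  # time to move the data
t_cpu = ops_needed / cpu_ops_per_s        # time to do the arithmetic
print(f"memory: {t_mem:.3f} s, compute: {t_cpu:.3f} s")
```

With these assumed numbers the bus takes four times longer than the arithmetic, so a faster CPU alone would not help; moving the processing elements next to the memory attacks the dominant term directly.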
As of 2011, the "DRAM process" and the "CPU process" are distinct enough that there are three approaches to computational RAM. One approach is to design CPUs to be built on a DRAM process technology; examples include the Berkeley IRAM project, TOMI Technology, and the AT&T DSP1.
Because a bus to off-chip memory has many times the capacitance of an on-chip memory bus, a system with separate DRAM and CPU chips can consume several times the energy of an IRAM system with the same performance.
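The capacitance argument can be made concrete with the standard switching-energy relation E = C·V² for one full-swing transition of a wire. The voltage and capacitance values below are illustrative assumptions, not figures from any specific system:

```python
# Rough per-bit switching energy, E = C * V**2 for one full-swing
# transition. All values are illustrative assumptions.

V = 1.2              # signal voltage in volts (assumed)
C_off_chip = 10e-12  # off-chip trace + pad capacitance, farads (assumed)
C_on_chip = 0.1e-12  # on-chip wire capacitance, farads (assumed)

e_off = C_off_chip * V ** 2
e_on = C_on_chip * V ** 2
ratio = e_off / e_on
print(f"off-chip: {e_off:.2e} J/bit, on-chip: {e_on:.2e} J/bit, ratio: {ratio:.0f}x")
```

Since the energy per bit scales linearly with capacitance, an off-chip bus with 100x the capacitance costs roughly 100x the energy per bit moved, which is the core of the efficiency claim for IRAM-style designs.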
Because computational DRAM is expected to run hotter than traditional DRAM, and because increased chip temperature causes faster charge leakage from the DRAM storage cells, computational DRAM is expected to require more frequent refresh.
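The refresh consequence can be sketched with the common rule of thumb that DRAM cell leakage roughly doubles for every ~10 °C rise in temperature, so the refresh interval must roughly halve. The base temperature and base interval below are illustrative assumptions:

```python
# Sketch of refresh interval vs. temperature, assuming leakage roughly
# doubles per ~10 degC rise (a common rule of thumb). The base values
# are illustrative assumptions, not a specification.

def refresh_interval_ms(temp_c, base_temp_c=45.0, base_interval_ms=64.0):
    """Halve the refresh interval for each 10 degC above base_temp_c."""
    return base_interval_ms / (2 ** ((temp_c - base_temp_c) / 10.0))

for t in (45, 55, 65):
    print(f"{t} degC -> refresh every {refresh_interval_ms(t):.1f} ms")
```

Under these assumptions, a computational DRAM running 20 °C hotter would need to refresh four times as often, spending correspondingly more bandwidth and energy on refresh.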

Processor-in-/near-memory

Processor-in- or near-memory refers to a computer processor tightly coupled to memory, generally on the same silicon chip.
The chief goal of merging the processing and memory components in this way is to reduce memory latency and increase bandwidth. In addition, reducing the distance that data needs to be moved reduces the power requirements of a system. Much of the complexity in current processors stems from strategies for avoiding memory stalls.

Examples

In the 1980s, a tiny CPU that executed FORTH was fabricated into a DRAM chip to speed up PUSH and POP. Since FORTH is a stack-oriented programming language, this improved its efficiency.
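To see why PUSH and POP dominate in FORTH, here is a minimal stack sketch; the class and its layout are illustrative assumptions, not the actual chip's design:

```python
# Minimal sketch of the stack traffic a FORTH core generates constantly.
# In the DRAM-integrated CPU described above, these accesses hit on-chip
# memory cells directly. This model is purely illustrative.

class ForthStack:
    def __init__(self):
        self.mem = []          # stands in for on-chip DRAM cells

    def push(self, value):     # FORTH-style PUSH: value goes on top
        self.mem.append(value)

    def pop(self):             # FORTH-style POP: take value off the top
        return self.mem.pop()

s = ForthStack()
s.push(2)
s.push(3)
print(s.pop() + s.pop())  # like the FORTH phrase "2 3 +", prints 5
```

Because nearly every FORTH word reads its operands from and writes its result to this stack, cutting the cost of each push and pop speeds up almost everything the language does.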
The transputer, made in the early 1980s, also had large on-chip memory for its era, making it essentially a processor-in-memory.
Notable PIM projects include the Berkeley IRAM project at the University of California, Berkeley, and the University of Notre Dame PIM effort.