
As the outcome of our survey, we have to select a project to spend the remaining 4 months on. This is a list of such proposals.

All the languages we have investigated so far (Accelerate and Nikola) seem to implement some kind of fusion and to avoid uncoalesced memory accesses.

Branch divergence

The paper "Financial Software on GPUs: Between Haskell and Fortran" presents a technique for limiting branch divergence. A project could involve implementing sequential loops in Accelerate and limiting branch divergence using this technique; it could also be implemented in Nikola or in a small toy language. A sketch of the general idea is given below.
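
To illustrate what limiting branch divergence amounts to, here is a minimal sketch in plain Haskell (not Accelerate) of one common approach, if-conversion: a data-dependent branch is replaced by evaluating both arms and selecting the result arithmetically, so all threads in a warp follow the same instruction path. The function names and the example computation are made up for illustration and are not taken from the paper.

```haskell
-- Divergent form: on a GPU, threads in a warp that disagree on (x > 0)
-- would execute the two arms of the conditional serially.
step :: Float -> Float
step x = if x > 0 then sqrt x else x * x

-- If-converted form: both arms are evaluated unconditionally and the
-- result is selected arithmetically, so every thread follows the same
-- instruction path. Only valid when both arms are cheap and total.
stepNoBranch :: Float -> Float
stepNoBranch x = sel * sqrt (abs x) + (1 - sel) * (x * x)
  where
    sel = fromIntegral (fromEnum (x > 0))  -- 1.0 when x > 0, else 0.0

main :: IO ()
main = print (map step xs == map stepNoBranch xs)  -- prints True
  where xs = [-2, -1, 0, 1, 4]
```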

Strength reduction

The same paper also explains strength reduction. This project would be the same as the one above, just for strength reduction instead of branch divergence; the classic transformation is sketched below as a reminder.
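
A minimal sketch in plain Haskell (not Accelerate or Nikola) of classic strength reduction on a sequential loop: the per-iteration multiplication is replaced by an extra loop variable updated with a cheaper addition. The function names are made up purely for illustration.

```haskell
-- Before: every iteration performs the multiplication  i * stride.
sumStrided :: Int -> Int -> Int
sumStrided n stride = go 0 0
  where
    go i acc
      | i >= n    = acc
      | otherwise = go (i + 1) (acc + i * stride)

-- After: the product  i * stride  is carried as an extra loop variable
-- (offset) and updated with an addition each iteration.
sumStridedSR :: Int -> Int -> Int
sumStridedSR n stride = go 0 0 0
  where
    go i offset acc
      | i >= n    = acc
      | otherwise = go (i + 1) (offset + stride) (acc + offset)

main :: IO ()
main = print (sumStrided 10 3 == sumStridedSR 10 3)  -- prints True
```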

GPU backend for Feldspar

We have previously talked about the possibility of a GPU backend for Feldspar, but as we have excluded Feldspar from our survey, we do not really know how feasible this would be.

Flattening transformation

Perform the flattening transformation in Accelerate or Nikola. A sketch of the idea, on plain nested lists, is given below.
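
To make the proposal concrete, here is a minimal sketch of the idea behind the flattening (vectorisation) transformation, using plain Haskell lists rather than Accelerate or Nikola arrays: a computation over an irregular nested structure is turned into one flat data array plus a segment descriptor, and the nested map becomes a segmented reduction. Names are ours for illustration; the real transformation would operate on the DSL's array programs.

```haskell
-- Nested form: a map over an irregular nested structure. This shape of
-- parallelism does not map directly onto flat GPU arrays.
nestedSums :: [[Int]] -> [Int]
nestedSums = map sum

-- Flattened representation: one flat data array plus a segment
-- descriptor giving the length of each inner array.
flattenNested :: [[Int]] -> ([Int], [Int])
flattenNested xss = (concat xss, map length xss)

-- The nested map of sums becomes a single segmented reduction over the
-- flat array, which is a flat data-parallel operation.
segmentedSums :: ([Int], [Int]) -> [Int]
segmentedSums (flat, segs) = go flat segs
  where
    go _  []     = []
    go xs (n:ns) = let (seg, rest) = splitAt n xs
                   in sum seg : go rest ns

main :: IO ()
main = print (nestedSums xss == segmentedSums (flattenNested xss))  -- prints True
  where xss = [[1,2,3], [], [4,5], [6]]
```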

Survey "VectorMARK" in progress
