That's cool, but I think the proper solution is a Linux kernel module that reserves GPU RAM via DRM to back a ramdisk, not a userspace filesystem built on OpenCL.<p>That would give proper page-cache integration, direct mmap support if desired, a reliable, correct, and concurrent filesystem (as opposed to this author's "all of the FUSE callbacks share a mutex to ensure that only one thread is mutating the file system at a time"), etc.
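The single-mutex design the comment quotes is not the only userspace option either: per-path locking lets unrelated files be mutated concurrently. A minimal sketch of the idea (not the author's code; `write_handler` and the `store` dict are hypothetical stand-ins for a FUSE write callback and its backing store):

```python
import threading
from collections import defaultdict

class LockTable:
    """Per-path locks so unrelated files can be mutated concurrently,
    instead of one global mutex serializing every FUSE callback."""
    def __init__(self):
        self._guard = threading.Lock()             # protects the table itself
        self._locks = defaultdict(threading.Lock)  # path -> lock

    def lock_for(self, path):
        with self._guard:                          # short critical section
            return self._locks[path]

table = LockTable()
store = {}

def write_handler(path, data):
    # Hypothetical FUSE write callback body: only writers to the SAME
    # path contend; writes to different paths proceed in parallel.
    with table.lock_for(path):
        store[path] = store.get(path, b"") + data

threads = [threading.Thread(target=write_handler, args=(f"/f{i % 2}", b"x"))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each of the two paths received four 1-byte appends.
print(sorted((k, len(v)) for k, v in store.items()))
```

Appends to the same path are serialized by that path's lock, so no writes are lost, while the two paths never block each other.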
On the topic of coercing bits into functioning as data storage:
Harder Drive ( <a href="http://tom7.org/harder/" rel="nofollow">http://tom7.org/harder/</a> )
What is the overhead of a FUSE filesystem compared to an in-kernel implementation? Could something like eBPF be used to build a faster FUSE-like filesystem driver?
Somewhat related, there is NVIDIA GPUDirect Storage[0], which provides an API (cuFile) for efficient file transfer between GPU memory and local storage. Always wanted to give it a try but haven't yet.<p>[0]: <a href="https://docs.nvidia.com/gpudirect-storage/index.html" rel="nofollow">https://docs.nvidia.com/gpudirect-storage/index.html</a>
If you want a RAM-backed filesystem, why would you use GPU VRAM? CPU<->GPU copy speeds are not great.<p>I have 192GB of CPU RAM in my desktop, and that was cheap to obtain. Absolute best build decision ever.