DirectFS Expedites File System Access to Fusion ioMemory; Interview with Fusion-io Sr Director of Product Mgt Brent Compton Part III

By Ben Maas | May 23, 2012 | DCIG, Storage Systems

Fusion-io is luring developers to its ioMemory platform with some impressive new features. We have already looked at the new APIs the Fusion-io ioMemory SDK offers for block IO and key-value stores. In today's conversation, Brent Compton, Fusion-io's Senior Director of Product Management, describes the native file system service that Fusion-io exposes in its ioMemory SDK and some of the technical aspects of how it works.

Ben: The first area was the direct-access IO API. The second included native key-value store API libraries. What’s next?

Brent: The third area is what Fusion-io refers to as directFS, which is a native file access layer. Again, we are not really inventing a whole bunch of new technology from scratch. We are harnessing what has been inside of our ioMemory flash translation layer for years but has not yet been exposed through APIs to outside developers.

The underlying native characteristics being harnessed through this directFS native file access layer are logical-to-physical block mapping along with block allocation and recycling. These functions are inherently part of the ioMemory subsystem.

These functions, of course, constitute a lot of the heavy lifting occurring inside of today’s file systems. We have a significant portion of these functions already at work in enterprise grade deployments inside our ioMemory subsystem.

Extending these functions to provide a POSIX compliant native file access layer allows people to interact with ioMemory natively through file access semantics. They can then create and work with files in various ways while bypassing the standard file system and operating system layers, both of which were built and tuned for rotating disks.
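As a concrete illustration of what POSIX compliance implies in practice, here is a minimal C sketch: ordinary open/write/fsync calls work unchanged against files that live on a directFS volume. The mount point below is a hypothetical example for illustration, not a documented default.

```c
/* Minimal sketch: because directFS is POSIX compliant, ordinary file
 * system calls work against it unchanged. The mount point below
 * (/mnt/directfs) is a hypothetical example, not a documented default. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/directfs/example.dat";  /* hypothetical mount */
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char msg[] = "written through directFS with plain POSIX calls\n";
    if (write(fd, msg, sizeof msg - 1) < 0)
        perror("write");

    /* fsync() is still the durability barrier; under directFS the request
     * maps onto ioMemory's native allocation and mapping functions rather
     * than a disk-tuned kernel file system. */
    if (fsync(fd) < 0)
        perror("fsync");

    close(fd);
    return 0;
}
```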

Finally, in addition to these direct-access IO APIs and native file-system services, the ioMemory SDK also contains a whole series of configuration and monitoring APIs as well.

Ben: A technical question for you. When the write happens, you mentioned that there is an “A-prime.” You are writing to a new block, not to the original block, so that you are not wearing out the disk. Is that correct?

Brent: Correct. It's all part of the wear-leveling algorithms that are in place inside of the flash translation layer. The log structured writes, or log append writes, are a commonly used mechanism. It's one part of what makes the ioMemory flash translation layer so powerful. We turn flash media into enterprise grade quality through methods like wear leveling.
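To make the "A-prime" idea concrete, the toy C model below sketches the log-append pattern: a rewrite of a logical block always lands in a fresh physical block, and the mapping table is updated so the old copy can be recycled later. This is an illustration of the general technique only, not Fusion-io's actual flash translation layer code, and garbage collection of stale blocks is omitted.

```c
/* Toy in-RAM model of log-append writes with a logical-to-physical map.
 * A rewrite of logical block 3 goes to a new physical block ("A-prime")
 * instead of overwriting in place; the old copy is marked stale. */
#include <stdio.h>

#define NUM_BLOCKS 8
#define UNMAPPED   -1

static int l2p[NUM_BLOCKS];     /* logical -> physical map */
static int stale[NUM_BLOCKS];   /* physical blocks awaiting recycling */
static int next_free = 0;       /* append point of the log */

static void ftl_write(int logical)
{
    int old   = l2p[logical];
    int fresh = next_free++ % NUM_BLOCKS;  /* always append, never overwrite
                                              (a real FTL would also reclaim
                                              stale blocks before wrapping) */
    l2p[logical] = fresh;
    if (old != UNMAPPED)
        stale[old] = 1;         /* old copy becomes garbage to collect */

    printf("logical %d -> physical %d (old %d marked stale)\n",
           logical, fresh, old);
}

int main(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++) l2p[i] = UNMAPPED;

    ftl_write(3);   /* first write of block 3 */
    ftl_write(3);   /* rewrite: lands in a new physical block, "A-prime" */
    return 0;
}
```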

Ben: Some of your new primitives are similar to, and in some cases a replacement for, mmap. Just to be clear, the program code itself is still running in RAM.

Brent: Yes, the compiled application code is still running in RAM and the host CPU is managing the process virtual address space as usual. The most succinct way to describe running natively on ioMemory is that the application, when it invokes any of these APIs, is interacting directly with the ioMemory device instead of going through the various block IO layers provided by the operating system.

Then, to your point, one of those mechanisms for interacting with ioMemory is memory-access semantics. Checkpointed Memory takes the concept of anonymous mmap and extends it by giving it durability properties such as guaranteed persistence.
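For readers unfamiliar with the baseline being extended, the short C example below shows plain anonymous mmap, whose contents vanish when the process exits. Per Brent, Checkpointed Memory's contribution is adding durability to this memory-access model; the code shows only the standard volatile starting point, not the SDK's API.

```c
/* Baseline that Checkpointed Memory extends: a plain anonymous mmap gives
 * an application a region of memory addressed with loads and stores, but
 * the region is volatile and disappears with the process. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* With plain anonymous mmap this write is gone when the process exits;
     * a checkpointed variant lets the application request that the
     * region's contents survive. */
    strcpy(region, "state held with memory-access semantics");
    printf("%s\n", region);

    munmap(region, len);
    return 0;
}
```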

That’s why from the beginning Fusion-io has always described itself as ioMemory, and we’ve always gotten a little frustrated when the industry has lumped us in with SSDs. From the beginning our architecture has always been constructed to allow this hybrid memory / storage interaction.

Now with the ioMemory SDK, we are able to illustrate more clearly why our architecture is different. SSD vendors at the other end of the traditional SSD spectrum must plumb application I/O through the virtual file system layer, the file system, the buffer cache, the kernel block layer, and the SAS or SATA protocol, across the wire, and through a RAID controller before it reaches the SSD.

They are just in the wrong place architecturally to provide some of these native access capabilities.

Ben: As an example, I can store objects directly into that space; there's no exporting to XML to save the state of an object, correct? I simply store it directly in ioMemory. The next time I load the program it is just sitting there, right?

Brent: Exactly. That is done through the use of the Auto Commit Memory API.
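The Auto Commit Memory interface itself is not shown in this interview, so the sketch below uses ordinary POSIX file-backed mmap and msync as a stand-in for the pattern Ben describes: the C struct lives directly in a persistent region, is mutated in place, and is simply there again on the next run, with no XML or other serialization step. The file name and struct are invented for illustration.

```c
/* Runnable stand-in (file-backed mmap/msync, not the actual Auto Commit
 * Memory API) for persisting an object with no serialization step: the
 * struct is mapped, mutated in place, and committed durably. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct app_state {
    long run_count;   /* survives across program runs */
};

int main(void)
{
    int fd = open("state.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, sizeof(struct app_state)) < 0) {
        perror("ftruncate");
        return 1;
    }

    struct app_state *state = mmap(NULL, sizeof *state,
                                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (state == MAP_FAILED) { perror("mmap"); return 1; }

    state->run_count++;                    /* mutate the object in place... */
    msync(state, sizeof *state, MS_SYNC);  /* ...and commit it durably */

    printf("program has run %ld time(s)\n", state->run_count);

    munmap(state, sizeof *state);
    close(fd);
    return 0;
}
```

Run it twice and the counter picks up where it left off; the object's state was never exported, only mapped back in.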

In Part I of our interview series, we discussed how the Fusion-io ioMemory SDK will help to unleash the next-generation properties of flash.

In Part II of this interview series, Brent continued to discuss the primitives that developers will have access to, including atomic multi-block writes. We also discussed how familiar the API will be to developers.


In the fourth and final blog entry in this series, Brent and I will discuss the semantics of using the API in the C language and how Fusion-io is leveraging its early access partnerships.


About Ben Maas

Senior Analyst for DCIG. Linux Kool-Aid Drinker. Twins Groupie. Fascinated by anything with silicon wafers.
