#1848 closed enhancement

Add driver and file system sync support.

Reported by: Chris Johns Owned by: Chris Johns
Priority: normal Milestone: 4.11
Component: fs Version: 4.11
Severity: minor Keywords:
Cc: sebastian.huber@… Blocked By:
Blocking:

Description (last modified by Chris Johns)

Currently there is no API in RTEMS to sync a libblock device given the dev_t details obtained via a stat of the /dev entry. This PR documents a solution that allows a user to issue a sync call on a device and have the files open on that device, the file system, and libblock flushed. The RFS is given as an example file system.
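For illustration, a user level sync of just the libblock side can be expressed as the sketch below, assuming the RTEMS_BLKIO_SYNCDEV ioctl from <rtems/blkdev.h>; the point of this ticket is to extend such a sync so that the open files and the file system are flushed as well. The device path is a hypothetical example.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <rtems/blkdev.h>

/* Sketch only: flush everything libblock holds for the block device
 * behind "path". Assumes the RTEMS_BLKIO_SYNCDEV ioctl provided by
 * <rtems/blkdev.h>; "/dev/rda1" below is a hypothetical device node. */
static int sync_block_device(const char *path)
{
  int rv;
  int fd = open(path, O_RDWR);

  if (fd < 0)
    return -1;

  rv = ioctl(fd, RTEMS_BLKIO_SYNCDEV);
  close(fd);
  return rv;
}

/* Usage: sync_block_device("/dev/rda1"); */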

The RFS has a complex layering of meta-data management to help it achieve the performance it does. The flow of data is:

user ----------------[sync]-------------------------+     APP
-----------------------------------------------------|-----------------
   +-> <fd>                                          |     LIBCSUPPORT
   |    |                                            |
<sync>--+                                            |
   |    |                                            |
   |    +->[iop]                                     |
---|-------|-----------------------------------------|-----------------
   |       +->[handle]                               |     RFS
   |       |                                         |
   |       +->[shared]                               |
   |       |                                         |
   |       +->[inode]                                |
<sync>     |                                         |
   |       +->[buffer handle]                        |
   |       |                                         |
   |       +->[recent list]<------------+            |
---|-------|----------------------------|------------|-----------------
   |       |                            |            |     LIBBLOCK
   |       +->[AVL tree]<---------------+---+        |
   |       |                                |        |
   |       +->[swap out]--------------------+        |
---|-------|-----------------------------------------|-----------------
   |       |                                         |     DRIVER
   |       +->[driver]<------------------------------+
   |           |
   +-----------+

When a buffer is released it is held in the recent list in the buffering layer of the RFS until pressure releases it to libblock. Libblock holds that buffer for a configured period of time before passing it to the swap out thread, which passes it to the driver. If the buffer is requested before it reaches the driver it is passed back to the file system. I do not think the timer is reset, so the next time the file system releases the buffer it should take a shorter time to be queued onto a swap out worker thread. Therefore constant use of a buffer keeps it away from the driver, as the user takes priority.
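As a rough illustration of the two paths a modified buffer can take, here is a sketch using the libblock buffer calls rtems_bdbuf_read, rtems_bdbuf_release_modified and rtems_bdbuf_sync. The signatures follow the later API that passes a rtems_disk_device pointer; older versions take a dev_t, so treat the details as approximate.

#include <string.h>
#include <rtems.h>
#include <rtems/blkdev.h>
#include <rtems/bdbuf.h>

/* Illustrative only: modify one block and either defer or force the
 * write-back. The caller ensures "size" fits within the block. */
rtems_status_code update_block(rtems_disk_device *dd,
                               rtems_blkdev_bnum  block,
                               const void        *data,
                               size_t             size,
                               bool               force)
{
  rtems_bdbuf_buffer *bd;
  rtems_status_code   sc;

  sc = rtems_bdbuf_read(dd, block, &bd);
  if (sc != RTEMS_SUCCESSFUL)
    return sc;

  memcpy(bd->buffer, data, size);

  if (force) {
    /* Queue the buffer for transfer now and wait until it has reached
     * the driver, bypassing the recent list / swap out timer path. */
    return rtems_bdbuf_sync(bd);
  }

  /* Normal path: mark it modified and let the swap out thread write it
   * back after the configured hold time. */
  return rtems_bdbuf_release_modified(bd);
}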

The RFS maintains the directory inode data in the shared structure until you close the file. The block bit maps that define the file mapping on the disk are not held by the RFS, so they should be OK; they are slow moving unless you are writing huge amounts of data, in which case you tend to fill the bit map quickly.

The conflicting requirements are the need to keep performance high in the case of many small byte writes to a file, and the ability to snapshot the data.

A rewrite of libcsupport for files would provide reference counting on the iop type data as well as locking. I have drawn a user sync call to a device. The driver would call back into the file system for that device; the file system would call flush in the libcsupport layer for each file it has open, flush internally, then sync libblock. The file system would have to hook the driver directly. I think this can be done in the generic layer of the driver.
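To make the intent concrete, here is a hypothetical sketch of such a call-back interface: the generic device layer keeps a per-device sync handler that a mounted file system registers, and the handler flushes open files, flushes the file system's own meta-data, then syncs libblock. Only rtems_bdbuf_syncdev is an existing libblock call (its signature varies between RTEMS versions); every other name is invented for illustration.

#include <sys/types.h>
#include <rtems.h>
#include <rtems/diskdevs.h>
#include <rtems/bdbuf.h>

/* Hypothetical per-device sync hook, registered by the file system when
 * it mounts the device, and invoked by the driver's generic layer when
 * the user syncs that device. */
typedef rtems_status_code (*device_sync_handler)(void *fs_context);

/* Proposed registration call (hypothetical, not an RTEMS API). */
rtems_status_code device_register_sync(dev_t dev,
                                       device_sync_handler handler,
                                       void *fs_context);

/* Minimal stand-in for the file system's mount state (hypothetical). */
typedef struct {
  rtems_disk_device *disk;
} example_fs;

static void example_fs_flush_open_files(example_fs *fs)
{
  (void) fs;  /* would walk the open iops and flush each one */
}

static void example_fs_flush_metadata(example_fs *fs)
{
  (void) fs;  /* would write back inode and shared directory data */
}

/* The handler the file system registers: flush open files, flush its own
 * meta-data, then push everything held by libblock out to the driver. */
static rtems_status_code example_fs_sync(void *fs_context)
{
  example_fs *fs = fs_context;

  example_fs_flush_open_files(fs);
  example_fs_flush_metadata(fs);

  return rtems_bdbuf_syncdev(fs->disk);
}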

Another part of the solution is to enable a snapshot feature in libblock for file systems. Here the file system is asked to snapshot its meta-data state. The RFS would need to add the ability to snapshot the inode data and commit it to disk, as well as flush the recent queue. This is not a big change. We would then need to change the swapout thread in libblock to call the registered file systems at a configurable rate so each one can snapshot its state.
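Again purely illustrative, a possible shape for the snapshot registration is sketched below, with the swapout thread periodically invoking the handler. All of the names and the period parameter are hypothetical; none of this is an existing libblock interface.

#include <rtems.h>

/* Hypothetical snapshot hook for libblock. The swapout thread would call
 * the registered handler roughly every "period" ticks so the file system
 * can commit its meta-data (for the RFS: the inode data and the recent
 * queue). Sketch only; no part of this interface exists in RTEMS. */
typedef void (*fs_snapshot_handler)(void *fs_context);

rtems_status_code bdbuf_snapshot_register(
  fs_snapshot_handler handler,
  void               *fs_context,
  rtems_interval      period
);

/* A file system would call bdbuf_snapshot_register() at mount time and
 * have the handler write its meta-data through the normal buffer path. */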

Change History (4)

comment:1 Changed on 03/13/12 at 11:53:49 by Sebastian Huber

Cc: Sebastian Huber added

comment:2 Changed on 11/20/14 at 04:39:41 by Chris Johns

Description: modified (diff)
Resolution: fixed
Status: new → closed

Fixed in 4.11.

comment:3 Changed on 11/20/14 at 04:41:05 by Chris Johns

Milestone: Not Assigned → 4.11

All fixed and closed on HEAD.

comment:4 Changed on 11/24/14 at 18:58:28 by Gedare Bloom

Version: HEAD → 4.11

Replace Version=HEAD with Version=4.11 for the tickets with Milestone >= 4.11
