Changes between Version 8 and Version 9 of Packages/LWIP


Timestamp: Mar 20, 2016, 12:32:55 AM
Author: Pavel Pisa
Comment: LwIP use RTEMS native file descriptors thoughts


'''Status:''' No active volunteers.

Project mainline repository: git://git.savannah.nongnu.org/lwip.git

Current Port: ftp://ftp.rtems.org/pub/rtems/people/joel/rtems-lwip/

 1. Provide documentation and examples that run on at least [http://www.rtems.org/wiki/index.php/QEMU qemu] so others can provide feedback.
 1. Ideally use the standard BSD NIC drivers if at all technically possible, so that when you write a driver for one TCP/IP stack, you automatically get it for both stacks in RTEMS.

Another LwIP porting attempt, with TMS570 and LPC17xx drivers and with RTEMS, POSIX, and systemless adaptation layers, is available from the uLAN project repository:

https://sourceforge.net/p/ulan/lwip-omk

This project's build is based on the OMK system, but for use in RTEMS it should be changed to something matching the RTEMS standard build system (even if configurability and multiple-target-system support are lost, the driver selection should be kept somehow). One option is to move the actual drivers to RTEMS BSP directories, but since they can sometimes be reused across architectures, leaving them in the LwIP support tree may be the better option.

== LwIP Integration with RTEMS File Descriptors ==

Many POSIX applications expect that sockets and file descriptors are allocated from the same space and can be used interchangeably in many functions (read, write, close, etc.), which allows fdopen(), FILE streams, printf(), etc. to be used on sockets. The original BSD-based RTEMS stack is fully integrated this way, as is the new BSD-based stack. Some pointers and thoughts about this work follow.

It is necessary to write a bridge between the LwIP connection representation and RTEMS file descriptors. A template for this work can be found in

https://git.rtems.org/rtems/tree/cpukit/libnetworking/rtems/rtems_syscall.c

FDs for sockets are allocated in the function

{{{
int
socket (int domain, int type, int protocol)
{
        struct socket *so;
        int fd, error;

        error = socreate(domain, &so, type, protocol, NULL);
        if (error == 0) {
                fd = rtems_bsdnet_makeFdForSocket (so);
                if (fd < 0)
                        soclose (so);
        }
        /* ... */
}
}}}

which calls the function rtems_bsdnet_makeFdForSocket(). That function can be copied unchanged for the LwIP bridge.

One option is to map the operations to lwip_socket(), lwip_recv(), etc. and keep a translation from the RTEMS FD number to the LwIP FD number. The LwIP FD number can be kept in the iop->data1 member. See the function rtems_bsdnet_fdToSocket() in the original BSD-integrated RTEMS networking. The rtems_libio_t structure is defined in

https://git.rtems.org/rtems/tree/cpukit/libcsupport/include/rtems/libio.h

{{{
typedef struct rtems_libio_tt rtems_libio_t;

struct rtems_libio_tt {
  rtems_driver_name_t                    *driver;
  off_t                                   offset;    /* current offset into file */
  uint32_t                                flags;
  rtems_filesystem_location_info_t        pathinfo;
  uint32_t                                data0;     /* private to "driver" */
  void                                   *data1;     /* ... */
};
}}}

The same header file also defines the structure specifying the file operations:

{{{
struct _rtems_filesystem_file_handlers_r {
  rtems_filesystem_open_t open_h;
  rtems_filesystem_close_t close_h;
  rtems_filesystem_read_t read_h;
  rtems_filesystem_write_t write_h;
  rtems_filesystem_ioctl_t ioctl_h;
  rtems_filesystem_lseek_t lseek_h;
  rtems_filesystem_fstat_t fstat_h;
  rtems_filesystem_ftruncate_t ftruncate_h;
  rtems_filesystem_fsync_t fsync_h;
  rtems_filesystem_fdatasync_t fdatasync_h;
  rtems_filesystem_fcntl_t fcntl_h;
  rtems_filesystem_poll_t poll_h;
  rtems_filesystem_kqfilter_t kqfilter_h;
  rtems_filesystem_readv_t readv_h;
  rtems_filesystem_writev_t writev_h;
};
}}}

But this simple solution has the disadvantage that two tables (RTEMS FD -> RTEMS iop data, LwIP FD -> LwIP connection state) are consulted in each operation function. Even worse, both tables and objects have to be allocated and freed separately. A better solution is to skip the FD allocation layer provided by LwIP and directly use the API that works with the connection state through pointers to struct netconn. The LwIP FD API is in fact based on this lower-level API. The struct netconn based API implementation can be found in the file

http://git.savannah.gnu.org/cgit/lwip.git/tree/src/api/api_lib.c

An example of how to use this API can be found in the ReactOS operating system's LwIP integration, which maps the LwIP netconn API to the Windows handle API:

https://git.reactos.org/?p=reactos.git;a=blob;f=reactos/lib/drivers/lwip/src/rostcp.c

{{{
LibTCPSendCallback(void *arg)
}}}
{{{
LibTCPSend(PCONNECTION_ENDPOINT Connection, void *const dataptr, const u16_t len, u32_t *sent, const int safe)
}}}

The other part of the ReactOS LwIP integration can be found here:

https://git.reactos.org/?p=reactos.git;a=blob;f=reactos/lib/drivers/lwip/src/api/tcpip.c

{{{
tcpip_callback_with_block(tcpip_callback_fn function, void *ctx, u8_t block)
}}}

The mapping of the new RTEMS BSD stack to RTEMS CPUkit FD handles can be found in the next file, as another example:

https://git.rtems.org/rtems-libbsd/tree/freebsd/sys/kern/sys_socket.c

As for locking, I expect that at the beginning there has to be a global lock taken at the start of each function pointed to by the _rtems_filesystem_file_handlers_r operations. It is possible to relax this later to separate per-struct-netconn semaphores. But for small single-CPU targets that would cause higher overhead than a global networking lock, and for big systems the new BSD stack is used anyway.

As for the LwIP architecture, it uses a quite complex mechanism that queues all calls into the core to a single worker thread. The queuing is quite involved: a structure is allocated for each pending request, and for most operations these requests are processed fully sequentially. Even the LwIP authors see that there is no need to queue requests that are served sequentially, so a new experimental option LWIP_TCPIP_CORE_LOCKING has been introduced which allows the LwIP core to be protected by a single mutex/semaphore. This has the advantage that priority inheritance/boosting works between tasks and much fewer resources are used by LwIP. But for a start I suggest sticking with the default LwIP queuing mechanism for now.
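
For reference, that option is selected in the port's lwipopts.h. A sketch keeping the suggested default (only the macro name comes from LwIP; the comments are mine):

```c
/* lwipopts.h fragment (sketch): keep the default message-queue based
 * tcpip_thread mechanism for now, as suggested above. */
#define LWIP_TCPIP_CORE_LOCKING 0   /* 0 = queued requests to tcpip_thread */

/* Flipping this to 1 later enables the experimental mode where the
 * LwIP core is protected by a single mutex instead of request queuing:
 * #define LWIP_TCPIP_CORE_LOCKING 1
 */
```

Switching modes needs no bridge-side API changes, since the netconn calls stay the same either way.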