wiki:Packages/LWIP

LWIP

Lightweight IP (LwIP) is a small, open-source implementation of the TCP/IP protocol suite.

Status: No active volunteers.

Project mainline repository: git://git.savannah.nongnu.org/lwip.git

The RTEMS Source Builder configuration files lwip-1-1.cfg and lwip-1.cfg are located in the following RSB locations:

https://git.rtems.org/rtems-source-builder/tree/rtems/config/net

https://git.rtems.org/rtems-source-builder/tree/source-builder/config/lwip-1.cfg

Update the RTEMS port of LwIP. Provide reproducible test cases so we can be sure it stays in working condition. See the discussion on PR1712.

A goal of this effort is to be able to use the standard TCP/IP NIC drivers already in RTEMS. If this is feasible, then a user could switch between the full-featured BSD TCP/IP stack in RTEMS or the LWIP stack to save memory at the expense of reduced functionality.

The work would consist of the following steps:

  1. Start with the current LwIP source.
  2. Update the port for current RTEMS and current LwIP.
  3. Make the port portable across CPU architectures and BSPs (this is usually not as hard as it sounds).
  4. Provide documentation and examples that run on at least QEMU so others can provide feedback.
  5. Ideally, use the standard BSD NIC drivers if at all technically possible, so that a driver written for one TCP/IP stack automatically works with both stacks in RTEMS.

Another LwIP porting attempt, with TMS570 and LPC17xx drivers and RTEMS, POSIX and system-less adaptation layers, is available from the uLAN project repository:

https://sourceforge.net/p/ulan/lwip-omk

The LwIP arch support for RTEMS in the lwip-omk port:

https://sourceforge.net/p/ulan/lwip-omk/ci/master/tree/ports/os/rtems/

This project's build is based on the OMK system, but for use in RTEMS it should be changed to something matching the standard RTEMS build system, even if that means losing configurability and multiple-target support. Driver selection should nevertheless be kept in some form. One option is to move the actual drivers into the RTEMS BSP directories, but since they can sometimes be reused across architectures, leaving them in the LwIP support tree may be the better option.

Another RTEMS LwIP port, by Ragunath, for the BeagleBone Black board:

http://ragustechblog.blogspot.cz/2015/08/rtems-beaglebone-black-with-lwip.html

git://github.com/ragunath3252/lwip-app.git

git://github.com/ragunath3252/lwip-nodrv

Old 2008 port: ftp://ftp.rtems.org/pub/rtems/people/joel/rtems-lwip/

LwIP Integration with RTEMS File Descriptors

Many POSIX applications expect sockets and file descriptors to be allocated from the same space and to be interchangeable in many functions (read, write, close, etc.), which allows fdopen, FILE streams, printf, etc. to be used on sockets. The original BSD-based RTEMS stack is fully integrated in this way, as is the new BSD-based stack. Some pointers and thoughts about this work follow.

It is necessary to write a bridge between LwIP's connection representation and RTEMS file descriptors. A template for this work can be found in

https://git.rtems.org/rtems/tree/cpukit/libnetworking/rtems/rtems_syscall.c

FDs for sockets are allocated in the function

int
socket (int domain, int type, int protocol)
{
        struct socket *so;
        int fd, error;

        /* ... */
        error = socreate(domain, &so, type, protocol, NULL);
        if (error == 0) {
                fd = rtems_bsdnet_makeFdForSocket (so);
                if (fd < 0)
                        soclose (so);
        }
        /* ... */
}

which calls rtems_bsdnet_makeFdForSocket(). That function can be copied unchanged for the LwIP bridge.

One option is to map the functions to lwip_socket(), lwip_recv(), etc. and maintain a translation from the RTEMS FD number to the LwIP FD number. The LwIP FD number can be kept in the iop->data1 member. See the function rtems_bsdnet_fdToSocket() in the original BSD-integrated RTEMS networking. The rtems_libio_t structure is defined in

https://git.rtems.org/rtems/tree/cpukit/include/rtems/libio.h

typedef struct rtems_libio_tt rtems_libio_t;

struct rtems_libio_tt {
  rtems_driver_name_t                    *driver;
  off_t                                   offset;    /* current offset into file */
  uint32_t                                flags;
  rtems_filesystem_location_info_t        pathinfo;
  uint32_t                                data0;     /* private to "driver" */
  void                                   *data1;     /* ... */
};

The same header file also defines the structure specifying the file operations:

struct _rtems_filesystem_file_handlers_r {
  rtems_filesystem_open_t open_h;
  rtems_filesystem_close_t close_h;
  rtems_filesystem_read_t read_h;
  rtems_filesystem_write_t write_h;
  rtems_filesystem_ioctl_t ioctl_h;
  rtems_filesystem_lseek_t lseek_h;
  rtems_filesystem_fstat_t fstat_h;
  rtems_filesystem_ftruncate_t ftruncate_h;
  rtems_filesystem_fsync_t fsync_h;
  rtems_filesystem_fdatasync_t fdatasync_h;
  rtems_filesystem_fcntl_t fcntl_h;
  rtems_filesystem_poll_t poll_h;
  rtems_filesystem_kqfilter_t kqfilter_h;
  rtems_filesystem_readv_t readv_h;
  rtems_filesystem_writev_t writev_h;
};

A simple implementation of this way of mapping RTEMS descriptors to LwIP ones can be found in the file

https://github.com/ppisa/rtems-devel/blob/master/rtems-omk-template/applwiptest/rtems_lwip_io.c

Because the LwIP sockaddr type can in general differ, and the AF_XXX and PF_XXX values differ between Newlib and LwIP, a layer is implemented to translate Newlib application types and defines to/from the LwIP ones:

https://github.com/ppisa/rtems-devel/blob/master/rtems-omk-template/applwiptest/rtems_lwip_int.h

https://github.com/ppisa/rtems-devel/blob/master/rtems-omk-template/applwiptest/rtems_lwip_sysdefs.c

This proof-of-concept solution does not provide select() integration or non-blocking support, but the RTEMS-provided telnet daemon runs against this implementation. The following object files were removed from librtemscpu.a to ensure that the LwIP-provided ones are used:

  ar d librtemscpu.a in_proto.o ip_fw.o ip_icmp.o ip_input.o \
                     main_netstats.o main_ping.o rtems_syscall.o \
                     tcp_debug.o tcp_input.o tcp_output.o tcp_subr.o \
                     tcp_timer.o tcp_usrreq.o udp_usrreq.o

This simple solution has the disadvantage that two tables are consulted in each operation function (RTEMS FD -> RTEMS iop data, LwIP FD -> LwIP connection state); even worse, both tables and their objects have to be allocated and freed separately. A better solution is to bypass the LwIP-provided FD allocation layer and use the API that works with the connection state directly through pointers to struct netconn. The LwIP FD API is in fact built on this lower-level API. The struct netconn based API implementation can be found in the file

http://git.savannah.gnu.org/cgit/lwip.git/tree/src/api/api_lib.c
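For orientation, a minimal TCP echo server against the netconn API looks roughly as follows. This is an untested sketch following recent lwIP signatures (where netconn_recv() returns the buffer through an out parameter); older lwIP versions return the netbuf directly, so adjust to the version in use.

```c
#include "lwip/api.h"   /* struct netconn, struct netbuf, err_t */

/* Sketch of a netconn-based TCP echo server (port 7). */
static void echo_thread(void *arg)
{
    struct netconn *conn, *newconn;
    err_t err;

    conn = netconn_new(NETCONN_TCP);
    netconn_bind(conn, IP_ADDR_ANY, 7);
    netconn_listen(conn);

    while ((err = netconn_accept(conn, &newconn)) == ERR_OK) {
        struct netbuf *buf;

        while (netconn_recv(newconn, &buf) == ERR_OK) {
            void *data;
            u16_t len;

            /* A netbuf can be chained; walk every fragment. */
            do {
                netbuf_data(buf, &data, &len);
                netconn_write(newconn, data, len, NETCONN_COPY);
            } while (netbuf_next(buf) >= 0);
            netbuf_delete(buf);
        }
        netconn_close(newconn);
        netconn_delete(newconn);
    }
}
```

An RTEMS bridge built on this level would store the struct netconn pointer directly in the iop, avoiding the second descriptor table entirely.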

An example of how to use this API can be found in the ReactOS operating system's LwIP integration, which maps the LwIP netconn API to the Windows handles API:

https://git.reactos.org/?p=reactos.git;a=blob;f=sdk/lib/drivers/lwip/src/rostcp.c

LibTCPSendCallback(void *arg)
LibTCPSend(PCONNECTION_ENDPOINT Connection, void *const dataptr, const u16_t len, u32_t *sent, const int safe)

The other part of the ReactOS LwIP integration can be found here:

https://git.reactos.org/?p=reactos.git;a=blob;f=sdk/lib/drivers/lwip/src/api/tcpip.c

tcpip_callback_with_block(tcpip_callback_fn function, void *ctx, u8_t block)

As another example, the mapping of the new RTEMS BSD stack to RTEMS cpukit FD handles can be found in the following file:

https://git.rtems.org/rtems-libbsd/tree/freebsd/sys/kern/sys_socket.c

As for locking, I expect that, at least initially, a global lock has to be taken at the start of each function referenced by the _rtems_filesystem_file_handlers_r operations. It is possible to relax this later to separate per-struct-netconn semaphores, but for small single-CPU targets that would cause higher overhead than a global networking lock, and for big systems the new BSD stack is used anyway.

As for the LwIP architecture, it uses a quite complex mechanism to queue all calls into the core to a single worker thread. The queuing is elaborate: a structure is allocated for each pending request, and for most operations these requests are processed fully sequentially. Even the LwIP authors see that there is no need to queue requests that are served sequentially, and a new experimental option, LWIP_TCPIP_CORE_LOCKING, has been introduced which allows the LwIP core to be protected by a single mutex/semaphore. This has the advantage that priority inheritance/boosting works between tasks and LwIP uses far fewer resources. But for a start I suggest experimenting with the default LwIP queuing mechanism.
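Enabling the experimental core-locking mode is a one-line change in the port's lwipopts.h; LWIP_TCPIP_CORE_LOCKING is a real lwIP configuration option, though how well it behaves depends on the lwIP version in use.

```c
/* lwipopts.h fragment: replace the per-request message queue with a
 * single core mutex. With this set, application-side API calls take
 * the core lock (LOCK_TCPIP_CORE()/UNLOCK_TCPIP_CORE()) instead of
 * posting a message to the tcpip thread for every operation. */
#define LWIP_TCPIP_CORE_LOCKING   1
```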

Last modified on Nov 19, 2018 at 8:41:35 PM