wiki:Packages/LWIP
Notice: We have migrated to GitLab, launching 2024-05-01. See: https://gitlab.rtems.org/

Version 27 (modified by Kinsey Moore, on 09/29/22 at 17:49:54) (diff)

swap to master branch for the time being for consistency

LWIP

lwIP (Lightweight IP) is a small open-source TCP/IP protocol stack.

Status: rtems-lwip is currently under development.

Active volunteers: Vijay Kumar Banerjee, Kinsey Moore

Those looking to use rtems-lwip should use the "master" branch at: https://git.rtems.org/rtems-lwip

Current status of rtems-lwip:

  • Tested with the telnetd01 test on the BeagleBone Black.
  • Tested with the ZynqMP BSP on the ZCU102 and QEMU.
  • TMS570 drivers added, but not yet tested.
  • Work in progress: STM32 support is being added and will be pushed to the devel branch once it has been tested on hardware.
  • Currently on lwIP 2.1.3.

Source origin repositories are documented in the ORIGIN.* files in the rtems-lwip repository.

The RTEMS Source Builder configuration files lwip-1-1.cfg and lwip-1.cfg are located at the following RSB locations:

https://git.rtems.org/rtems-source-builder/tree/rtems/config/net

https://git.rtems.org/rtems-source-builder/tree/source-builder/config/lwip-1.cfg

Update the RTEMS port of LWIP. Provide reproducible test cases so we can be sure it stays in working condition. See discussion on #1712.

LwIP Integration with RTEMS File Descriptors

Many POSIX applications expect that sockets and file descriptors are allocated from the same space and can be used interchangeably in many functions (read, write, close, etc.), which allows fdopen, FILE streams, printf, etc. to be used on sockets. The original RTEMS BSD-based stack is fully integrated this way, as is the new BSD-based (libbsd) stack. Below are some pointers and thoughts about this work.

It is necessary to write a bridge between the LwIP connection representation and RTEMS file descriptors.

Historical Example

A template for this work can be found in:

https://git.rtems.org/rtems-net-legacy/tree/rtems/rtems_syscall.c

FDs for sockets are allocated in the following function (excerpted; the full code is in the file above):

int
socket (int domain, int type, int protocol)
{
        struct socket *so;
        int fd, error;
        /* ... */
        error = socreate(domain, &so, type, protocol, NULL);
        if (error == 0) {
                fd = rtems_bsdnet_makeFdForSocket (so);
                if (fd < 0)
                        soclose (so);
                /* ... */
        }
        /* ... */
}

which calls the function rtems_bsdnet_makeFdForSocket(). That function can be copied unchanged for the LwIP bridge.
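For illustration, the job rtems_bsdnet_makeFdForSocket() performs can be sketched as a tiny descriptor table that binds a socket object to a free FD slot. This is a simplified stand-in, not the real RTEMS code (which allocates a full rtems_libio_t iop); all sketch_ names are hypothetical.

```c
#include <stddef.h>

/* Illustrative sketch: what a makeFdForSocket()-style helper has to
 * do -- grab a free descriptor slot and remember the protocol object
 * in it, so later read()/write()/close() calls can find it again. */
#define SKETCH_MAX_FDS 16

static void *fd_obj[SKETCH_MAX_FDS];   /* slot -> socket object */

static int sketch_make_fd_for_socket(void *so)
{
    for (int fd = 0; fd < SKETCH_MAX_FDS; fd++) {
        if (fd_obj[fd] == NULL) {
            fd_obj[fd] = so;           /* bind socket object to the FD */
            return fd;
        }
    }
    return -1;                         /* table full; caller must soclose() */
}

static void *sketch_fd_to_socket(int fd)
{
    if (fd < 0 || fd >= SKETCH_MAX_FDS)
        return NULL;
    return fd_obj[fd];
}
```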

Current Working Method

One option is to map the functions to lwip_socket(), lwip_recv(), etc. and keep a translation from the RTEMS FD number to the LwIP FD number. The LwIP FD number can be kept in the iop->data1 member. See the function rtems_bsdnet_fdToSocket() in the original BSD-integrated RTEMS networking. The rtems_libio_t structure is defined in

https://git.rtems.org/rtems/tree/cpukit/include/rtems/libio.h

typedef struct rtems_libio_tt rtems_libio_t;

struct rtems_libio_tt {
  rtems_driver_name_t                    *driver;
  off_t                                   offset;    /* current offset into file */
  uint32_t                                flags;
  rtems_filesystem_location_info_t        pathinfo;
  uint32_t                                data0;     /* private to "driver" */
  void                                   *data1;     /* ... */
};

The same header file also defines the structure specifying the file operations:

struct _rtems_filesystem_file_handlers_r {
  rtems_filesystem_open_t open_h;
  rtems_filesystem_close_t close_h;
  rtems_filesystem_read_t read_h;
  rtems_filesystem_write_t write_h;
  rtems_filesystem_ioctl_t ioctl_h;
  rtems_filesystem_lseek_t lseek_h;
  rtems_filesystem_fstat_t fstat_h;
  rtems_filesystem_ftruncate_t ftruncate_h;
  rtems_filesystem_fsync_t fsync_h;
  rtems_filesystem_fdatasync_t fdatasync_h;
  rtems_filesystem_fcntl_t fcntl_h;
  rtems_filesystem_poll_t poll_h;
  rtems_filesystem_kqfilter_t kqfilter_h;
  rtems_filesystem_readv_t readv_h;
  rtems_filesystem_writev_t writev_h;
};
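A minimal sketch of the FD-number translation described above, assuming only that the integer LwIP descriptor is stashed in the iop's void *data1 member via intptr_t. The struct and the sketch_ names are illustrative simplifications, not the real RTEMS types or API.

```c
#include <stdint.h>

/* Sketch of the RTEMS-FD -> LwIP-FD translation: the integer LwIP
 * descriptor is stored in the iop's void *data1 member via intptr_t.
 * rtems_libio_t is reduced to the one relevant member here. */
struct sketch_iop {
    void *data1;               /* stands in for rtems_libio_t::data1 */
};

static void sketch_store_lwip_fd(struct sketch_iop *iop, int lwip_fd)
{
    iop->data1 = (void *)(intptr_t)lwip_fd;
}

static int sketch_lwip_fd(const struct sketch_iop *iop)
{
    return (int)(intptr_t)iop->data1;
}

/* A read_h-style handler would then reduce to something like:
 *
 *   static ssize_t lwip_read_h(rtems_libio_t *iop, void *buf, size_t n)
 *   {
 *       return lwip_read(sketch_lwip_fd(iop), buf, n);
 *   }
 */
```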

A simple way to map RTEMS descriptors to LwIP ones can be found in the file:

https://github.com/ppisa/rtems-devel/blob/master/rtems-omk-template/applwiptest/rtems_lwip_io.c

lwIP has its own set of network and socket headers; these have been bypassed in favor of the Newlib headers that RTEMS and LibBSD depend on, to provide consistent operation with ported code.

This solution does not provide non-blocking support, but the RTEMS-provided telnet daemon runs against this implementation. The following object files were removed from librtemscpu.a to ensure that the LwIP-provided ones are used:

  ar d librtemscpu.a in_proto.o ip_fw.o ip_icmp.o ip_input.o \
                     main_netstats.o main_ping.o rtems_syscall.o \
                     tcp_debug.o tcp_input.o tcp_output.o tcp_subr.o \
                     tcp_timer.o tcp_usrreq.o udp_usrreq.o

This simple solution has the disadvantage that two tables (RTEMS FD -> RTEMS iop data, LwIP FD -> LwIP connection state) have to be consulted in each operation function. Even worse, both tables and their objects have to be allocated and freed separately. A better solution is to bypass the LwIP-provided FD allocation layer and use the lower-level API that works with the connection state directly through pointers to struct netconn. The LwIP FD API is, in fact, built on top of this lower-level API. The struct netconn based API implementation can be found in the file

http://git.savannah.gnu.org/cgit/lwip.git/tree/src/api/api_lib.c
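The alternative argued for above can be sketched as storing the struct netconn pointer itself in the iop, so no second descriptor table is needed and every handler reaches the connection state in one step. struct netconn is left opaque here (the real definition is in lwIP's api.h); the sketch_ names are illustrative, not real APIs.

```c
/* Sketch: instead of translating the RTEMS FD to a second (LwIP) FD,
 * store the struct netconn pointer itself in the iop's data1 member.
 * struct netconn is opaque here; lwIP provides the real definition. */
struct netconn;                     /* opaque, provided by LwIP */

struct conn_iop {
    void *data1;                    /* stands in for rtems_libio_t::data1 */
};

static void sketch_bind_conn(struct conn_iop *iop, struct netconn *conn)
{
    iop->data1 = conn;              /* one pointer, no FD translation table */
}

static struct netconn *sketch_conn(const struct conn_iop *iop)
{
    return iop->data1;
}
```

Each handler can then call the netconn-level functions (netconn_write(), netconn_recv(), ...) directly on sketch_conn(iop), with allocation and teardown tied to the single iop object.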

The mapping of the new RTEMS BSD stack to RTEMS CPUkit FD handles can be found in the following file as another example:

https://git.rtems.org/rtems-libbsd/tree/freebsd/sys/kern/sys_socket.c

Locking

As for the locking, I expect that at the beginning there has to be a global lock taken at the start of each function pointed to by the _rtems_filesystem_file_handlers_r operations. This can later be relaxed to separate per-struct-netconn semaphores. But for small single-CPU targets that would cause higher overhead than a global networking lock, and for big systems the new BSD stack is used anyway.
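A sketch of that coarse-grained locking, using a pthread mutex as a stand-in for an RTEMS semaphore with priority inheritance. locked_read_h is an illustrative handler, not a real API; the real handler would call into the LwIP bridge between the lock and unlock.

```c
#include <pthread.h>

/* Sketch of the proposed coarse locking: one global mutex taken at the
 * top of every _rtems_filesystem_file_handlers_r operation.  A pthread
 * mutex is used for illustration; on RTEMS a semaphore created with
 * priority inheritance would be the natural choice. */
static pthread_mutex_t lwip_bridge_lock = PTHREAD_MUTEX_INITIALIZER;

static int locked_read_h(void *iop, void *buf, int count)
{
    int rv;

    pthread_mutex_lock(&lwip_bridge_lock);
    /* ... call into the LwIP bridge here (iop, buf) ... */
    rv = count;                       /* placeholder result */
    pthread_mutex_unlock(&lwip_bridge_lock);
    return rv;
}
```

Relaxing this later means replacing the single lwip_bridge_lock with a mutex stored per connection object, at the cost of more kernel objects on small targets.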

As for the LwIP architecture, it uses a quite complex mechanism that queues all calls into the core to a single worker thread. The queuing allocates a structure for each pending request, and for most operations these requests are processed fully sequentially. Even the authors of LwIP see that there is no need to queue requests that are served sequentially anyway, and have introduced the experimental option LWIP_TCPIP_CORE_LOCKING, which protects the LwIP core with a single mutex/semaphore instead. This has the advantage that priority inheritance/boosting works between tasks and LwIP consumes far fewer resources. But for a start I suggest sticking with the default LwIP queuing mechanism for now.
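Enabling core locking is a one-line change in lwipopts.h. LWIP_TCPIP_CORE_LOCKING is a real lwIP option (and, to my knowledge, enabled by default in lwIP 2.1.x, the version this page targets); check the option against the lwIP release in use.

```c
/* lwipopts.h -- protect the LwIP core with a single mutex instead of
 * queuing every request to the tcpip_thread.  Experimental in older
 * lwIP releases; enabled by default in lwIP 2.1.x. */
#define LWIP_TCPIP_CORE_LOCKING 1
```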