= QEMU with CAN Emulation =

This is a GSoC 2013 project. The original plan was to port the LinCAN CAN driver to RTEMS, but since we do not have a common hardware platform to test the driver on, how to test the driver became the most significant problem. We therefore decided to build a software environment for testing the driver, using the open source machine emulator QEMU with an emulated SJA1000 controller. For the sake of simplicity, we test it on Linux and use SocketCAN as the driver.

= Environment =

* Host: Ubuntu 13.04 with the basic build tools and some other software.
* Guest: A Linux environment built from scratch. The kernel is [https://www.kernel.org/pub/linux/kernel/v3.0/linux-3.4.48.tar.bz2 Linux-3.4.48], the commands are statically compiled using [http://www.busybox.net/downloads/busybox-1.21.0.tar.bz2 Busybox-1.21.0], plus some other files (configuration files, start scripts, etc.).
* Others: [http://wiki.qemu-project.org/download/qemu-1.4.2.tar.bz2 Qemu-1.4.2]

= Use Cases / Testing =

The best option is to implement this as a simple device:

 -chardev can,id=sja1000,port=vcan0 -device pci-can,chardev=sja1000

and use the Linux mainlined SocketCAN API to connect the virtual device to real CAN hardware (SocketCAN allows multiple applications access) or to a SocketCAN virtual CAN interface (vcan, the CAN equivalent of the TCP/IP lo device). This is straightforward and results in minimal overhead and latency. The OS will be Linux and the CAN driver to test will be SocketCAN (as it is mainlined into Linux).

= Excerpts from the email chain =

'''Stefan Weil''': PCI (and USB, if they were supported by LinCAN) CAN controller boards could also be used with x86, so QEMU (with KVM) would be much faster and use less resources than an ARM system emulation. So one of the PCI controllers might be the best choice.
Select one with good documentation, a complete implementation (on the RTEMS / LinCAN side) and small source code (which is typically an indicator of the complexity - a large complex driver typically also needs a complex emulation in QEMU).

= Use Cases =

'''Paolo Bonzini and Cynthia Rempel''':

> we want to be able to verify a guest OS's CAN driver has been integrated
> properly and is sending CAN packets...
>
> Perhaps along the lines of two calls:
> qemu-system-arm -hda linux1.img -can student-implemented-device
> qemu-system-arm -hda linux2.img -can student-implemented-device

You would probably use either -net, or -netdev and -device (see docs/qdev-device-use.txt).

> Then using a network protocol analyzer (such as Wireshark) with a custom
> filter to recognize CAN packets,
> OR
> qemu-system-arm -hda linux1.img -can student-implemented-device
> then attaching a real CAN device to the host computer and verifying that the
> output is being recognized by real hardware.

Is this CAN device just an Ethernet device? QEMU does not support other link-level protocols. Adding them would be possible and interesting, but it would add a bit to the complexity.

> Whichever is more feasible to implement...

Both would be the same. In the first case, you'd probably use "-netdev socket" to share a virtual network between two virtual machines. In the second, you would use something like "-netdev tap" (again assuming it's just an Ethernet device).

= Implementation =

'''Paolo Bonzini''': Ok, learnt a bit more... You could probably implement this in two ways:

1) "-netdev socket" would probably work as a CAN->UDP gateway;

2) connecting to a virtual CAN interface in the host, created using SocketCAN (similar to "-netdev tap", e.g. "-netdev cantap").

In the first case, it would probably be useful to write the matching UDP->CAN gateway program. In any case, you have to implement both the backend and the actual device.
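Paolo's first option, a CAN->UDP gateway, implies some framing convention for carrying one CAN frame per UDP datagram. The sketch below shows one possible encoding; the 13-byte layout, the `can_frame_t` struct and the `canudp_pack`/`canudp_unpack` names are illustrative assumptions for this page, not an existing standard or QEMU API.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical wire format for tunnelling one CAN frame per UDP
 * datagram: 4-byte CAN ID (big-endian), 1-byte DLC, 8 data bytes.
 * This layout is an assumption for illustration, not a standard. */
#define CANUDP_LEN 13

typedef struct {
    uint32_t can_id;   /* 11-bit (SFF) or 29-bit (EFF) identifier */
    uint8_t  dlc;      /* data length code, 0..8 */
    uint8_t  data[8];
} can_frame_t;

/* Serialize a frame into the 13-byte datagram payload. */
static void canudp_pack(const can_frame_t *f, uint8_t out[CANUDP_LEN])
{
    out[0] = (uint8_t)(f->can_id >> 24);
    out[1] = (uint8_t)(f->can_id >> 16);
    out[2] = (uint8_t)(f->can_id >> 8);
    out[3] = (uint8_t)(f->can_id);
    out[4] = f->dlc;
    memcpy(&out[5], f->data, 8);
}

/* Parse a received datagram payload back into a frame. */
static void canudp_unpack(const uint8_t in[CANUDP_LEN], can_frame_t *f)
{
    f->can_id = ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
              | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    f->dlc = in[4];
    memcpy(f->data, &in[5], 8);
}
```

A real gateway would send each packed buffer with sendto() and unpack on recvfrom(); as Pavel notes later, such a tunnel cannot model real CAN bus behavior such as lost acknowledgements.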
'''Stefan Weil''': I used TCP instead of UDP in a proprietary solution more than 10 years ago. CAN devices are connected to CAN buses, so I had a CAN device emulation (on the CAN API level, not on the hardware level as with QEMU) and a CAN bus emulation. The CAN bus emulation was a TCP server process. It could simulate several CAN buses. Each CAN controller was a TCP client connected to the CAN bus emulation. The TCP clients sent CAN data packets (length, packet type and data) to the TCP server and received such packets from the server. They also exchanged control packets with the server (topology = which bus, data rate, CAN filter settings). The CAN bus emulation routed each received packet to the other CAN controllers on the same bus (CAN is a broadcast protocol) and could also simulate error packets (for example, when there was a mismatch of the data rates between sender and receiver). In debug mode, the bus emulation could also display the packets (raw data or CANopen packets). Several CAN vendors provide bidirectional CAN-Ethernet gateways, but I don't know whether there is a standard for CAN-over-Ethernet.

'''Andreas Färber''': Unfortunately that is out of date as far as the code goes (QOM is our successor to qdev), but it might serve as a good starting point. I emailed you my KVM Forum slides on QOM with a device skeleton to use as a starting point. One point I would like to stress is that QEMU devices don't simulate their hardware counterparts but instead only emulate them - that is, if you implement, e.g., a Freescale MPC5604B FlexCAN or Renesas RX62N RCAN controller, you will deal with register accesses coming from the guest and their abstract concepts like mailboxes and objects rather than actual line encodings. So if you want, you might get around some of the gory details by implementing the device using an abstract CANBus bus implementation (cf. PCIBus, I2CBus, etc.)
and if necessary interfacing with whatever CAN API is on the host directly; if you need to externalize this through a -chardev command line option for analysis, it probably requires some more work.

'''Pavel Pisa''':

1) I think that for Linux the best option is to implement this as a simple device:

 -device can-kvasser-pcican-q

or

 -device pcican,model=kvasser-pcican-q

and use the Linux mainlined SocketCAN API to connect the virtual device to real CAN hardware (SocketCAN allows multiple applications access) or to a SocketCAN virtual CAN interface (vcan - TCP/IP lo equivalent). This is straightforward and would result in minimal overhead and latency.

2) If portability is a problem, then we can create a UDP socket and use it to send CAN messages as packets (but it is never able to model and pass on real CAN bus behavior, i.e. there are no silently lost messages on the CAN side due to missing ACKs). We have done a prototype of a SocketCAN to Ethernet gateway (in user space and in the Linux kernel) on contract with the SocketCAN project, so we can offer some part of the code for reuse. The code is open/GPLed.

3) Another option is to provide both of the above methods, or even pull in the OrtCAN VCA (virtual CAN API) library, which can route CAN messages to SocketCAN, LinCAN and possibly other targets. The problem is that the knowledge is there, but the amount of work could easily exceed RTEMS GSoC resources.

So personally I would incline to 1). But opinions, help, documentation etc. are welcome.

'''Andreas Färber''': While using a model property is not wrong per se, "can" seems too generic as a type name, since it needs to inherit from a particular base class such as PCIDevice. QOM types can be made abstract to share code between device implementations to the same effect, e.g. PCIHostState.
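Stefan Weil's bus emulation and Andreas Färber's suggested abstract CANBus share the same core behavior: a frame transmitted by one controller is delivered to every other controller attached to the same bus, because CAN is a broadcast medium. Below is a minimal sketch of that routing; all names (`can_bus_t`, `can_ctrl_t`, `can_bus_transmit`, ...) are invented for illustration and are not QEMU API.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal sketch of the broadcast routing a CANBus abstraction would
 * perform: each transmitted frame is delivered to every controller on
 * the bus except the sender. Names are illustrative, not QEMU code. */
#define MAX_CTRL 8

typedef struct {
    uint32_t last_id;   /* identifier of the last frame received */
    int rx_count;       /* number of frames received so far */
} can_ctrl_t;

typedef struct {
    can_ctrl_t *ctrl[MAX_CTRL];
    int n;
} can_bus_t;

/* Attach a controller to the bus (silently ignored when full). */
static void can_bus_attach(can_bus_t *bus, can_ctrl_t *c)
{
    if (bus->n < MAX_CTRL) {
        bus->ctrl[bus->n++] = c;
    }
}

/* Deliver a frame from `sender` to every other controller on the bus. */
static void can_bus_transmit(can_bus_t *bus, can_ctrl_t *sender, uint32_t id)
{
    for (int i = 0; i < bus->n; i++) {
        if (bus->ctrl[i] != sender) {
            bus->ctrl[i]->last_id = id;
            bus->ctrl[i]->rx_count++;
        }
    }
}
```

A real implementation would carry full frames and model arbitration and error states, but the attach/broadcast shape is the part the PCIBus/I2CBus analogy refers to.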
= Submitting a Patch to Qemu =

See the [http://wiki.qemu.org/Contribute/SubmitAPatch fundamental requirements for new contributions].

= Step 1: Building a Minimal Linux Environment in Qemu =

The purpose of starting with a minimal Linux (as opposed to RTEMS) environment in qemu is that the software configurations have been more thoroughly tested and documented. The first phase of the project should start out using the last stable release... unless its minor digit is a 0... a minor digit above 0 usually means the free and open source software (FOSS) has had some bug-fix releases.

= CAN project example =

The section below is largely based upon http://jin-yang.github.io/2013/07/24/build-minimal-linux-environment.html, written by Jin Yang during the summer of 2013. The CAN simulation environment is based on QEMU-1.4.2, because 1.5.0 had only just been released and we wanted a solid starting point.

= Installing Qemu =

The project is hosted in a public repository to which the developer has access. For Summer of Code projects for RTEMS, as of 2013, github was the preferred repository. You can download the qemu-can project from https://github.com/Jin-Yang/QEMU-1.4.2 or get a copy using the following command (Linux):

 git clone https://github.com/Jin-Yang/QEMU-1.4.2.git

'''NOTE:''' before you compile QEMU, "libglib2-dev" should be installed, or you will get the error "ERROR: glib-2.12 required to compile QEMU". In Ubuntu, install it with ''sudo apt-get install libglib2-dev''

First, we configure QEMU using the following command; this will take about two minutes. QEMU has two modes to simulate a platform, user mode and system mode. We only need the i386 system simulation environment here, so only system mode (i386-softmmu) is needed. To save time, we compile only the source code related to i386-softmmu.

'''Note:''' For simplicity of coding, we select the hard-coded --prefix=/opt/qemu so we can write scripts assuming that is where our installed qemu is.
 ./configure --prefix=/opt/qemu --target-list="i386-softmmu" --enable-sdl && make && sudo make install

* --prefix: specify the location where you want to install.
* --target-list: the platform you want to simulate; here we only need i386-softmmu.
* --enable-sdl: we also need SDL (Simple DirectMedia Layer) support.

To call the just-installed qemu, you can add ''/opt/qemu/bin'' to the PATH environment variable or just use ''/opt/qemu/bin/qemu-system-i386'' to start QEMU.

= Building Linux Environment =

== Building the Kernel ==

Again, choose a stable version of Linux for testing the simulation. In this section, the environment is based on [https://www.kernel.org/pub/linux/kernel/v3.0/linux-3.4.48.tar.bz2 linux-3.4.48], [http://www.busybox.net/downloads/busybox-1.21.0.tar.bz2 busybox-1.21.0] and Ubuntu 13.04. You can get the source files from their official websites.

The simplest way to compile a working kernel is to generate a default one by typing ''make i386_defconfig; make bzImage''. However, this wastes a lot of time compiling and running things we do not need, so we build a minimal kernel through the following commands:

 cd linux-3.4.48
 make mrproper
 make allnoconfig
 make menuconfig                              # '''Note:''' Detailed instructions below
 make bzImage                                 # make the kernel image
 make modules
 make modules_install INSTALL_MOD_PATH=temp
 find -name bzImage                           # find the kernel image
 cp ./arch/i386/boot/bzImage ~/qemu/.         # put it in the working directory

The modules are then installed to the ''temp'' directory. In the 'make menuconfig' step we choose:

 - General setup
   - (-QEMU) Local version - append to kernel release
 - Enable loadable module support
   - Forced module loading
   - Module unloading
 - Bus options (PCI etc.)
   - PCI support
     - PCI Express support
       - Root Port Advanced Error Reporting support
 - Executable file formats / Emulations
   - Kernel support for ELF binaries
 - Networking support
   - Networking options
     - Packet socket
     - Unix domain sockets
     - TCP/IP networking
       - IP: multicasting
       - IP: advanced router
         - IP: policy routing
         - IP: equal cost multipath
         - IP: verbose route monitoring
       - IP: kernel level autoconfiguration
         - IP: DHCP support
         - IP: BOOTP support
         - IP: RARP support
       - IP: multicast routing
         - IP: PIM-SM version 1 support
         - IP: PIM-SM version 2 support
       - IP: TCP syncookie support
       - Large Receive Offload (ipv4/tcp)
       - TCP: advanced congestion control
         - CUBIC TCP
     - CAN bus subsystem support
       - Raw CAN Protocol (raw access with CAN-ID filtering)
       - Broadcast Manager CAN Protocol (with content filtering)
       - CAN Gateway/Router (with netlink configuration)
       - Virtual Local CAN Interface (vcan)
       - Platform CAN drivers with Netlink support
         - CAN bit-timing calculation
       - Philips/NXP SJA1000 devices
         - Kvaser PCIcanx and Kvaser PCIcan PCI Cards
       - CAN devices debugging messages
 - Device Drivers
   - Character devices
     - Serial drivers
       - 8250/16550 and compatible serial support
   - Network device support
     - Network core driver support
       - Universal TUN/TAP device driver support
     - Ethernet driver support
       - Intel devices
         - Intel(R) PRO/1000 Gigabit Ethernet support
         - Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support
 - File systems
   - Pseudo filesystems
     - Tmpfs virtual memory file system support (former shm fs)
       - Tmpfs POSIX Access Control List
   - Network File System
     - NFS client support
       - NFS client support for NFS version 3
         - NFS client support for the NFSv3 ACL protocol extension
       - NFS client support for NFS version 4
     - Root file system on NFS

== Build a root file system in ~/qemu/rootfs ==

Build the basic commands with busybox:

 tar -xf busybox-1.21.0.tar.bz2
 cd busybox-1.21.0
 make defconfig
 make menuconfig    # Note: ''Busybox Settings -> Build options -> Build Busybox as a static binary''
 make
 make install CONFIG_PREFIX=~/qemu/rootfs    # choose this hard-coded value for simplicity when writing scripts

== Set up network environment ==

We use TUN/TAP to connect the host with qemu. We start by writing the '''ifup-qemu''' script. We put it in the ''~/qemu'' directory so it will be in the correct place relative to the other files.

 #!/bin/sh
 # TUN/TAP to connect host with qemu
 ifconfig $1 192.168.9.33

== Starting Custom Qemu ==

Finally, we can start our patched qemu.

'''TODO:''' ran into an error like: ''~/qemu/ifup-qemu: could not launch network script qemu-system-i386: -net tap,vlan=0,ifname=tap0,script=~/qemu/ifup-qemu: Device 'tap' could not be initialized.'' Need to add instructions for permitting the TAP device to work...

 /opt/qemu/bin/qemu-system-i386 -s -m 128 -kernel bzImage \
     -append "notsc clocksource=acpi_pm root=/dev/nfs \
     nfsroot=192.168.9.33:~/qemu/rootfs rw \
     ip=192.168.9.88:192.168.9.33:192.168.9.33:255.255.255.0::eth0:auto \
     init=/sbin/init" \
     -net nic,model=e1000,vlan=0,macaddr=00:cd:ef:00:02:01 \
     -net tap,vlan=0,ifname=tap0,script=~/qemu/ifup-qemu

= Step 2: Introduce SocketCAN and SJA1000 =

We have now built a minimal Linux environment. Before we build the PCI-CAN device in QEMU, we will introduce the Linux SocketCAN driver and the SJA1000 CAN device.

= SocketCAN =

The SocketCAN package is an implementation of CAN (Controller Area Network) protocols for Linux. You can find a manual in the Linux kernel source at Documentation/networking/can.txt. Simply put, we only need to insert the vcan module, as follows:

 $ sudo insmod /lib/modules/`uname -r`/kernel/drivers/net/can/vcan.ko
 $ sudo ip link add type vcan
 $ sudo ip link set vcan0 up

Then you can check the CAN interface using the command ''ifconfig vcan0''. You can test CAN networking using can-utils, which you can get from https://gitorious.org/linux-can/can-utils .
 ./candump vcan0    # Observe vcan0
 ./cangen vcan0     # Generate CAN messages

= SJA1000 =

The SJA1000 is a stand-alone CAN controller produced by Philips Semiconductors, and it is more than a simple replacement for the PCA82C200. You can get more useful information from the [http://www.nxp.com/documents/data_sheet/SJA1000.pdf SJA1000 Datasheet] and the [http://www.mct.net/download/philips/can_appnote.pdf SJA1000 Application Note]. The SJA1000 is intended to replace the PCA82C200 because it is hardware and software compatible with it. It has two modes, BasicCAN mode and PeliCAN mode.

= Step 3: Build a Basic PCI-CAN device in qemu =

This section is based upon http://jin-yang.github.io/2013/07/25/add-pcidevice-to-qemu.html, https://github.com/Jin-Yang/QEMU-1.4.2/commit/ffef2998d0208547031887aab2ae2fc7780f2ea4 and https://github.com/Jin-Yang/QEMU-1.4.2/commit/a37c1c0ce934f15c36688bf88a94d94e19f1f6c3, written by Jin Yang during the summer of 2013.

= Writing the Patches for a Basic PCI-CAN device =

Some files need to be modified to make a basic PCI-CAN device: hw/Makefile.objs, qemu-char.c and hw/can-pci.c.

* hw/Makefile.objs needs to be modified to add the object to the qemu build
* qemu-char.c is the file that is used for the can-char driver
* hw/can-pci.c is the can-pci device

== hw/Makefile.objs ==

To get qemu to recognize new source files, it is necessary to add them to the build system. Below is the change to hw/Makefile.objs that gets the qemu build system to recognize can-pci.o as one of the qemu objects. For simplicity, instead of implementing CONFIG_CAN, it is hard-coded to y.

'''Possible Improvement:''' In later summers of code, $(CONFIG_CAN) could be made configurable, to ease getting accepted into the qemu head.
This would require tracing down all the changes needed to implement CONFIG_CAN.

 hw/Makefile.objs
 @@ -27,6 +27,7 @@ common-obj-$(CONFIG_EMPTY_SLOT) += empty_slot.o
  common-obj-$(CONFIG_SERIAL) += serial.o serial-isa.o
  common-obj-$(CONFIG_SERIAL_PCI) += serial-pci.o
 +common-obj-y += can-pci.o
  common-obj-$(CONFIG_PARALLEL) += parallel.o
  common-obj-$(CONFIG_I8254) += i8254_common.o i8254.o
  common-obj-$(CONFIG_PCSPK) += pcspk.o

== qemu-char.c ==

The first part of the patch is a custom '''DPRINTF''' for testing out the can simulation.

  #define READ_BUF_LEN 4096
 @@ -99,6 +99,81 @@
 +#define DEBUG_CAN
 +#ifdef DEBUG_CAN
 +#define DPRINTF(fmt, ...) \
 +    do { fprintf(stderr, "[mycan]: " fmt , ## __VA_ARGS__); } while (0)
 +#else
 +#define DPRINTF(fmt, ...) \
 +    do {} while (0)
 +#endif

can_chr_write prints out information to the terminal:

 +static int can_chr_write(CharDriverState *chr, const uint8_t *buf, int len)
 +{
 +    DPRINTF("%s-%s() called\n", __FILE__, __FUNCTION__);
 +    return 0;
 +}

can_chr_close prints a debugging message:

 +static void can_chr_close(struct CharDriverState *chr)
 +{
 +    DPRINTF("%s-%s() called\n", __FILE__, __FUNCTION__);
 +}

qemu_chr_open_can prints a message, allocates a CharDriverState pointer, and sets chr_write and chr_close to the debugging functions above:

 +static CharDriverState *qemu_chr_open_can(QemuOpts *opts)
 +{
 +    CharDriverState *chr;
 +    DPRINTF("%s-%s() called\n", __FILE__, __FUNCTION__);
 +
 +    chr = g_malloc0(sizeof(CharDriverState));
 +
 +    chr->chr_write = can_chr_write;
 +    chr->chr_close = can_chr_close;
 +
 +    return chr;
 +}

Next, add an entry to the character device table, so that the device can be selected when qemu is invoked. '''Note:''' add the use-case here.
 /* character device */
 @@ -2992,6 +3067,7 @@ static CharDriverState *qemu_chr_open_pp(QemuOpts *opts)
      { .name = "serial", .open = qemu_chr_open_win },
      { .name = "stdio", .open = qemu_chr_open_win_stdio },
  #else
 +    { .name = "can", .open = qemu_chr_open_can },
      { .name = "file", .open = qemu_chr_open_file_out },
      { .name = "pipe", .open = qemu_chr_open_pipe },
      { .name = "stdio", .open = qemu_chr_open_stdio },

The above is an old patch. You can find newer updates at [http://jin-yang.github.io/2013/07/24/build-minimal-linux-environment.html JinYang's blog].

= Running the Example =

The following commands should be executed on the host:

 $ sudo insmod /lib/modules/`uname -r`/kernel/drivers/net/can/vcan.ko
 $ sudo ip link add type vcan
 $ sudo ip link set vcan0 up

Change your directory to can-utils:

 $ ./candump vcan0

then start qemu and input the following commands:

 # cd /home/qemu_test_pci
 # ./load
 # ./a.out

You will get output from the "./candump vcan0" command like the following:

 vcan0 123 [3] 12 34 56
 vcan0 12345678 [6] 12 34 56 78 90 AB

This means we sent an SFF message and an EFF message to the host through SocketCAN.

= Step 4: Test =

= QEMU PCI-CAN device =

I have maintained a blog post at [http://jin-yang.github.io/2013/07/25/add-pcidevice-to-qemu.html Add pci-can]. It contains a lot of temporary information, so I will update it at the end of the project; only the useful material will be kept here.

= RTEMS environment =

You can get a basic simulation environment from https://github.com/Jin-Yang/RTEMS-QEMU. This is a simple environment. However, before running RTEMS, the environment variables 'PATH' and 'RTEMS_MAKEFILE_PATH' should be added to ~/.bashrc, for example:

 export PATH=/opt/rtems-4.10/bin:${PATH}
 export RTEMS_MAKEFILE_PATH=/opt/rtems-4.10/i386-rtems4.10/pc386

Then you can run it using:

 ./qemu

The source files are located at examples-v2/, and before running, rtems-grub.cfg should be edited too.
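The candump output in the "Running the Example" section shows both frame formats: 0x123 fits in a standard frame (SFF, 11-bit identifier), while 0x12345678 requires an extended frame (EFF, 29-bit identifier). A small sketch of that distinction; the helper name `can_id_needs_eff` is invented for illustration, while the 0x7FF and 0x1FFFFFFF limits follow the CAN specification.

```c
#include <assert.h>
#include <stdint.h>

/* CAN identifier limits: standard frames (SFF) carry an 11-bit ID,
 * extended frames (EFF) a 29-bit ID. */
#define CAN_SFF_MAX 0x7FFU        /* largest 11-bit identifier */
#define CAN_EFF_MAX 0x1FFFFFFFU   /* largest 29-bit identifier */

/* Return nonzero when an identifier only fits in an extended frame. */
static int can_id_needs_eff(uint32_t id)
{
    return id > CAN_SFF_MAX;
}
```

A CAN device model (or a gateway) has to carry this SFF/EFF flag alongside the identifier so that frames are reproduced on the host side with the correct format.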
= Adding Qemu to the RTEMS Source Builder =

Although not necessarily part of the CAN project, one interesting issue that came up was the need to hook qemu into the RTEMS Source Builder for gcc-testing purposes. The issues raised included how to build qemu on all supported host platforms, in particular how to build qemu from source on Windows and Mac (in addition to Linux).

Some resources for building on Windows are:

'''MinGW'''
# http://wiki.qemu.org/Hosts/W32#Native_builds
# http://www.mingw.org/wiki/Bootstrapping_GLIB_with_MinGW

'''Cygwin'''
# http://cygwin.com/packages/

Some resources for building on Mac are:
# http://www.rubenerd.com/qemu-1-macosx/

= References =

# https://lists.gnu.org/archive/html/qemu-devel/2013-05/threads.html
# http://www.linux-kvm.org/wiki/images/f/f6/2012-forum-QOM_CPU.pdf