QEMU with CAN Emulation

Introduction to CAN QEMU support

A generic CAN subsystem for RTEMS was the initial idea of Jin Yang's GSoC 2013 project. But the lack of a common environment for code and RTEMS testing led to a change of goal: to provide a complete emulated environment for testing. The RTEMS GSoC slot was therefore donated to work on CAN hardware emulation in QEMU.

PCI add-on card hardware has been selected as the first CAN interface to implement, because such a device can easily be connected to systems with different CPU architectures (x86, PowerPC, ARM, etc.). We decided to focus on the SJA1000 chip, because it is a widespread standalone controller variant. As the concrete card we selected the Kvaser PCI, because we (mentors from the Czech Technical University IIG group) have several of these cards in our projects and can compare the emulated HW behavior with the real hardware. On the QEMU host side we selected interfacing to SocketCAN (the standard Linux kernel CAN API/drivers) to connect the emulated controller's CAN bus to a virtual CAN network, to monitoring tools, or to a real CAN hardware bus.

CAN QEMU Sources and Testing

The sources of QEMU with the CAN PCI board and SocketCAN interfacing can be found in the branch "can-pci" of the IIG group QEMU repository at github https://github.com/CTU-IIG/qemu. The current version is based on QEMU v2.1.

When QEMU is compiled with CAN PCI support, the following two CAN boards can be selected:

  • a simple PCI memory-space-mapped SJA1000 which maps directly into the first BAR of the emulated device. QEMU startup options:

-device pci_can,chardev=canbus0,model=SJA1000

  • a Kvaser PCI CAN-S (single SJA1000 channel) CAN bus board. QEMU startup options:

-device kvaser_pci,canbus=canbus0,host=vcan0

The kvaser_pci board/device model is compatible with, and has been tested against, the kvaser_pci driver that has been included in the mainline Linux kernel for three years already. The tested setup was a Linux 3.12 kernel on both the host and the guest side.

The following parameters have been used for qemu-system-x86_64:

qemu-system-x86_64 -enable-kvm -kernel /boot/vmlinuz-3.2.0-4-amd64 \
  -initrd ramdisk.cpio \
  -virtfs local,path=shareddir,security_model=none,mount_tag=shareddir \
  -vga cirrus \
  -device kvaser_pci,canbus=canbus0,host=can0 \
  -nographic -append "console=ttyS0"

The list of parameters for qemu-system-arm

qemu-system-arm -cpu arm1176 -m 256 -M versatilepb \
  -kernel kernel-qemu-arm1176-versatilepb \
  -hda rpi-wheezy-overlay \
  -append "console=ttyAMA0 root=/dev/sda2 ro init=/sbin/init-overlay" \
  -nographic \
  -virtfs local,path=shareddir,security_model=none,mount_tag=shareddir \
  -device kvaser_pci,canbus=canbus0,host=can0

The previous CAN for QEMU versions

Jin Yang's initial effort targeted the stable version QEMU-1.4.2, and Linux with the mainline-provided SocketCAN was used on the host side.

  • Host: The host was Ubuntu-13.04 including the basic building tools and some other software.
  • Guest: This is also a Linux environment, which we build from scratch. The Linux kernel is Linux-3.4.48, the commands are statically compiled using Busybox-1.21.0, and some other files (configuration files, start scripts, etc.) are added.
  • Others: Qemu-1.4.2

The best option is to implement it as a simple device:

-chardev can,id=sja1000,port=vcan0 -device pci-can,chardev=sja1000,model=SJA1000

and use the Linux mainlined SocketCAN API to connect the virtual device to real CAN hardware (SocketCAN allows access by multiple applications) or to the SocketCAN virtual CAN (vcan, the CAN equivalent of the TCP/IP lo interface). This is straightforward and results in minimal overhead and latency.

The above options connect the CAN back end to the PCI-CAN device. The options are explained briefly below.

  • -chardev can,id=sja1000,port=vcan0 specifies which back end to open, here can. id is used by the -device option; port says which interface to open; we use the virtual SocketCAN device here.
  • -device pci-can,chardev=sja1000,model=SJA1000 uses the pci-can device. The chardev value MUST be equal to the id specified in the -chardev option; model is an optional parameter, the default value is SJA1000, and only SJA1000 is supported now.

The QEMU CAN infrastructure has been rewritten and the SJA1000 code updated to the QEMU 2.0 version.

Installing Qemu

The project is hosted in a public repository which the developer has access to. For Summer of Code projects for RTEMS, as of 2013, github was the preferred repository.

You can download the qemu-can project from https://github.com/Jin-Yang/QEMU-1.4.2 or get a copy using the following command (Linux).

git clone https://github.com/Jin-Yang/QEMU-1.4.2.git .

NOTE: before you compile QEMU, "libglib2-dev" should be installed, or you will get the error "ERROR: glib-2.12 required to compile QEMU". On Ubuntu, use the command sudo apt-get install libglib2-dev

First, we configure QEMU using the following command; this takes about two minutes. QEMU has two modes to simulate a platform, user mode and system mode. We only need the i386 system simulation environment here, so only system mode (i386-softmmu) is needed. To save time, we compile only the source code related to i386-softmmu.

NOTE: For simplicity of coding, we select the hard-coded --prefix=/opt/qemu so we can write scripts assuming that's where our installed qemu is.

./configure --prefix=/opt/qemu --target-list="i386-softmmu" --enable-sdl && make && sudo make install

  • --prefix: specify the location where you want to install.
  • --target-list: the platforms you want to simulate; here we only need i386-softmmu.
  • --enable-sdl: we also need SDL (Simple DirectMedia Layer) support.

To call the freshly installed qemu you can add /opt/qemu/bin to the PATH environment variable, or just use /opt/qemu/bin/qemu-system-i386 to start QEMU. Here we write a simple shell script, so you can start qemu with the command sudo ./qemu.sh

Building Linux Environment

Building the Kernel

Again, choose a stable version of Linux for testing the simulation.

In this section, the environment is based on linux-3.4.48, busybox-1.21.0 and Ubuntu 13.04. You can get those source files from their official websites.

The simplest way to compile a working kernel is to generate a default one by typing make i386_defconfig; make bzImage. However, this wastes a lot of time compiling and running things we do not need yet. So we will build a minimal kernel through the following commands.

cd linux-3.4.48      # change to the linux kernel directory
make mrproper        # clean up the source code
make allnoconfig     # disable all configure options
make menuconfig      # NOTE: detailed instructions below
make bzImage         # compile the kernel image
make modules         # compile the kernel modules
make modules_install INSTALL_MOD_PATH=temp   # install the modules to a temporary directory
find -name bzImage   # find the kernel image
cp ./arch/i386/boot/bzImage ~/qemu/.         # copy it to the working directory

The modules have then been installed into the temp directory, so you can copy them to the necessary directory if needed.

In the 'make menuconfig' step we will choose the following options:

  • General setup # for convenience
    • (-QEMU) Local version - append to kernel release

  • Enable loadable module support # support for the modules
    • Forced module loading
    • Module unloading

  • Bus options (PCI etc.) # a PCI device is developed here, so the kernel needs PCI support
    • PCI support
      • PCI Express support
        • Root Port Advanced Error Reporting support

  • Executable file formats / Emulations # to run examples
    • Kernel support for ELF binaries

  • Networking support # we are using NFS; maybe some of these options are not needed
    • Networking options
      • Packet socket
      • Unix domain sockets
    • TCP/IP networking
      • IP: multicasting
      • IP: advanced router
        • IP: policy routing
        • IP: equal cost multipath
        • IP: verbose route monitoring
      • IP: kernel level autoconfiguration
        • IP: DHCP support
        • IP: BOOTP support
        • IP: RARP support
      • IP: multicast routing
        • IP: PIM-SM version 1 support
        • IP: PIM-SM version 2 support
      • IP: TCP syncookie support
      • Large Receive Offload (ipv4/tcp)
      • TCP: advanced congestion control
        • CUBIC TCP

  • <M> CAN bus subsystem support # we actually build a new CAN driver, so this is not strictly necessary
    • <M> Raw CAN Protocol (raw access with CAN-ID filtering)
    • <M> Broadcast Manager CAN Protocol (with content filtering)
    • <M> CAN Gateway/Router (with netlink configuration)
      • <M> Virtual Local CAN Interface (vcan)
      • <M> Platform CAN drivers with Netlink support
      • CAN bit-timing calculation
      • <M> Philips/NXP SJA1000 devices
      • <M> Kvaser PCIcanx and Kvaser PCIcan PCI Cards
      • CAN devices debugging messages

  • Device Drivers
    • Character devices
      • Serial drivers # for serial test
        • <M> 8250/16550 and compatible serial support
    • Network device support # for NFS
      • Network core driver support
      • Universal TUN/TAP device driver support
      • Ethernet driver support
        • Intel devices
          • Intel(R) PRO/1000 Gigabit Ethernet support
          • Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support

  • File systems
    • Pseudo filesystems
      • Tmpfs virtual memory file system support (former shm fs)
        • Tmpfs POSIX Access Control List
    • Network File System
      • NFS client support
      • NFS client support for NFS version 3
      • NFS client support for the NFSv3 ACL protocol extension
      • NFS client support for NFS version 4
      • Root file system on NFS

Build a root file system in ~/qemu/rootfs

Build the basic commands through busybox

tar -xf busybox-1.21.0.tar.bz2
cd busybox-1.21.0
make defconfig
make menuconfig      # NOTE: Busybox Settings -> Build options -> Build BusyBox as a static binary
make
make install CONFIG_PREFIX=~/qemu/rootfs     # hard-coded value for simplicity when writing scripts

We also need some directories and files to build the root file system. However, this is not the main part of the project; you can get more help from Google, or simply take them from the public repository https://github.com/Jin-Yang/LINUX-QEMU.git

Set up network environment

We use TUN/TAP to connect the host with qemu. We start by writing the ifup-qemu script. We will put it in the ~/qemu directory so it will be in the correct place relative to the other files.

NOTE: this needs root privileges.

#!/bin/sh
# TUN/TAP to connect host with qemu
ifconfig $1 192.168.9.33

Starting Custom Qemu

Finally, we can start our patched qemu by

TODO: ran into an error like "~/qemu/ifup-qemu: could not launch network script" and "qemu-system-i386: -net tap,vlan=0,ifname=tap0,script=~/qemu/ifup-qemu: Device 'tap' could not be initialized". Need to add instructions for permitting the TAP device to work...

/opt/qemu/bin/qemu-system-i386 -s -m 128 -kernel bzImage \
  -append "notsc clocksource=acpi_pm root=/dev/nfs \
    nfsroot=192.168.9.33:~/qemu/rootfs rw \
    ip=192.168.9.88:192.168.9.33:192.168.9.33:255.255.255.0::eth0:auto \
    init=/sbin/init" \
  -net nic,model=e1000,vlan=0,macaddr=00:cd:ef:00:02:01 \
  -net tap,vlan=0,ifname=tap0,script=~/qemu/ifup-qemu

If you have got a copy from https://github.com/Jin-Yang/LINUX-QEMU.git, then you can start qemu simply with the command sudo ./qemu.sh

Step 2: Introduce SocketCAN and SJA1000

We have now built a minimal Linux environment. Before we build the PCI-CAN device in QEMU, we will first introduce the SocketCAN Linux driver and the SJA1000 CAN device.

SocketCAN

The SocketCAN package is a Linux implementation of the CAN (Controller Area Network) protocols. You can find a manual in the Linux kernel source at Documentation/networking/can.txt

Simply put, we only need to insert the vcan module and use the virtual SocketCAN device to test SocketCAN. The following is what we need to do.

$ sudo insmod /lib/modules/`uname -r`/kernel/drivers/net/can/vcan.ko
$ sudo ip link add type vcan
$ sudo ip link set vcan0 up

Then you can check the CAN interface using the command ifconfig vcan0. You can test the CAN networking using can-utils, which you can get from https://gitorious.org/linux-can/can-utils

./candump vcan0      # observe vcan0
./cangen vcan0       # generate CAN messages
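If you prefer a programmatic test, the raw SocketCAN API can also be used directly from C. Below is a minimal sketch using the standard Linux SocketCAN calls (error handling kept short); it assumes vcan0 is already up as configured above, and it sends the same 123 [3] 12 34 56 frame that appears in the test output later on this page.

/* cansend-min.c - minimal SocketCAN sender (illustrative sketch) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    struct sockaddr_can addr;
    struct ifreq ifr;
    struct can_frame frame;

    /* open a raw CAN socket */
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    /* resolve the interface index of vcan0 and bind to it */
    strcpy(ifr.ifr_name, "vcan0");
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) {
        perror("ioctl");
        return 1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* CAN ID 0x123 with three data bytes 12 34 56 */
    memset(&frame, 0, sizeof(frame));
    frame.can_id  = 0x123;
    frame.can_dlc = 3;
    frame.data[0] = 0x12;
    frame.data[1] = 0x34;
    frame.data[2] = 0x56;

    if (write(s, &frame, sizeof(frame)) != sizeof(frame)) {
        perror("write");
        return 1;
    }
    close(s);
    return 0;
}

Compile it with, for example, gcc -o cansend-min cansend-min.c, run it, and the frame shows up in a parallel ./candump vcan0.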

SJA1000

The SJA1000 is a stand-alone CAN controller produced by Philips Semiconductors which is more than a simple replacement of the PCA82C200. You can get more useful information from the SJA1000 Datasheet and the SJA1000 Application Note.

The SJA1000 is intended to replace the PCA82C200, with which it is hardware and software compatible. It has two modes, BasicCAN mode and PeliCAN mode; you can get the details from the above two documents.
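For orientation when reading the driver and device-model sources, here are a few of the SJA1000 register offsets in PeliCAN mode as given in the datasheet (the enum names are ours; the complete map and the differing BasicCAN layout are in the documents above).

/* a few SJA1000 PeliCAN-mode register offsets (see the datasheet) */
enum {
    SJA_MOD  = 0x00,  /* mode register */
    SJA_CMR  = 0x01,  /* command register */
    SJA_SR   = 0x02,  /* status register */
    SJA_IR   = 0x03,  /* interrupt register */
    SJA_IER  = 0x04,  /* interrupt enable register */
    SJA_BTR0 = 0x06,  /* bus timing register 0 */
    SJA_BTR1 = 0x07,  /* bus timing register 1 */
    SJA_OCR  = 0x08,  /* output control register */
};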

Step 3: Build a Basic PCI-CAN device in qemu

We use the QEMU Object Model (QOM, formerly Qdev) to simulate the SJA1000 hardware. You can get some useful information from QEMU Object Model, which introduces QOM.

There are some files that need to be modified to make a basic PCI-CAN device: default-configs/pci.mak, hw/Makefile.objs, qemu-char.c, hw/can-pci.c.

  • default-configs/pci.mak add default compiling options
  • config-all-devices.mak add all compiling options
  • hw/Makefile.objs needs to be modified to add the object to the qemu build
  • qemu-char.c is the file that is used for the can-char driver
  • hw/can-pci.c is the can-pci device

The specification of the pci-can device is listed below.

Name : pci-can
PCI ID : 1b36:beef
PCI Region 0 : MEM bar, 128 bytes long; you can get the details about the memory map from the SJA1000 Data Sheet.

You can get the device information using the command info pci in the monitor console. These parameters are initialized in the functions can_pci_class_initfn() and can_pci_init().
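As a sketch of what the class init function typically looks like with the QEMU 1.4-era qdev/QOM API (the assignments below follow comparable devices such as the PCI test device; treat them as assumptions and consult hw/can-pci.c for the real code):

static void can_pci_class_initfn(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);
    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

    k->init      = can_pci_init;   /* instance init, maps the MEM BAR */
    k->vendor_id = 0x1b36;         /* matches the PCI ID 1b36:beef above */
    k->device_id = 0xbeef;
    k->class_id  = PCI_CLASS_OTHERS;
    dc->desc     = "CAN PCI device";
    dc->props    = can_pci_properties;
}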

Add the Default Compiling Files

To get qemu to recognize new source files, it is necessary to add them to the build system. Below are the changes to hw/Makefile.objs, default-configs/pci.mak and config-all-devices.mak that make the qemu build system recognize can-pci.o as one of the qemu objects.

We want to compile the PCI-CAN device by default. To achieve this goal, we add the following lines.

# hw/Makefile.objs
common-obj-$(CONFIG_CAN_PCI) += can-pci.o

# default-configs/pci.mak
CONFIG_CAN_PCI=y

# config-all-devices.mak
CONFIG_CAN_PCI=y

Add the PCI-CAN structure

We can add a basic PCI-CAN device through the following code; the function can_pci_class_initfn() is used to initialize the device class. The PCI TestDev in QEMU serves as a reference.

static const TypeInfo can_pci_info = {
    .name          = "pci-can",
    .parent        = TYPE_PCI_DEVICE,
    .instance_size = sizeof(PCICanState),
    .class_init    = can_pci_class_initfn,
};

static void can_pci_register_types(void)
{
    type_register_static(&can_pci_info);
}

type_init(can_pci_register_types)

Add Start Options

When starting QEMU, we can change the device properties through start options. The Property structure is used to set the device properties, and there are macros in hw/qdev-properties.h that help us initialize them. We take the following code as an example.

static Property can_pci_properties[] = {
    DEFINE_PROP_CHR("chardev", PCICanState, state.chr),
    DEFINE_PROP_STRING("model", PCICanState, model),
    DEFINE_PROP_END_OF_LIST(),
};
When starting QEMU, we can change the PCI-CAN's properties through **chardev=xxx,model=xxxx**. The model argument is optional; SJA1000 is the default and the only model supported now.
=  Add CAN back-end  =

Before adding the functions for the SJA1000, we first introduce how to add a back end in QEMU. To add the CAN back end we have to modify the qemu-char.c file; this is introduced step by step below.
==  Change backend_table[]  ==

We should add an entry to backend_table[] to support the new backend, like the following:
   { .name = "CAN",          .open = qemu_chr_open_can },
The qemu_chr_open_can() function is used to initialize the backend: it parses the start options, allocates some memory, opens the SocketCAN device, sets the CAN filters, etc.
==  Add the start options  ==

We start the backend with options like "-chardev can,id=sja1000,port=vcan0". The valid options for a backend are listed in the qemu_chardev_opts variable. The 'port' argument is already among them, so we can use it directly; any other option you want has to be added to the qemu_chardev_opts variable. Once qemu has started, you can get the value using the qemu_opt_get() function.
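Put together, the open function might look roughly like the sketch below: it reads the 'port' option with qemu_opt_get() and binds a raw SocketCAN socket to that interface. Error handling and the CharDriverState allocation are elided here; the authoritative version is in qemu-char.c.

  static CharDriverState *qemu_chr_open_can(QemuOpts *opts)
  {
      const char *port = qemu_opt_get(opts, "port");  /* e.g. "vcan0" */
      struct sockaddr_can addr;
      struct ifreq ifr;
      int fd;

      /* open a raw CAN socket on the host and bind it to "port" */
      fd = socket(PF_CAN, SOCK_RAW, CAN_RAW);
      strncpy(ifr.ifr_name, port, sizeof(ifr.ifr_name));
      ioctl(fd, SIOCGIFINDEX, &ifr);
      memset(&addr, 0, sizeof(addr));
      addr.can_family = AF_CAN;
      addr.can_ifindex = ifr.ifr_ifindex;
      bind(fd, (struct sockaddr *)&addr, sizeof(addr));

      /* ... allocate the CharDriverState, store fd in its private
       * state, assign chr_write and chr_update_read_handler ... */
  }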
==  Add writing routine  ==

The writing routine is simple. We use the can_chr_write() function as the writing routine and assign it to CharDriverState.chr_write. When the PCI-CAN device wants to invoke the writing routine, it should call qemu_chr_fe_write() instead of calling can_chr_write() directly.

The source code looks like the following; the details are in the qemu-char.c file.
  static int can_chr_write(CharDriverState *chr, const uint8_t *buf, int len)
  {
    ......
  }
  static CharDriverState *qemu_chr_open_can(QemuOpts *opts)
  {
    CharDriverState *chr;
    ......
    chr->chr_write = can_chr_write;
    ......
  }
The ioctl function is hooked up in the same way.
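Filled in, the writing routine can be as small as forwarding the buffer, which carries a struct can_frame prepared by the device model, to the SocketCAN file descriptor. CANCharDriver is an assumed name for the backend's private state here:

  static int can_chr_write(CharDriverState *chr, const uint8_t *buf, int len)
  {
      CANCharDriver *d = chr->opaque;  /* backend state holding the fd */

      /* buf carries a struct can_frame prepared by the pci-can device */
      return write(d->fd, buf, len);
  }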
==  Add reading routine  ==

The reading routine is a little more complicated than the writing routine. Because we don't know when a CAN message will arrive, we have to watch the file descriptor all the time. I will briefly explain how this works.

QEMU uses the IOHandler mechanism to deal with this kind of operation. The main_loop_wait() function in main-loop.c is executed over and over. The call chain is main_loop_wait() (main-loop.c) => os_host_main_loop_wait() (main-loop.c) => select(), which means the system call select() is invoked all the time.

So what we have to do is the following:
 *  figure out whether we need to read from the real device.
 *  wait for the device to become readable.
 *  read from the device and call the corresponding function.

QEMU implements this as follows:
 *  initialization: qemu_set_fd_handler2() (iohandler.c) and qemu_chr_add_handlers() (qemu-char.c) are called separately in the CAN backend and in the pci-can device.
 *  qemu_iohandler_fill() (iohandler.c) is called to test whether the pci-can device needs to read.
 *  the system call select() is invoked to wait for CAN messages.
 *  qemu_iohandler_poll() (iohandler.c) is called to write the data to the pci-can device.

  static void can_chr_update_read_handler(CharDriverState *chr)
  {
    ......
    qemu_set_fd_handler2(d->fd, can_chr_read_poll, can_chr_read, NULL, chr);
    ......
  }
  static CharDriverState *qemu_chr_open_can(QemuOpts *opts)
  {
    ......
    chr->chr_update_read_handler = can_chr_update_read_handler;
    ......
  }

In can_chr_update_read_handler(), we call qemu_set_fd_handler2() to register the reading routine. The 2nd argument is a poll callback used to figure out whether we need to read from the real device; it returns a value greater than zero if data can be accepted, and zero otherwise. The 3rd argument is called when the data should actually be read.


So in the can_chr_read_poll() function we determine whether we can take data from the device, and in the can_chr_read() function we write the data to the pci-can device.
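A sketch of the two callbacks, with the same assumed CANCharDriver state as above: the poll callback asks the front end (the pci-can device) how much it can accept, and the read callback pushes one frame from the host socket into the device.

  static int can_chr_read_poll(void *opaque)
  {
      CharDriverState *chr = opaque;

      /* a value > 0 tells the main loop to watch the fd for reading */
      return qemu_chr_be_can_receive(chr);
  }

  static void can_chr_read(void *opaque)
  {
      CharDriverState *chr = opaque;
      CANCharDriver *d = chr->opaque;
      struct can_frame frame;
      int n = read(d->fd, &frame, sizeof(frame));

      if (n > 0) {
          /* hand the frame to the pci-can device's receive handler */
          qemu_chr_be_write(chr, (uint8_t *)&frame, n);
      }
  }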
=  Step 4: Write Linux driver  =

Up to now we have built the structure of the PCI-CAN device; the details about how the device works can be found in the source code. Here we will talk about the Linux driver.

Unlike the SocketCAN driver, which uses sockets, we developed a new char device driver. Only the receiving and sending routines are supported now. This means that if you want to test the CAN filters you have to change the source of the Linux driver. Since the SJA1000 has two modes, BasicCAN and PeliCAN, we build two Linux drivers to support them.
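To illustrate the shape of such a driver (all names below are placeholders, not the project's actual code), a minimal Linux 3.4 char-device skeleton with stubbed read/write entry points could look like this:

  #include <linux/fs.h>
  #include <linux/module.h>
  #include <linux/uaccess.h>

  static int can_pci_open(struct inode *inode, struct file *filp)
  {
      return 0;  /* real driver: look up the board state, store it in filp */
  }

  static ssize_t can_pci_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *ppos)
  {
      return 0;  /* real driver: copy a received CAN frame to user space */
  }

  static ssize_t can_pci_write(struct file *filp, const char __user *buf,
                               size_t count, loff_t *ppos)
  {
      return count;  /* real driver: push the frame into the SJA1000 */
  }

  static const struct file_operations can_pci_fops = {
      .owner = THIS_MODULE,
      .open  = can_pci_open,
      .read  = can_pci_read,
      .write = can_pci_write,
  };

  static int major;

  static int __init can_pci_init_module(void)
  {
      major = register_chrdev(0, "can_pci", &can_pci_fops);
      return (major < 0) ? major : 0;
  }

  static void __exit can_pci_exit_module(void)
  {
      unregister_chrdev(major, "can_pci");
  }

  module_init(can_pci_init_module);
  module_exit(can_pci_exit_module);
  MODULE_LICENSE("GPL");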
=  Step 5: Test  =

You can get the test environment from ''https://github.com/Jin-Yang/LINUX-QEMU.git''. There is a short introduction in the README about what the files and directories are for.
=  Add SocketCAN device  =

The following commands should be executed on the host.
 $ sudo insmod /lib/modules/`uname -r`/kernel/drivers/net/can/vcan.ko
 $ sudo ip link add type vcan
 $ sudo ip link set vcan0 up
=  Start qemu  =

I wrote a simple start script for it; you can just type the following command to start it.
  sudo ./qemu.sh
=  Linux driver  =

Under the source/ directory there is a sub-directory linux_driver/ that contains the source code for the Linux driver. First, some environment variables have to be changed; the details are in the README file. Then you can compile it and copy it to the root file system through
  make
  sudo ./copy.sh
In qemu, change directory to /home/can_pci/basic/ and insert the Linux driver with ''./load''; remove it with ''./unload''.

=  Send routine  =

Still under the source/examples directory, you can compile the source code with ''make'' and copy it to the root file system with ''sudo make install''. Then monitor vcan0 on the host with ''./candump vcan0'' and send a CAN message in QEMU with ''./send''. Finally you will see the CAN message like the following.

Monitor the vcan0 interface. 
  ~/can-utils$ ./candump vcan0
  open 0 'vcan0'
  using interface name 'vcan0'
  new index 0 (vcan0)
then send a CAN message in QEMU
  root:/home/can_pci/basic# ./send
finally, get the message on the host
  ~/can-utils$ ./candump vcan0
  open 0 'vcan0'
  using interface name 'vcan0'
  new index 0 (vcan0)
  vcan0 123 [3] 12 34 56     # this is the message we send.
=  Receive routine  =

After the above steps, you can test the receiving routine with ''./receive'' in QEMU and ''./cangen vcan0 -g 1000'' on the host. Then you will see the CAN messages in QEMU.

Start receiving routine in QEMU
  root:/home/can_pci/basic#./receive
send messages on the host
  ~/can-utils$ ./cangen vcan0 -g 1000 # send a CAN message per second.
then you will receive the CAN message in QEMU like
  root:/home/can_pci/basic#./receive
  761 [8] -SFF DAT- 4E 9C F1 05 72 8C FB 7E
  058 [4] -SFF DAT- 80 D6 B0 25
  2D4 [1] -SFF DAT- 55

= QEMU PCI-CAN device =

I have maintained a blog post at [http://jin-yang.github.io/2013/07/25/add-pcidevice-to-qemu.html Add pci-can]. It contains a lot of temporary information, so I will update this page at the end of the project; then only the useful material will be kept here.
= RTEMS environment =

You can get a basic simulation environment from https://github.com/Jin-Yang/RTEMS-QEMU. This is a simple environment. However, before running RTEMS, the environment variables 'PATH' and 'RTEMS_MAKEFILE_PATH' should be added to ~/.bashrc, for example:
   export PATH=/opt/rtems-4.10/bin:${PATH}
   export RTEMS_MAKEFILE_PATH=/opt/rtems-4.10/i386-rtems4.10/pc386
Then, you can run it just using
   ./qemu

The source files are located in examples-v2/, and before running, rtems-grub.cfg should be edited too.
= Adding Qemu to the RTEMS Source Builder =

Although not necessarily part of the CAN project, one interesting issue that came up was the need to hook qemu into the RTEMS Source Builder for gcc-testing purposes.

The issues brought up with adding qemu to the RTEMS Source Builder included the question of how to build qemu on all supported host platforms, in particular, how to build qemu from source on Windows and Mac (in addition to Linux).

Some resources for building on Windows are:

'''MinGW'''
# http://wiki.qemu.org/Hosts/W32#Native_builds
# http://www.mingw.org/wiki/Bootstrapping_GLIB_with_MinGW

'''Cygwin'''
# http://cygwin.com/packages/

Some resources for building on Mac are:
# http://www.rubenerd.com/qemu-1-macosx/
=  References  =

# https://lists.gnu.org/archive/html/qemu-devel/2013-05/threads.html
# http://www.linux-kvm.org/wiki/images/f/f6/2012-forum-QOM_CPU.pdf