Thursday, September 12, 2019

Unit-testing static functions in C

An annoying thing about C code is that there are plenty of functions that cannot be unit-tested by an external framework - specifically anything declared as static. Any larger code-base will end up with hundreds of those functions, many of which are short and reasonably self-contained but complex enough that you can't trust them just by looking at them. But since they're static I can't access them from the outside (and "outside" is defined as "not in the same file" here).

The approach I've chosen in the past is to move the more hairy ones into separate files or at least declare them normally. That works but is annoying for some cases, especially those that really only get called once. In case you're wondering whether you have at least one such function in your source tree: yes, the bit that parses your commandline arguments is almost certainly complicated and not tested.

Anyway, this week I've finally found the right combination of hacks to make testing static functions easy, and it's:

  • #include the source file in your test code.
  • Mock any helper functions you need to trick the functions under test.
  • Instruct the linker to ignore unresolved symbols.

And boom, you can write test cases that test only a single file within your source tree. And without any modifications to the source code itself.

A more detailed writeup is available in this github repo.

For the impatient, the meson snippet for a fictional source file example.c would look like this:

test('test-example',
     executable('test-example',
                'example.c', 'test-example.c',
                dependencies: [dep_ext_library],
                link_args: ['-Wl,--unresolved-symbols=ignore-all',
                            '-Wl,-zmuldefs',
                            '-no-pie'],
                install: false),
)
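
And the test file itself could look roughly like this. This is a sketch only - parse_args() and some_helper() are made-up names standing in for whatever example.c actually contains:

/* test-example.c - a sketch only; parse_args() and some_helper() are
 * hypothetical names for functions in example.c and elsewhere. */
#include <assert.h>

#include "example.c"   /* pulls the static functions into this file */

/* Mock a helper that example.c calls but that is defined in some other
 * file we don't compile. Providing it here resolves that symbol; any
 * helpers we don't mock are covered by --unresolved-symbols=ignore-all. */
int some_helper(int value)
{
	return 42;
}

static void test_parse_args(void)
{
	char *argv[] = { "example", "--verbose", NULL };

	/* parse_args() is static in example.c but visible here thanks to
	 * the #include above */
	assert(parse_args(2, argv) == 0);
}

int main(void)
{
	test_parse_args();
	return 0;
}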

There is no restriction on which test suite you can use. I've started adding a few test cases based on this approach to libinput and so far it's working well. If you have a better approach or improvements, I'm all ears.

Monday, August 26, 2019

Tuhi - an application to support Wacom SmartPad devices

Sounds like déjà vu? Right, I wrote a post with an almost identical title 18 months ago or so. This is about Tuhi 0.2, new and remodeled and completely different to that. Sort-of.

Tuhi is an application that supports the Wacom SmartPad devices - Bamboo Spark, Bamboo Slate, Bamboo Folio and Intuos Pro. The Bamboo range are digital notepads. They come with a real pen, you draw normally on the pad and use Bluetooth LE and Wacom's Inkspace application later to sync the files to disk. The Intuos Pro is the same but it's designed as a "normal" tablet with the paper mode available as well.

18 months ago, Benjamin Tissoires and I wrote Tuhi as a DBus session daemon. Tuhi would download the drawings from the device and make them available as JSON files over DBus to be converted to SVG or some other format by ... "clients". We wrote a simple commandline tool to debug Tuhi but no GUI, largely in the hope that maybe someone would be interested in doing that. Fast forward to now and that hasn't happened but I had some spare cycles over the last weeks so I present to you: Tuhi 0.2, now with a GTK GUI:

It's basic, but deliberately so: it shouldn't do much more than download the drawings and allow you to save them. This is not an editing UI, it's effectively a file manager for the drawings on the tablet. And since by design those drawings get deleted as you download them, there isn't even much to that (don't worry, Tuhi doesn't really delete files, you can recover almost everything).

Under the hood there were some internal changes too but I suspect they'll be boring to most. The more interesting bits are reworks so we can test the conversions a lot better now and - worst case - recover files if Tuhi crashes. It is largely reverse-engineered after all.

On that note I would also like to extend my thanks to Wacom who have provided us with some of the specs for the protocol (under NDA, we cannot share these with the community, sorry). These specs helped tremendously in understanding the protocol bits that were confusing at best and unknown at worst. There are still some corners in the protocol that we don't know but for the most recent generation (i.e. Intuos Pro) we should have correct parsing of the protocol.

And many thanks to Jakub Steiner for the fancy logo.

And, as of a few minutes ago, Tuhi is available as a flatpak from flathub.org. For the foreseeable future it is the best way to install Tuhi.

Wednesday, July 17, 2019

libinput's new thumb detection code

The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions. For example, moving the cursor with the index finger and clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though. The touchpad itself acts as a button and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually right click) but if those are the index and thumb, then really, it's just a single finger click.

libinput has had some thumb detection since the early days when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad, and a touch that started in the lower zone was always a thumb. Where a touch started in the upper thumb area, a timeout and movement thresholds decided whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection - if a finger was moving fast enough, a new touch would always default to being a thumb, on the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use-cases but failed gloriously in other cases.
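
For illustration, the old logic boiled down to something like this. This is not libinput's actual code - the struct, the fields and the threshold values are all made up:

/* Sketch of the old zone-based logic, not libinput's actual code.
 * Struct, fields and threshold values are made up. */
#include <stdint.h>

enum thumb_state { THUMB_NO, THUMB_MAYBE, THUMB_YES };

struct touch {
	int start_x, start_y;	/* where the touch began, device units */
	int x, y;		/* current position */
	uint64_t start_time;	/* milliseconds */
};

/* hypothetical, device-specific values */
#define LOWER_ZONE_Y		4500
#define UPPER_ZONE_Y		4000
#define THUMB_TIMEOUT_MS	300
#define THUMB_MOVE_THRESHOLD	100

static enum thumb_state
classify_touch(const struct touch *t, uint64_t now)
{
	int dx = t->x - t->start_x,
	    dy = t->y - t->start_y;

	/* a touch starting in the lowest zone is always a thumb */
	if (t->start_y > LOWER_ZONE_Y)
		return THUMB_YES;

	/* in the upper thumb area, a timeout and a movement threshold
	 * decide: little movement within the timeout looks like a thumb */
	if (t->start_y > UPPER_ZONE_Y &&
	    now - t->start_time < THUMB_TIMEOUT_MS &&
	    dx * dx + dy * dy < THUMB_MOVE_THRESHOLD * THUMB_MOVE_THRESHOLD)
		return THUMB_MAYBE;

	return THUMB_NO;
}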

Thanks to Matt Mayfield's work, we now have a much more sophisticated thumb detection algorithm. The speed detection is still there but it better accounts for pinch gestures and two-finger scrolling. The exclusion zones are still there but less final about the state of the touch: a thumb can escape that "jail" and contribute to pointer motion where necessary. The new documentation has a bit of a general overview. A requirement for well-working thumb detection, however, is that your device has the required (device-specific) thresholds set up. So go over to the debugging thumb thresholds documentation and start figuring out your device's thresholds.

As usual, if you notice any issues with the new code please let us know, ideally before the 1.14 release.

Wednesday, June 19, 2019

libinput and tablet proximity handling

This is merely an update on the current status quo; if you read this post in a year's time, some of the details may have changed.

libinput provides an API to handle graphics tablets, i.e. the tablets that are used by artists. The interface is based around tools, each of which can be in proximity at any time. "Proximity" simply means "in detectable range". libinput promises that any interaction is framed by a proximity in and proximity out event pair, but getting to this turned out to be complicated. libinput has seen a few changes recently here, so let's dig into those. Remember that proverb about seeing what goes into a sausage? Yeah, that.

In the kernel API, the proximity events for pens are the BTN_TOOL_PEN bit. If it's 1, we're in proximity; if it's 0, we're out of proximity. That's the theory.
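
In evdev terms, tracking that bit looks roughly like this - a minimal sketch with the device path hardcoded and error handling omitted:

/* minimal sketch: watch pen proximity on a kernel event node */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
	struct input_event ev;
	int fd = open("/dev/input/event0", O_RDONLY); /* pick your device */

	while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
		if (ev.type == EV_KEY && ev.code == BTN_TOOL_PEN)
			printf("pen is %s proximity\n",
			       ev.value ? "in" : "out of");
	}
	close(fd);
	return 0;
}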

Wacom tablets (or rather the kernel driver) always reset all axes on proximity out. So libinput needs to take care not to send a 0 value to the caller, otherwise you'd get a jump to the top left corner every time you move the pen away from the tablet. Some Wacom pens have serial numbers and we use those to uniquely identify a tool. But some devices start sending proximity and axis events before we get the serial numbers which means we can't identify the tool until several ms later. In that case we simply discard the serial. This means we cannot uniquely identify those pens but so far no-one has complained.

A bunch of tablets (HUION) don't have proximity at all. For those, we start getting events and then stop getting events, without any other information. So libinput has a timer - if we don't get events for a given time, we force a proximity out. Of course, this means we also need to force a proximity in when the next event comes in. These tablets are common enough that recently we just enabled the proximity timeout for all tablets. Easier than playing whack-a-mole, doubly so because HUION re-uses USB ids so you can't easily identify them anyway.
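
The timer handling amounts to something like this sketch. It is not libinput's actual code; the names and the timeout value are made up:

/* sketch of timeout-based forced proximity; not libinput's actual code */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROX_OUT_TIMEOUT_MS 50		/* made-up value */

struct tablet {
	bool in_proximity;
	uint64_t last_event_time;
};

/* stand-ins for queueing events to the caller */
static void queue_proximity_in(void)  { printf("forced prox in\n"); }
static void queue_proximity_out(void) { printf("forced prox out\n"); }

static void
handle_pen_event(struct tablet *tablet, uint64_t now)
{
	/* first event after silence: force a proximity in */
	if (!tablet->in_proximity) {
		queue_proximity_in();
		tablet->in_proximity = true;
	}
	tablet->last_event_time = now;
}

static void
timer_expired(struct tablet *tablet, uint64_t now)
{
	/* no events for a while: force a proximity out */
	if (tablet->in_proximity &&
	    now - tablet->last_event_time >= PROX_OUT_TIMEOUT_MS) {
		queue_proximity_out();
		tablet->in_proximity = false;
	}
}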

Some tablets (HP Spectre 13) have proximity but never send it. So they advertise the capability, just don't generate events for it. Same handling as the ones that don't have proximity at all.

Some tablets (HUION) have proximity, but only send it once per plug-in, after that it's always in proximity. Since libinput may start after the first pen interaction, this means we have to a) query the initial state of the device and b) force proximity in/out based on the timer, just like above.

Some tablets (Lenovo Flex 5) sometimes send proximity out events, but sometimes do not. So for those we have a timer and forced proximity events, but only when our last interaction didn't trigger a proximity event.

The Dell Active Pen always sends a proximity out event, but with a delay of ~200ms. That timeout is longer than the libinput timeout so we'll get a proximity out event, but only after we've already forced proximity out. We can just discard that event.

The Dell Canvas pen (identifies as "Wacom HID 4831 Pen") can have random delays of up to ~800ms in its event reporting. Which would trigger forced proximity out events in libinput. Luckily it always sends proximity out events, so we can use a quirk to specifically disable the timer for it.

The HP Envy x360 sends a proximity in for the pen, followed by a proximity in from the eraser in the next event. This is still an unresolved issue at the time of writing.

That's the current state of things; I'm sure it'll change again in a few months' time as more devices decide to be creative. They are artists' tools after all.

The lesson to take away here: all of the above are special cases that need to be implemented but this can only be done on demand. There's no way any one person can test every single device out there and testing by vendors is often nonexistent. So if you want your device to work, don't complain on some random forum, file a bug and help with debugging and testing instead.

libinput and the Dell Canvas Totem

We're on the road to he^libinput 1.14 and last week I merged the Dell Canvas Totem support. "Wait, what?" I hear you ask, and "What is that?". Good question - but do pay attention to random press releases more. The Totem (Dell.com) is a round knob that can be placed on the Dell Canvas. Which itself is a pen and touch device, not unlike the Wacom Cintiq range if you're familiar with those (if not, there's always lmgtfy).

The totem's intended use is as a secondary device - you place it on the screen while you're using the pen and up pops a radial menu. You can rotate the totem to select items, click it to select something and bang, you're smiling like a stock photo model eating lettuce. The radial menu is just an example UI, there are plenty of others. I remember reading papers about bimanual interaction with similar interfaces that dated back to the 80s, so there's a plethora to choose from. I'm sure someone at Dell has written Totem-Pong and if they have not, I really question their job priorities. The technical side is quite simple: the totem triggers a set of touches in a specific configuration, and when the firmware detects that arrangement it knows this isn't a finger but the totem.

Pen and touch we already handle well, but the totem required kernel changes and a few new interfaces in libinput. And that was the easy part, the actual UI bits will be nasty.

The kernel changes went into 4.19 and as usual you can throw noises of gratitude at Benjamin Tissoires. The new kernel API basically boils down to the ABS_MT_TOOL_TYPE axis sending MT_TOOL_DIAL whenever the totem is detected. That axis is (like others of the ABS_MT range) an odd one out. It doesn't work as an axis but rather as an enum that specifies the tool within the current slot. We already had finger, pen and palm; adding another enum value means, well, now we have a "dial". And that's largely it in terms of API - handle MT_TOOL_DIAL and you're good to go.
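
On the event stream side that boils down to something like this sketch (MT_TOOL_DIAL needs kernel headers from 4.19 or later):

/* sketch: spot the totem via ABS_MT_TOOL_TYPE in an MT event stream */
#include <stdio.h>
#include <linux/input.h>

static void
handle_abs_event(const struct input_event *ev)
{
	if (ev->type != EV_ABS)
		return;

	switch (ev->code) {
	case ABS_MT_TOOL_TYPE:
		if (ev->value == MT_TOOL_DIAL)
			printf("this slot is the totem, not a finger\n");
		break;
	case ABS_MT_POSITION_X:
	case ABS_MT_POSITION_Y:
		/* positions are handled like any other touch */
		break;
	}
}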

libinput's API is only slightly more complicated. The tablet interface has a new tool type called the LIBINPUT_TABLET_TOOL_TYPE_TOTEM and a new pair of axes for the tool, the size of the touch ellipse. With that you can get the position of the totem and the size (so you know how big the radial menu needs to be). And that's basically it in regards to the API. The actual implementation was a bit more involved, especially because we needed to implement location-based touch arbitration first.
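
A consumer of that API might end up with something like the sketch below. The tool type enum is the one mentioned above; the exact names of the size getters are from memory, so check the libinput documentation before copying this:

/* sketch of handling the totem through libinput's tablet interface;
 * the size getter names are from memory, double-check the docs */
#include <stdio.h>
#include <libinput.h>

static void
handle_tablet_tool_event(struct libinput_event *event)
{
	struct libinput_event_tablet_tool *tev =
		libinput_event_get_tablet_tool_event(event);
	struct libinput_tablet_tool *tool =
		libinput_event_tablet_tool_get_tool(tev);

	if (libinput_tablet_tool_get_type(tool) !=
	    LIBINPUT_TABLET_TOOL_TYPE_TOTEM)
		return;

	/* position plus the size of the touch ellipse, i.e. how big the
	 * radial menu needs to be */
	printf("totem at %.2f/%.2f, size %.2fmm x %.2fmm\n",
	       libinput_event_tablet_tool_get_x(tev),
	       libinput_event_tablet_tool_get_y(tev),
	       libinput_event_tablet_tool_get_size_major(tev),
	       libinput_event_tablet_tool_get_size_minor(tev));
}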

I haven't started on the Wayland protocol additions yet but I suspect they'll look the same as the libinput API (the Wayland tablet protocol is itself virtually identical to the libinput API). The really big changes will of course be in the toolkits and the applications themselves. The totem is not a device that slots into existing UI paradigms, it requires dedicated support. Whether this will be available in your favourite application is likely going to be up to you. Anyway, christmas in July [1] is coming up so now you know what to put on your wishlist.

[1] yes, that's a thing. Apparently christmas with summery temperature, nice weather, sandy beaches is so unbearable that you have to re-create it in the misery of winter. Explains everything you need to know about humans, really.

Thursday, March 21, 2019

Using hexdump to print binary protocols

I had to work on an image yesterday where I couldn't install anything and the set of pre-installed tools was quite limited. And I needed to debug an input device, which is usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes, but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is a single-quote-enclosed string that contains the count, the element size and a double-quote-enclosed printf-like format string. So a simple example is this:

$ hexdump -v -e '1/2 "%d\n"' 
-11643
23698
0
0
-5013
6
0
0

This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.

$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"' 
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0

This prints the same 2-byte input value, once as decimal signed integer, once as lowercase hex. If we have multiple identical things to print, we can do this:

$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"'
-10922  23698 hex: 56 d5 92 5c
     0      0 hex: 0 0 0 0
 14879      1 hex: 1f 3a 1 0
     0      0 hex: 0 0 0 0
     0      0 hex: 0 0 0 0
     0      0 hex: 0 0 0 0

Which prints two elements, each of size 2, as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf instruction and documented in the manual.

Let's go and print our protocol. The struct representing the protocol is this one:

struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
        struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
        __kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
        unsigned int __usec;
#else
        __kernel_ulong_t __usec;
#endif
#define input_event_sec  __sec
#define input_event_usec __usec
#endif
        __u16 type;
        __u16 code;
        __s32 value;
};

So we have two longs for sec and usec, two shorts for type and code and one signed 32-bit int. Let's print it:

$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22 
E: 1553127085.097503 0002 0000    1
E: 1553127085.097503 0002 0001   -1
E: 1553127085.097503 0000 0000    0
E: 1553127085.097542 0002 0001   -2
E: 1553127085.097542 0000 0000    0
E: 1553127085.108741 0002 0001   -4
E: 1553127085.108741 0000 0000    0
E: 1553127085.118211 0002 0000    2
E: 1553127085.118211 0002 0001  -10
E: 1553127085.118211 0000 0000    0
E: 1553127085.128245 0002 0000    1

And voila, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.

Friday, March 15, 2019

libinput's internal building blocks

Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeers to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly what libinput's architecture looks like. It'll be to libinput what a Duplo car is to a Maserati. Four wheels and something to entertain the kids with but the queue outside the nightclub won't be impressed.

The target audience is those who need to hack on libinput and for whom the balance of understanding vs total confusion is still shifted towards the latter. So in order to make it easier to associate various bits, here's a description of the main building blocks.

libinput uses something resembling OOP except that in C you can't have nice things unless what you want is a buffer overflow\n\80xb1001af81a2b1101. Instead, we use opaque structs, each with accessor methods and an unhealthy amount of verbosity. Because Python does have classes, those structs are represented as classes below. This all won't be actual working Python code, I'm just using the syntax.

Let's get started. First of all, let's create our library interface.

class Libinput:
   @classmethod
   def path_create_context(cls):
        return _LibinputPathContext()

   @classmethod
   def udev_create_context(cls):
       return _LibinputUdevContext()

   # dispatch() means: read from all our internal fds and
   # call the dispatch method on anything that has changed
   def dispatch(self):
        for fd in self.epoll_fd.get_changed_fds():
           self.handlers[fd].dispatch()

   # return whatever the next event is
   def get_event(self):
        return self._events.pop(0)

   # the various _notify functions are internal API
   # to pass things up to the context
   def _notify_device_added(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.append(device)

   def _notify_device_removed(self, device):
        self._events.append(LibinputEventDevice(device))
        self._devices.remove(device)

   def _notify_pointer_motion(self, x, y):
        self._events.append(LibinputEventPointer(x, y))



class _LibinputPathContext(Libinput):
   def add_device(self, device_node):
       device = LibinputDevice(device_node)
       self._notify_device_added(device)

   def remove_device(self, device_node):
       device = self.device_for_node(device_node)  # hypothetical lookup
       self._notify_device_removed(device)


class _LibinputUdevContext(Libinput):
   def __init__(self):
       self.udev = udev.context()

   def udev_assign_seat(self, seat_id):
       self.seat_id = seat_id

       for udev_device in self.udev.devices():
          device = LibinputDevice(udev_device.device_node)
          self._notify_device_added(device)


We have two different modes of initialisation, udev and path. The udev interface is used by Wayland compositors and adds all devices on the given udev seat. The path interface is used by the X.Org driver and adds only one specific device at a time. Both interfaces have the dispatch() and get_event() methods, which is how every caller gets events out of libinput.
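
For reference, the typical caller loop against the real C API looks roughly like this (a sketch; error handling and the poll() on libinput_get_fd() are omitted):

/* sketch of the caller side of dispatch()/get_event() in the real C API */
#include <libinput.h>

static void
event_loop(struct libinput *li)
{
	struct libinput_event *event;

	libinput_dispatch(li);
	while ((event = libinput_get_event(li)) != NULL) {
		switch (libinput_event_get_type(event)) {
		case LIBINPUT_EVENT_DEVICE_ADDED:
			/* a new device bubbled up into the event queue */
			break;
		case LIBINPUT_EVENT_POINTER_MOTION:
			/* pointer motion, scrolling, etc. are handled here */
			break;
		default:
			break;
		}
		libinput_event_destroy(event);
	}
}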

In both cases we create a libinput device from the data and create an event about the new device that bubbles up into the event queue.

But what really are events? Are they real or just a fidget spinner of our imagination? Well, they're just another object in libinput.

class LibinputEvent:
     @property
     def type(self):
        return self._type

     @property
     def context(self):
         return self._libinput
       
     @property
     def device(self):
        return self._device

     def get_pointer_event(self):
        if isinstance(self, LibinputEventPointer):
            return self  # This makes more sense in C where it's a typecast
        return None

     def get_keyboard_event(self):
        if isinstance(self, LibinputEventKeyboard):
            return self  # This makes more sense in C where it's a typecast
        return None


class LibinputEventPointer(LibinputEvent):
     @property
      def time(self):
        return self._time/1000

     @property
      def time_usec(self):
        return self._time

     @property
      def dx(self):
        return self._dx

     @property
     def absolute_x(self):
        return self._x / self._x_units_per_mm

      def absolute_x_transformed(self, width):
         return self._x * width / self._x_max_value

You get the gist. Each event is actually an event of a subtype with a few common shared fields and a bunch of type-specific ones. The events often contain some internal value that is calculated on request. For example, the API for the absolute x/y values returns mm, but we store the value in device units instead and convert to mm on request.

So, what's a device then? Well, just another I-cant-believe-this-is-not-a-class with relatively few surprises:

class LibinputDevice:
   class Capability(Enum):
       CAP_KEYBOARD = 0
       CAP_POINTER  = 1
       CAP_TOUCH    = 2
       ...

   def __init__(self, device_node):
      pass  # no-one instantiates this directly

   @property
   def name(self):
      return self._name

   @property
   def context(self):
      return self._libinput_context

   @property
   def udev_device(self):
      return self._udev_device

    def has_capability(self, cap):
      return cap in self._capabilities

   ...

Now we have most of the frontend API in place and you start to see a pattern. This is how all of libinput's API works: you get some opaque read-only objects with a few getters and accessor functions.

Now let's figure out how to work on the backend. For that, we need something that handles events:

class EvdevDevice(LibinputDevice):
    def __init__(self, device_node):
        self.fd = open(device_node)
        super().context.add_fd_to_epoll(self.fd, self.dispatch)
       self.initialize_quirks()

    def has_quirk(self, quirk):
        return quirk in self.quirks

    def dispatch(self):
       while True:
          data = self.fd.read(input_event_byte_count)
          if not data:
             break

          self.interface.dispatch_one_event(data)

    def _configure(self):
       # some devices are adjusted for quirks before we 
       # do anything with them
       if self.has_quirk(SOME_QUIRK_NAME):
           self.libevdev.disable(libevdev.EV_KEY.BTN_TOUCH)


       if 'ID_INPUT_TOUCHPAD' in self.udev_device.properties:
          self.interface = EvdevTouchpad()
       elif 'ID_INPUT_SWITCH' in self.udev_device.properties:
          self.interface = EvdevSwitch()
       ...
       else:
          self.interface = EvdevFallback()


class EvdevInterface:
    def dispatch_one_event(self, event):
        pass

class EvdevTouchpad(EvdevInterface):
    def dispatch_one_event(self, event):
        ...

class EvdevTablet(EvdevInterface):
    def dispatch_one_event(self, event):
        ...


class EvdevSwitch(EvdevInterface):
    def dispatch_one_event(self, event):
        ...

class EvdevFallback(EvdevInterface):
    def dispatch_one_event(self, event):
        ...

Our evdev device is actually a subclass (well, C, *handwave*) of the public device and its main function is "read things off the device node". And it passes that on to a magical interface. Other than that, it's a collection of generic functions that apply to all devices. The interfaces are where most of the real work is done.

The interface is decided on by the udev type and is where the device-specifics happen. The touchpad interface deals with touchpads, the tablet and switch interfaces with those devices, and the fallback interface is the one for mice, keyboards and touch devices (i.e. the simple devices).

Each interface has very device-specific event processing and can be compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.

The device quirks used above are another simple block:

class Quirks:
   def __init__(self):
       self.read_all_ini_files_from_directory('$PREFIX/share/libinput')

   def has_quirk(self, device, quirk):
       for file in self.quirks:
          if (quirk.has_match(device.name) or
              quirk.has_match(device.usbid) or
              quirk.has_match(device.dmi)):
             return True
       return False

   def get_quirk_value(self, device, quirk):
       if not self.has_quirk(device, quirk):
           return None

       quirk = self.lookup_quirk(device, quirk)
       if quirk.type == "boolean":
           return bool(quirk.value)
       if quirk.type == "string":
           return str(quirk.value)
       ...

A system that reads a bunch of .ini files, caches them and returns their values on demand. Those quirks are then used to adjust device behaviour at runtime.

The next building block is the "filter" code, which is the word we use for pointer acceleration. Here too we have a two-layer abstraction with an interface.

class Filter:
   def dispatch(self, x, y):
      # converts device-unit x/y into normalized units
      return self.interface.dispatch(x, y)

   # the 'accel speed' configuration value
   def set_speed(self, speed):
       return self.interface.set_speed(speed)

   # the 'accel speed' configuration value
   def get_speed(self):
       return self.speed

   ...


class FilterInterface:
   def dispatch(self, x, y):
       pass

class FilterInterfaceTouchpad(FilterInterface):
   def dispatch(self, x, y):
       ...
       
class FilterInterfaceTrackpoint(FilterInterface):
   def dispatch(self, x, y):
       ...

class FilterInterfaceMouse(FilterInterface):
   def dispatch(self, x, y):
      self.history.push((x, y))
      v = self.calculate_velocity()
      f = self.calculate_factor(v)
      return (x * f, y * f)

   def calculate_velocity(self):
      total = 0
      for delta in self.history:
          total += delta
      return total/timestamp  # as illustration only

   def calculate_factor(self, v):
      # this is where the interesting bit happens,
      # let's assume we have some magic function
      f = v * 1234/5678
      return f

So libinput calls filter_dispatch on whatever filter is configured and passes the result on to the caller. The setup of those filters is handled in the respective evdev interface, similar to this:

class EvdevFallback:
    ...
    def init_accel(self):
         if self.udev_type == 'ID_INPUT_TRACKPOINT':
             self.filter = FilterInterfaceTrackpoint()
         elif self.udev_type == 'ID_INPUT_TOUCHPAD':
             self.filter = FilterInterfaceTouchpad()
         ...

The advantage of this system is twofold. First, the main libinput code only needs one place where we really care about which acceleration method we have. And second, the acceleration code can be compiled separately for analysis and to generate pretty graphs. See the pointer acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.

Finally, we have one more building block - configuration options. They're a bit different in that they're all similar-ish, but only similar enough to make switching from one to the next a bit easier.

class DeviceConfigTap:
    def set_enabled(self, enabled):
       self._enabled = enabled

    def get_enabled(self):
        return self._enabled

    def get_default(self):
        return False

class DeviceConfigCalibration:
    def set_matrix(self, matrix):
       self._matrix = matrix

    def get_matrix(self):
        return self._matrix

    def get_default(self):
        return [1, 0, 0, 0, 1, 0, 0, 0, 1]

And then the devices that need one of those slot them into the right pointer in their structs:

class EvdevFallback:
   ...
   def init_calibration(self):
      self.config_calibration = DeviceConfigCalibration()
      ...

   def handle_touch(self, x, y):
       if self.config_calibration is not None:
           matrix = self.config_calibration.get_matrix()
           x, y = matrix.multiply(x, y)

       self.context._notify_pointer_abs(x, y)

And that's basically it, those are the building blocks libinput has. The rest is detail. Lots of it, but if you understand the architecture outline above, you're most of the way there in diving into the details.